\\section{Introduction}\n\nAn $n \\times n$ matrix $M$ over a field $\\Fset$ is said to {\\em represent} a digraph $G=(V,E)$ with vertex set $V = \\{1,2,\\ldots,n\\}$ if $M_{i,i} \\neq 0$ for every $i$, and $M_{i,j}=0$ for every distinct $i,j$ such that $(i,j) \\notin E$. The {\\em minrank} of $G$ over $\\Fset$, denoted ${\\mathop{\\mathrm{minrk}}}_\\Fset(G)$, is the minimum possible rank of a matrix $M \\in \\Fset^{n \\times n}$ representing $G$. The definition is naturally extended to (undirected) graphs by replacing every edge with two oppositely directed edges.\nIt is easy to see that for every graph $G$ the minrank parameter is sandwiched between the independence number and the clique cover number, that is, $\\alpha(G) \\leq {\\mathop{\\mathrm{minrk}}}_\\Fset(G) \\leq \\chi(\\overline{G})$.\nFor example, ${\\mathop{\\mathrm{minrk}}}_\\Fset(K_n)=1$ and ${\\mathop{\\mathrm{minrk}}}_\\Fset(\\overline{K_n})=n$ for every field $\\Fset$.\nThe minrank parameter was introduced by Haemers in 1979~\\cite{Haemers79}, and since then has attracted significant attention motivated by its various applications in information theory and in theoretical computer science (see, e.g.,~\\cite{Haemers81,BBJK06,Valiant92,Riis07,PudlakRS97,HavivL13,ChlamtacH14}).\n\nIn this work we address the extremal behavior of the minrank parameter of $n$-vertex graphs whose complements are free of a fixed forbidden subgraph.\nFor two graphs $G$ and $H$, we say that $G$ is {\\em $H$-free} if $G$ contains no subgraph, induced or not, isomorphic to $H$.\nFor an integer $n$, a graph $H$, and a field $\\Fset$, let $g(n,H,\\Fset)$ denote the maximum of ${\\mathop{\\mathrm{minrk}}}_\\Fset(G)$ taken 
over all $n$-vertex graphs $G$ whose complement $\\overline{G}$ is $H$-free.\nOur purpose is to study the quantity $g(n,H,\\Fset)$ where $H$ and $\\Fset$ are fixed and $n$ is growing.\n\n\\subsection{Our Contribution}\n\nWe provide bounds on $g(n,H,\\Fset)$ for various graph families and fields.\nWe start with a simple upper bound for a forest $H$.\n\n\\begin{proposition}\\label{prop:forestIntro}\nFor every integer $n$, a field $\\Fset$, and a nontrivial forest $H$ on $h$ vertices,\n\\[g(n,H,\\Fset) \\leq h-1.\\]\nEquality holds whenever $H$ is a tree and $n \\geq h-1$.\n\\end{proposition}\n\nWe next provide a general lower bound on $g(n,H,\\Fset)$ for a graph $H$ and a finite field $\\Fset$.\nTo state it, we need the following notation.\nFor a graph $H$ with $h \\geq 3$ vertices and $f \\geq 3$ edges define $\\gamma(H) = \\frac{h-2}{f-1}$ and $\\gamma_0(H) = \\min_{H'}{\\gamma(H')}$, where the minimum is taken over all subgraphs $H'$ of $H$ with at least $3$ edges.\n\n\\begin{theorem}\\label{thm:IntroComp}\nFor every graph $H$ with at least $3$ edges there exists $c=c(H)>0$ such that for every integer $n$ and a finite field $\\Fset$,\n\\[g(n,H,\\Fset) \\geq c \\cdot \\frac{n^{1-\\gamma_0(H)}}{\\log (n \\cdot |\\Fset|)} .\\]\n\\end{theorem}\n\nNote that for every finite field $\\Fset$, the quantity $g(n,H,\\Fset)$ grows with $n$ if and only if $H$ is not a forest.\nIndeed, if $H$ is a forest then $g(n,H,\\Fset)$ is bounded by some constant by Proposition~\\ref{prop:forestIntro}, whereas otherwise $H$ satisfies $\\gamma_0(H)<1$ and thus, by Theorem~\\ref{thm:IntroComp}, $g(n,H,\\Fset) \\geq \\Omega(n^\\delta)$ for some $\\delta = \\delta(H)>0$.\nNote further that for the case $H=K_3$, which is motivated by a question in information theory (see Section~\\ref{sec:applications}),\nTheorem~\\ref{thm:IntroComp} implies that\n\\begin{eqnarray}\\label{eq:K_3}\ng(n,K_3,\\Fset) \\geq \\Omega \\Big ( \\frac{\\sqrt{n}}{\\log n} \\Big )\n\\end{eqnarray}\nfor every fixed finite 
field $\\Fset$.\nThis is tight up to a $\\sqrt{\\log n}$ multiplicative term (see Proposition~\\ref{prop:K_3}).\n\nTheorem~\\ref{thm:IntroComp} is proved by a probabilistic argument based on the Lov\\'asz Local Lemma~\\cite{LLL75}.\nThe proof involves an approach of Spencer~\\cite{Spencer77} to lower bounds on off-diagonal Ramsey numbers and a technique of Golovnev, Regev, and Weinstein~\\cite{Golovnev0W17} for estimating the minrank of random graphs.\n\nAs our final result, we show that for every non-bipartite graph $H$ there are $H$-free graphs with low minrank over the real field $\\mathbb{R}$.\n\n\\begin{theorem}\\label{thm:IntroNonBi}\nFor every non-bipartite graph $H$ there exists $\\delta=\\delta(H)>0$ such that for every sufficiently large integer $n$, there exists an $n$-vertex $H$-free graph $G$ such that ${\\mathop{\\mathrm{minrk}}}_\\mathbb{R}(G) \\leq n^{1-\\delta}$.\n\\end{theorem}\n\\noindent\nThis theorem is proved by an explicit construction from the family of generalized Kneser graphs, whose minrank was recently studied in~\\cite{Haviv18}.\nIt is known that every $n$-vertex graph $G$ satisfies\n\\begin{eqnarray}\\label{eq:minrk_comp}\n{\\mathop{\\mathrm{minrk}}}_\\Fset(G) \\cdot {\\mathop{\\mathrm{minrk}}}_\\Fset(\\overline{G}) \\geq n\n\\end{eqnarray}\nfor every field $\\Fset$ (see, e.g.,~\\cite[Remark~2.2]{Peeters96}).\nThis combined with the graphs given in Theorem~\\ref{thm:IntroNonBi} implies the following (explicit) lower bound on $g(n,H,\\mathbb{R})$ for non-bipartite graphs $H$.\n\n\\begin{corollary}\\label{cor:IntroNonBi}\nFor every non-bipartite graph $H$ there exists $\\delta=\\delta(H)>0$ such that for every sufficiently large integer $n$,\n$g(n,H,\\mathbb{R}) \\geq n^{\\delta}$.\n\\end{corollary}\n\\noindent\nAs another application of Theorem~\\ref{thm:IntroNonBi}, we disprove a conjecture of Codenotti, Pudl\\'ak, and Resta~\\cite{CodenottiPR00} motivated by Valiant's approach to circuit lower bounds~\\cite{Valiant77} (see 
Section~\\ref{sec:applications}).\n\n\n\\subsection{Applications}\\label{sec:applications}\n\nThe study of the quantity $g(n,H,\\Fset)$ is motivated by questions in information theory, circuit complexity, and geometry.\nWe gather here several applications of our results.\n\n\\paragraph{Shannon Capacity.}\nFor an integer $k$ and a graph $G$ on the vertex set $V$, let $G^k$ denote the graph on the vertex set $V^k$ in which two distinct vertices $(u_1,\\ldots,u_k)$ and $(v_1,\\ldots,v_k)$ are adjacent if for every $1 \\leq i \\leq k$ it holds that $u_i$ and $v_i$ are either equal or adjacent in $G$.\nThe Shannon capacity of a graph $G$, introduced by Shannon in 1956~\\cite{Shannon56}, is defined as the limit $c(G) = \\lim_{k \\rightarrow \\infty}{(\\alpha(G^k))^{1\/k}}$.\nThis graph parameter is motivated by information theory, as it measures the zero-error capacity of a noisy communication channel represented by $G$.\nAn upper bound on $c(G)$, known as the Lov\\'asz $\\vartheta$-function, was introduced in~\\cite{Lovasz79}, where it was used to show that the Shannon capacity of the cycle on $5$ vertices satisfies $c(C_5)=\\sqrt{5}$, whereas its independence number is $2$.\nHaemers introduced the minrank parameter in~\\cite{Haemers79,Haemers81} and showed that it forms another upper bound on $c(G)$ and that for certain graphs it is tighter than the $\\vartheta$-function.\nIn general, computing the Shannon capacity of a graph seems to be a very difficult task, and its exact value is not known even for small graphs such as the cycle on $7$ vertices.\n\nThe question of determining the largest possible Shannon capacity of a graph with a given independence number is wide open.\nIn fact, it is not even known if the Shannon capacity of a graph with independence number $2$ can be arbitrarily large~\\cite{AlonPowers02}.\nInterestingly, Erd\\H{o}s, McEliece, and Taylor~\\cite{ErdosMT71} have shown that this question is closely related to determining an appropriate 
multicolored Ramsey number, whose study in~\\cite{XiaodongZER04} implies that there exists a graph $G$ with $\\alpha(G)= 2$ and $c(G)> 3.199$.\nA related question, originally asked by Lov\\'asz, is that of determining the maximum possible $\\vartheta$-function of an $n$-vertex graph with independence number $2$. This maximum is known to be $\\Theta(n^{1\/3})$, where the upper bound was proved by Kashin and Konyagin~\\cite{KasKon81,Kon81}, and the lower bound was proved by Alon~\\cite{Alon94} via an explicit construction.\nHere we consider the analogous question of determining the maximum possible minrank, over any fixed finite field $\\Fset$, of an $n$-vertex graph with independence number $2$.\nSince the latter is precisely $g(n,K_3,\\Fset)$, our bound in~\\eqref{eq:K_3} implies that the minrank parameter is weaker than the $\\vartheta$-function with respect to the general upper bounds that they provide on the Shannon capacity of $n$-vertex graphs with independence number $2$.\n\n\n\\paragraph{The Odd Alternating Cycle Conjecture.}\nIn 1977, Valiant~\\cite{Valiant77} proposed the matrix rigidity approach for proving superlinear circuit lower bounds, a major challenge in the area of circuit complexity.\nRoughly speaking, the rigidity of a matrix $M \\in \\Fset^{n \\times n}$ for a constant $\\epsilon>0$ is the minimum number of entries that one has to change in $M$ in order to reduce its rank over $\\Fset$ to at most $\\epsilon \\cdot n$. 
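The rigidity notion just defined can be made concrete on toy examples. The following is a minimal brute-force sketch over GF(2), where changing an entry simply means flipping a bit; the helper names `rank_gf2` and `rigidity_gf2` are ours, and the exhaustive search is exponential, so it only serves to illustrate the definition on tiny matrices.

```python
from itertools import combinations

def rank_gf2(M):
    """Rank of a 0/1 matrix over GF(2), by Gaussian elimination with XOR."""
    M = [row[:] for row in M]
    n, m, rank = len(M), len(M[0]), 0
    for col in range(m):
        pivot = next((r for r in range(rank, n) if M[r][col]), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        for r in range(n):
            if r != rank and M[r][col]:
                M[r] = [a ^ b for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

def rigidity_gf2(M, target_rank):
    """Minimum number of entry flips bringing the GF(2) rank of M down to
    at most target_rank (exhaustive search; feasible for tiny matrices only)."""
    n, m = len(M), len(M[0])
    cells = [(i, j) for i in range(n) for j in range(m)]
    for t in range(n * m + 1):
        for flips in combinations(cells, t):
            N = [row[:] for row in M]
            for i, j in flips:
                N[i][j] ^= 1
            if rank_gf2(N) <= target_rank:
                return t
```

For instance, for the $3 \times 3$ identity matrix, two flips suffice (and are needed) to reach rank $1$: zero out two of the three diagonal entries.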
Valiant showed in~\\cite{Valiant77} that matrices with large rigidity can be used to obtain superlinear lower bounds on the size of logarithmic depth arithmetic circuits computing linear transformations.\nWith this motivation, Codenotti, Pudl\\'ak, and Resta~\\cite{CodenottiPR00} raised in the late nineties the Odd Alternating Cycle Conjecture stated below, and proved that it implies, if true, that certain explicit circulant matrices have superlinear rigidity.\nBy an alternating odd cycle we refer to a digraph which forms a cycle when the orientation of the edges is ignored, and such that the orientation of the edges alternates with one exception.\n\\begin{conjecture}[The Odd Alternating Cycle Conjecture~\\cite{CodenottiPR00}]\\label{conj:alternating}\nFor every field $\\Fset$ there exist $\\epsilon >0$ and an odd integer $\\ell$ such that every $n$-vertex digraph $G$ with ${\\mathop{\\mathrm{minrk}}}_\\Fset (G) \\leq \\epsilon \\cdot n$ contains an alternating cycle of length $\\ell$.\n\\end{conjecture}\n\nCodenotti et al.~\\cite{CodenottiPR00} proved that the statement of Conjecture~\\ref{conj:alternating} does not hold for $\\ell=3$ over any field $\\Fset$. Specifically, they provided an explicit construction of $n$-vertex digraphs $G$, free of alternating triangles, with ${\\mathop{\\mathrm{minrk}}}_\\Fset (G) \\leq O(n^{2\/3})$ for every field $\\Fset$. For the undirected case, which is of more interest to us, a construction of~\\cite{CodenottiPR00} implies that there are $n$-vertex triangle-free graphs $G$ such that ${\\mathop{\\mathrm{minrk}}}_\\Fset(G) \\leq O(n^{3\/4})$ for every field $\\Fset$ (see~\\cite[Section~4.2]{BlasiakKL13} for a related construction over the binary field as well as for an application of such graphs from the area of index coding). 
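Minrank values of small graphs can be verified exhaustively, which is a useful sanity check on bounds such as the above. The sketch below works over GF(2) with helper names of our own choosing; it enumerates all representing matrices, so it is feasible only for graphs with very few edges.

```python
from itertools import product

def rank_gf2(M):
    """Rank of a 0/1 matrix over GF(2) via Gaussian elimination."""
    M = [row[:] for row in M]
    n, m, rank = len(M), len(M[0]), 0
    for col in range(m):
        pivot = next((r for r in range(rank, n) if M[r][col]), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        for r in range(n):
            if r != rank and M[r][col]:
                M[r] = [a ^ b for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

def minrank_gf2(n, edges):
    """Exhaustive minrank of an undirected graph over GF(2): the diagonal is
    forced to 1, non-edge off-diagonal entries to 0, and the two directed
    entries of each edge range over all 0/1 values."""
    edge_set = {frozenset(e) for e in edges}
    free = [(i, j) for i in range(n) for j in range(n)
            if i != j and frozenset((i, j)) in edge_set]
    best = n
    for vals in product([0, 1], repeat=len(free)):
        M = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
        for (i, j), v in zip(free, vals):
            M[i][j] = v
        best = min(best, rank_gf2(M))
    return best
```

For the $5$-cycle this search confirms the known value ${\mathop{\mathrm{minrk}}}_{\mathbb{F}_2}(C_5)=3$, sitting strictly between $\alpha(C_5)=2$ and matching $\chi(\overline{C_5})=3$.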
Note that this yields, by~\\eqref{eq:minrk_comp}, that $g(n,K_3,\\Fset) \\geq \\Omega(n^{1\/4})$.\nIn contrast, for the real field and the cycle on $4$ vertices, it was shown in~\\cite{CodenottiPR00} that every $n$-vertex $C_4$-free graph $G$ satisfies ${\\mathop{\\mathrm{minrk}}}_\\mathbb{R} (G) > \\frac{n}{6}$.\nYet, the question whether every $n$-vertex digraph with sublinear minrank contains an alternating cycle of odd length $\\ell \\geq 5$ was left open in~\\cite{CodenottiPR00} for every field.\nOur Theorem~\\ref{thm:IntroNonBi} implies that for every odd $\\ell$ there are (undirected) $C_\\ell$-free graphs $G$ with sublinear ${\\mathop{\\mathrm{minrk}}}_\\mathbb{R}(G)$, and in particular disproves Conjecture~\\ref{conj:alternating} for the real field $\\mathbb{R}$.\n\n\\paragraph{Nearly Orthogonal Systems of Vectors.}\nA system of nonzero vectors in $\\mathbb{R}^m$ is said to be nearly orthogonal if any set of three vectors of the system contains an orthogonal pair.\nIt was proved by Rosenfeld~\\cite{Rosenfeld91} that every such system has size at most $2m$.\nAn equivalent way to state this, is that every $n$-vertex graph represented by a real positive semidefinite matrix of rank smaller than $\\frac{n}{2}$ contains a triangle.\nNote that the positive semidefiniteness assumption is essential in this result, as follows from the aforementioned construction of~\\cite{CodenottiPR00} of $n$-vertex triangle-free graphs $G$ with ${\\mathop{\\mathrm{minrk}}}_\\mathbb{R}(G) \\leq O(n^{3\/4})$.\n\nA related question was posed by Pudl\\'ak in~\\cite{Pudlak02}.\nHe proved there that for some $\\epsilon >0$, every $n$-vertex graph represented by a real positive semidefinite matrix of rank at most $\\epsilon \\cdot n$ contains a cycle of length $5$. 
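Rosenfeld's $2m$ bound mentioned above is attained by the system $\{\pm e_1,\ldots,\pm e_m\}$: any three of these vectors include two with distinct indices, hence an orthogonal pair. A short sketch checking the nearly-orthogonal property (the function name is ours):

```python
from itertools import combinations

def nearly_orthogonal(vectors):
    """True if every three vectors of the system contain an orthogonal pair."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    return all(any(dot(u, v) == 0 for u, v in combinations(triple, 2))
               for triple in combinations(vectors, 3))

m = 3
basis = [[1 if i == j else 0 for j in range(m)] for i in range(m)]
system = basis + [[-x for x in v] for v in basis]  # the 2m vectors +/- e_i
```

Appending one more vector such as $(1,1,1)$ destroys the property, witnessed by the triple $\{e_1,-e_1,(1,1,1)\}$, in which no pair is orthogonal.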
Pudl\\'ak asked whether the assumption that the matrix is positive semidefinite can be omitted.\nOur Theorem~\\ref{thm:IntroNonBi} applied to $H = C_5$ implies that there are $C_5$-free graphs $G$ with sublinear ${\\mathop{\\mathrm{minrk}}}_\\mathbb{R}(G)$, and thus answers this question in the negative.\n\n\\subsection{Outline}\nThe rest of the paper is organized as follows.\nIn Section~\\ref{sec:forest} we present the simple proof of Proposition~\\ref{prop:forestIntro}.\nIn Section~\\ref{sec:g_comp} we provide some background on sparse-base matrices from~\\cite{Golovnev0W17} and then prove Theorem~\\ref{thm:IntroComp}.\nIn the final Section~\\ref{sec:non-bip}, we prove Theorem~\\ref{thm:IntroNonBi}.\n\n\\section{Forests}\\label{sec:forest}\n\nIn this section we prove Proposition~\\ref{prop:forestIntro}.\nWe use an argument from one of the proofs in~\\cite{AlonKS05}.\n\n\\begin{proof}[Proof of Proposition~\\ref{prop:forestIntro}]\nFix a nontrivial $h$-vertex forest $H$ and a field $\\Fset$.\nIt suffices to consider the case where $H$ is a tree, as otherwise $H$ is a subgraph of some $h$-vertex tree $H'$, and since every $H$-free graph is also $H'$-free, we have $g(n,H,\\Fset) \\leq g(n,H',\\Fset)$.\n\nOur goal is to show that every $n$-vertex graph $G$ whose complement $\\overline{G}$ is $H$-free satisfies ${\\mathop{\\mathrm{minrk}}}_\\Fset(G) \\leq h-1$.\nLet $G$ be such a graph.\nWe claim that $\\overline{G}$ is $(h-2)$-degenerate, that is, every subgraph of $\\overline{G}$ contains a vertex of degree at most $h-2$. Indeed, otherwise $\\overline{G}$ has a subgraph $G'$ all of whose degrees are at least $h-1$, and one can find a copy of $H$ in $G'$ as follows: First identify an arbitrary vertex of $G'$ with an arbitrary vertex of $H$, and then iteratively identify a vertex of $G'$ with a leaf added to the copy of the tree $H$ being constructed. 
The process succeeds since $H$ has $h$ vertices and every vertex of $G'$ has degree at least $h-1$.\nAs is well known, the fact that $\\overline{G}$ is $(h-2)$-degenerate implies that $\\overline{G}$ is $(h-1)$-colorable, so we get that ${\\mathop{\\mathrm{minrk}}}_\\Fset(G) \\leq \\chi(\\overline{G}) \\leq h-1$, as required.\n\nWe finally observe that the bound is tight whenever $H$ is a tree and $n \\geq h-1$.\nIndeed, let $G$ be the $n$-vertex complete $\\lceil \\frac{n}{h-1} \\rceil$-partite graph, which has $h-1$ vertices in each of its parts, except possibly one of them.\nIts complement $\\overline{G}$ is a disjoint union of cliques, each of size at most $h-1$, and is thus $H$-free.\nSince $\\alpha(G) = \\chi(\\overline{G})=h-1$, it follows that ${\\mathop{\\mathrm{minrk}}}_\\Fset(G) = h-1$ for every field $\\Fset$, completing the proof.\n\\end{proof}\n\n\\section{A General Lower Bound on $g(n,H,\\Fset)$}\\label{sec:g_comp}\n\nIn this section we prove Theorem~\\ref{thm:IntroComp} and discuss its tightness for $H=K_3$.\nWe start with some needed preparations.\n\n\\subsection{Lov\\'{a}sz Local Lemma}\n\nThe Lov\\'{a}sz Local Lemma~\\cite{LLL75} stated below is a powerful probabilistic tool in Combinatorics (see, e.g.,~\\cite[Chapter~5]{AlonS16}).\nWe denote by $[N]$ the set of integers from $1$ to $N$.\n\n\\begin{lemma}[Lov\\'{a}sz Local Lemma~\\cite{LLL75}]\\label{lemma:lll}\nLet $A_1,\\ldots, A_N$ be events in an arbitrary probability space.\nA digraph $D = (V,E)$ on the vertex set $V = [N]$ is called a dependency digraph for the events $A_1,\\ldots, A_N$ if for every $i \\in [N]$, the event $A_i$ is mutually independent of the events $A_j$ with $j \\neq i$ and $(i,j) \\notin E$.\nSuppose that $D=(V,E)$ is a dependency digraph for the above events and suppose that there are real numbers $x_1,\\ldots,x_N \\in [0,1)$ such that \\[\\Prob{}{A_i} \\leq x_i \\cdot \\prod_{(i,j) \\in E}{(1-x_j)}\\] for all $i \\in [N]$.\nThen, with positive probability no event $A_i$ 
holds.\n\\end{lemma}\n\n\\subsection{Sparse-base Matrices}\\label{sec:GRW}\n\nHere we review several notions and lemmas due to Golovnev, Regev, and Weinstein~\\cite{Golovnev0W17}.\nFor a matrix $M$ over a field $\\Fset$, let $s(M)$ denote its sparsity, that is, the number of its nonzero entries.\nWe say that a matrix $M$ over $\\Fset$ with rank $k$ contains an $\\ell$-sparse column (row) basis if $M$ contains $k$ linearly independent columns (rows) with a total of at most $\\ell$ nonzero entries.\nWe first state a lemma that provides an upper bound on the number of matrices with sparse column and row bases.\n\n\\begin{lemma}[\\cite{Golovnev0W17}]\\label{lemma:size_M}\nThe number of rank $k$ matrices in $\\Fset^{n \\times n}$ that contain $\\ell$-sparse column and row bases is at most $(n \\cdot |\\Fset|)^{6\\ell}$.\n\\end{lemma}\n\nThe following lemma relates the sparsity of a matrix with nonzero entries on the main diagonal to its rank.\n\n\\begin{lemma}[\\cite{Golovnev0W17}]\\label{lemma:sparsity_M}\nFor every rank $k$ matrix $M \\in \\Fset^{n \\times n}$ with nonzero entries on the main diagonal,\n\\[s(M) \\geq \\frac{n^2}{4k}.\\]\n\\end{lemma}\n\nWe also need the following notion. An {\\em $(n,k,s,\\ell)$-matrix} over a field $\\Fset$ is a matrix in $\\Fset^{n \\times n}$ of rank $k$ and sparsity $s$ that contains $\\ell$-sparse column and row bases and has nonzero entries on the main diagonal. 
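The sparsity bound of Lemma~\ref{lemma:sparsity_M} can be sanity-checked numerically over the reals (the statement holds over any field). Below is a small sketch using NumPy; the function name `check_sparsity_bound` is our own.

```python
import numpy as np

def check_sparsity_bound(M):
    """Verify s(M) >= n^2 / (4k) for a square matrix M with a nonzero
    diagonal, where s(M) counts nonzero entries and k = rank(M)."""
    n = M.shape[0]
    assert (np.diag(M) != 0).all(), "the lemma assumes a nonzero diagonal"
    k = np.linalg.matrix_rank(M)
    s = np.count_nonzero(M)
    return s >= n * n / (4 * k)

# Two extremes both satisfy the bound: the identity (s = n, k = n,
# bound n/4) and the all-ones matrix (s = n^2, k = 1, bound n^2/4).
```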
Note that by Lemma~\\ref{lemma:sparsity_M}, an $(n,k,s,\\ell)$-matrix exists only if $s \\geq \\frac{n^2}{4k}$.\nFor integers $n,k,s'$ and a field $\\Fset$ (which will always be clear from the context), let ${\\cal M}_{n,k}^{(s')}$ be the collection that consists of all $(n',k',s',\\frac{2s'k'}{n'})$-matrices over $\\Fset$ for all $n' \\in [n]$ and $k' \\in [k]$ such that $\\frac{k'}{n'} \\leq \\frac{k}{n}$.\nThis collection is motivated by the following lemma.\n\n\\begin{lemma}[\\cite{Golovnev0W17}]\\label{lemma:M->M'}\nEvery matrix in $\\Fset^{n \\times n}$ with rank at most $k$ and nonzero entries on the main diagonal has a principal sub-matrix that lies in ${\\cal M}_{n,k}^{(s')}$ for some $s'$.\n\\end{lemma}\n\nNow, for integers $n,k,s'$, let ${\\cal P}_{n,k}^{(s')}$ be the collection that consists of all pairs $(M,R)$ such that, for some $n' \\in [n]$, $M$ is an $n' \\times n'$ matrix in ${\\cal M}_{n,k}^{(s')}$ and $R$ is an $n'$-subset of $[n]$.\nObserve that Lemma~\\ref{lemma:M->M'} implies that for every digraph $G$ on the vertex set $[n]$ with ${\\mathop{\\mathrm{minrk}}}_\\Fset(G) \\leq k$ there exist $s'$ and a pair $(M,R)$ in ${\\cal P}_{n,k}^{(s')}$ such that $M$ represents the induced subgraph $G[R]$ of $G$ on $R$, with respect to the natural order of the vertices in $R$ (from smallest to largest).\n\nThe following lemma provides an upper bound on the size of ${\\cal P}_{n,k}^{(s')}$.\n\n\\begin{lemma}\\label{lemma:size_P}\nFor all integers $n,k,s'$, $|{\\cal P}_{n,k}^{(s')}| \\leq (n \\cdot |\\Fset|)^{24s'k\/n}$.\n\\end{lemma}\n\n\\begin{proof}\nTo bound the size of ${\\cal P}_{n,k}^{(s')}$, we consider for every $n' \\in [n]$ and $k' \\in [k]$ such that $\\frac{k'}{n'} \\leq \\frac{k}{n}$ the pairs $(M,R)$ where $M$ is an $(n',k',s',\\frac{2s'k'}{n'})$-matrix and $R$ is an $n'$-subset of $[n]$.\nBy Lemma~\\ref{lemma:size_M} there are at most $(n' \\cdot |\\Fset|)^{12s'k'\/n'}$ such matrices $M$, each of which occurs in ${n \\choose {n'}}$ pairs of 
${\\cal P}_{n,k}^{(s')}$. It follows that\n\\begin{eqnarray*}\n|{\\cal P}_{n,k}^{(s')}| & \\leq & \\sum_{n',k'}{ {n \\choose n'} \\cdot (n' \\cdot |\\Fset|)^{12s'k'\/n'}}\n \\leq n^2 \\cdot \\max_{n',k'} \\big ( n^{n'} \\cdot (n' \\cdot |\\Fset|)^{12s'k'\/n'} \\big ) \\\\\n& \\leq & \\max_{n',k'} \\big ( n^{3n'} \\cdot (n' \\cdot |\\Fset|)^{12s'k'\/n'} \\big )\n \\leq \\max_{n',k'} \\big ( (n \\cdot |\\Fset|)^{3n'+12s'k'\/n'} \\big )\\\\\n& \\leq & \\max_{n',k'} \\big ( (n \\cdot |\\Fset|)^{12s'k'\/n' +12s'k'\/n'} \\big )\n\\leq (n \\cdot |\\Fset|)^{24s'k\/n},\n\\end{eqnarray*}\nwhere in the fifth inequality we have used the relation $s' \\geq \\frac{n'^2}{4k'}$ from Lemma~\\ref{lemma:sparsity_M}, and in the sixth we have used $\\frac{k'}{n'} \\leq \\frac{k}{n}$.\n\\end{proof}\n\n\\subsection{Proof of Theorem~\\ref{thm:IntroComp}}\n\nWe prove the following theorem and then derive Theorem~\\ref{thm:IntroComp}.\nRecall that for a graph $H$ with $h \\geq 3$ vertices and $f \\geq 3$ edges, we denote $\\gamma(H) = \\frac{h-2}{f-1}$.\nWe also let $\\exp(x)$ stand for $e^x$.\n\n\\begin{theorem}\\label{thm:Comp}\nFor every graph $H$ with at least $3$ edges there exists $c=c(H)>0$ such that for every integer $n$ and a finite field $\\Fset$,\n\\[g(n,H,\\Fset) \\geq c \\cdot \\frac{n^{1-\\gamma(H)}}{\\log (n \\cdot |\\Fset|)} .\\]\n\\end{theorem}\n\n\\begin{proof}\nFix a graph $H$ with $h \\geq 3$ vertices and $f \\geq 3$ edges and denote $\\gamma = \\gamma(H) = \\frac{h-2}{f-1} > 0$.\nThe proof is via the probabilistic method. Let $\\vec{G} \\sim \\vec{G}(n,p)$ be a random digraph on the vertex set $[n]$ where each directed edge is taken randomly and independently with probability $p$. Set $q=1-p$.\nLet $G$ be the (undirected) graph on $[n]$ in which two distinct vertices $i,j$ are adjacent if both the directed edges $(i,j)$ and $(j,i)$ are included in $\\vec{G}$. 
Notice that every two distinct vertices are adjacent in $G$ with probability $p^2$ independently of the adjacencies between other vertex pairs.\n\nTo prove the theorem, we will show that for a certain choice of $p$ the random graph $G$ satisfies with positive probability that its complement $\\overline{G}$ is $H$-free and that ${\\mathop{\\mathrm{minrk}}}_{\\Fset}(G) > k$, where\n\\begin{eqnarray}\\label{eq:k}\nk = c_1 \\cdot \\frac{n^{1-\\gamma}}{\\ln{(n \\cdot |\\Fset|)}}\n\\end{eqnarray}\nfor a constant $c_1>0$ that depends only on $H$.\nTo do so, we define two families of events as follows.\n\nFirst, for every set $I \\subseteq [n]$ of size $|I|=h$, let $A_I$ be the event that the induced subgraph of $\\overline{G}$ on $I$ contains a copy of $H$. Observe that\n\\[\\Prob{}{A_I} \\leq h! \\cdot (1-p^2)^f = h! \\cdot (1-(1-q)^2)^f \\leq h! \\cdot (2q)^f.\\]\n\nSecond, consider the collection ${\\cal P} = \\cup_{s' \\in [n^2]}{{\\cal P}_{n,k}^{(s')}}$ (see Section~\\ref{sec:GRW}).\nRecall that every element of ${\\cal P}$ is a pair $(M,R)$ such that, for some $n' \\in [n]$, $M$ is an $n' \\times n'$ matrix over $\\Fset$ and $R$ is an $n'$-subset of $[n]$.\nDenote $N_{s'} = |{\\cal P}_{n,k}^{(s')}|$.\nBy Lemma~\\ref{lemma:size_P}, combined with~\\eqref{eq:k}, we have\n\\begin{eqnarray}\\label{eq:N_s'}\nN_{s'} \\leq (n \\cdot |\\Fset|)^{24s'k\/n} = \\exp(24c_1 \\cdot s' \\cdot n^{-\\gamma}).\n\\end{eqnarray}\nLet ${\\cal S} = \\{s' \\in [n^2] \\mid N_{s'} \\geq 1\\}$.\nBy Lemma~\\ref{lemma:sparsity_M}, for every $s' \\in {\\cal S}$ and an $n' \\times n'$ matrix of rank $k'$ in ${\\cal M}_{n,k}^{(s')}$ where $n' \\in [n]$, $k' \\in [k]$, and $\\frac{k'}{n'} \\leq \\frac{k}{n}$, we have that\n\\begin{eqnarray}\\label{eq:s'}\ns' \\geq \\frac{n'}{4} \\cdot \\frac{n'}{k'} \\geq \\frac{n'}{4} \\cdot \\frac{n}{k} = n' \\cdot \\frac{n^{\\gamma} \\cdot \\ln (n \\cdot |\\Fset|)}{4c_1}. 
\\end{eqnarray}\nNow, for every pair $(M,R) \\in {\\cal P}$, let $B_{M,R}$ be the event that the matrix $M$ represents over $\\Fset$ the induced subgraph $\\vec{G}[R]$ of $\\vec{G}$ on $R$ with respect to the natural order of the vertices in $R$.\nFor $M$ to represent $\\vec{G}[R]$ we require that for every distinct $i, j$ such that $M_{i,j} \\neq 0$, there is an edge in $\\vec{G}$ from the $i$th to the $j$th vertex of $R$. Hence, for $M \\in \\Fset^{n' \\times n'}$ of sparsity $s'$ and an $n'$-subset $R$ of $[n]$,\n\\[\\Prob{}{B_{M,R}} = p^{s'-n'} \\leq p^{s'\/2} = (1-q)^{s'\/2} \\leq \\exp(-qs'\/2),\\] where for the first inequality we have used the inequality $s' \\geq 2n'$ which follows from~\\eqref{eq:s'} for every sufficiently large $n$.\n\nWe claim that it suffices to prove that with positive probability none of the events $A_I$ and $B_{M,R}$ holds.\nIndeed, this implies that there exists an $n$-vertex digraph $\\vec{G}$ that does not satisfy any of these events.\nSince the $A_I$'s are not satisfied it immediately follows that the complement $\\overline{G}$ of the (undirected) graph $G$ associated with $\\vec{G}$ is $H$-free.\nWe further claim that ${\\mathop{\\mathrm{minrk}}}_\\Fset (G) > k$.\nTo see this, assume by contradiction that there exists a matrix $M \\in \\Fset^{n \\times n}$ of rank at most $k$ that represents $G$, and thus, in particular, represents $\\vec{G}$.\nBy Lemma~\\ref{lemma:M->M'}, such an $M$ has a principal $n' \\times n'$ sub-matrix $M' \\in {\\cal M}_{n,k}^{(s')}$ for some $n'$ and $s'$.\nHence, for some $n'$-subset $R$ of $[n]$, the matrix $M'$ represents $\\vec{G}[R]$ with respect to the natural order of the vertices in $R$, in contradiction to the fact that the event $B_{M',R}$ with $(M',R) \\in {\\cal P}$ does not hold.\n\nTo prove that with positive probability none of the events $A_I$ and $B_{M,R}$ holds, we apply the Lov\\'{a}sz Local Lemma (Lemma~\\ref{lemma:lll}).\nTo this end, construct a (symmetric) dependency digraph 
$D=(V,E)$ whose vertices represent all the events $A_I$ and $B_{M,R}$, and whose edges are defined as follows.\n\\begin{itemize}\n \\item An $A_I$-vertex and an $A_{I'}$-vertex are joined by edges (in both directions) if $|I \\cap I'| \\geq 2$. Notice that the events $A_I$ and $A_{I'}$ are independent when $|I \\cap I'| < 2$.\n \\item An $A_I$-vertex and a $B_{M,R}$-vertex are joined by edges if there are distinct $i,j \\in I \\cap R$ for which the entry of $M$ that corresponds to the edge $(i,j)$ is nonzero. Notice that the events $A_I$ and $B_{M,R}$ are independent when such $i$ and $j$ do not exist.\n \\item Every two distinct $B_{M,R}$-vertices are joined by edges.\n\\end{itemize}\nClearly, each event is mutually independent of all other events besides those adjacent to it in $D$, and thus $D$ is a dependency digraph for our events.\nObserve that every $A_I$-vertex is adjacent to at most ${h \\choose 2} \\cdot {n \\choose {h-2}} \\leq {h \\choose 2} \\cdot n^{h-2}$ $A_{I'}$-vertices.\nAdditionally, every $B_{M,R}$-vertex, where $M$ is an $n' \\times n'$ matrix of sparsity $s'$, is adjacent to at most $(s'-n') \\cdot {n \\choose {h-2}} < s' \\cdot n^{h-2}$ $A_{I}$-vertices.\nFinally, every vertex of $D$ is adjacent to at most $N_{s'}$ $B_{M,R}$-vertices with $M \\in {\\cal M}_{n,k}^{(s')}$ (that is, $s(M) = s'$).\n\nTo apply Lemma~\\ref{lemma:lll} we assign a number in $[0,1)$ to each vertex of $D$.\nDefine\n\\[ q = c_2 \\cdot n^{-\\gamma},~~~~x = c_3 \\cdot n^{-\\gamma \\cdot f},~~~~\\mbox{and}~~~~x_{s'} = \\exp(-c_4 \\cdot s' \\cdot n^{-\\gamma})~~~~\\mbox{for every $s' \\in {\\cal S}$},\\]\nwhere $c_2,c_3,c_4>0$ are constants, depending only on $H$, to be determined.\nWe assign the number $x$ to every $A_I$-vertex, and the number $x_{s'}$ to every $B_{M,R}$-vertex with $s(M)=s'$.\nWe present now the conditions of Lemma~\\ref{lemma:lll}.\nFor every $A_I$-vertex, recalling that $\\Prob{}{A_I} \\leq h! 
\\cdot (2q)^f$, we require\n\\begin{eqnarray}\\label{eq:lll_A}\nh! \\cdot (2q)^f \\leq x \\cdot (1-x)^{{h \\choose 2} \\cdot n^{h-2}} \\cdot \\prod_{s' \\in {\\cal S}}{(1-x_{s'})^{N_{s'}}}.\n\\end{eqnarray}\nSimilarly, for every $B_{M,R}$-vertex with $s(M)=s'$, recalling that $\\Prob{}{B_{M,R}} \\leq \\exp(-qs'\/2)$, we require\n\\begin{eqnarray}\\label{eq:lll_B}\n\\exp(-qs'\/2) \\leq x_{s'} \\cdot (1-x)^{s' \\cdot n^{h-2}} \\cdot \\prod_{s' \\in {\\cal S}}{(1-x_{s'})^{N_{s'}}}.\n\\end{eqnarray}\n\nTo complete the proof, it suffices to show that the constants $c_1,c_2,c_3,c_4>0$ can be chosen in a way that satisfies the inequalities~\\eqref{eq:lll_A} and~\\eqref{eq:lll_B}. Consider the following three constraints:\n\\begin{enumerate}\n \\item\\label{itm:1} $c_2 > 2 \\cdot (2c_3+c_4)$,\n \\item\\label{itm:2} $c_3 \\geq h! \\cdot (2c_2)^f \\cdot \\exp(3)$, and\n \\item\\label{itm:3} $c_4 \\geq 32 \\cdot c_1$.\n\\end{enumerate}\nIt is easy to see that it is possible to choose the constants under the above constraints. Indeed, by $f \\geq 3$, for a sufficiently small choice of $c_2>0$ one can take $c_3$ with, say, an equality in Item~\\ref{itm:2} so that some $c_4>0$ satisfies Item~\\ref{itm:1}. Then, $c_1$ can be chosen as a positive constant satisfying Item~\\ref{itm:3}.\nWe show now that such a choice satisfies~\\eqref{eq:lll_A} and~\\eqref{eq:lll_B} for every sufficiently large $n$. 
Note that we use below several times the inequality $1-\\alpha \\geq \\exp(-2\\alpha)$, which holds for any $\\alpha \\in [0,1\/2]$.\n\nFirst, use~\\eqref{eq:N_s'} and the condition $c_4 \\geq 32 \\cdot c_1$ to obtain that\n\\[ \\sum_{s' \\in {\\cal S}}{x_{s'} \\cdot N_{s'}} \\leq \\sum_{s' \\in {\\cal S}}{\\exp((24c_1-c_4) \\cdot s' \\cdot n^{-\\gamma})} \\leq \\sum_{s' \\in {\\cal S}}{\\exp(-8c_1\\cdot s' \\cdot n^{-\\gamma})} \\leq\n\\sum_{s' \\in {\\cal S}}{\\exp(-2\\ln n)} \\leq 1, \\]\nwhere the third inequality follows by $s' \\geq \\frac{ n^{\\gamma} \\cdot \\ln (n \\cdot |\\Fset|)}{4c_1}$ which we get from~\\eqref{eq:s'}, and the fourth by $| {\\cal S}| \\leq n^2$.\nConsidering the term $\\prod_{s' \\in {\\cal S}}{(1-x_{s'})^{N_{s'}}}$, which appears in both~\\eqref{eq:lll_A} and~\\eqref{eq:lll_B},\nwe derive that\n\\[ \\prod_{s' \\in {\\cal S}}{(1-x_{s'})^{N_{s'}}} \\geq \\prod_{s' \\in {\\cal S}}{\\exp(-2x_{s'} \\cdot N_{s'})} = \\exp \\Big (-2 \\cdot \\sum_{s' \\in {\\cal S}}{x_{s'} \\cdot N_{s'}} \\Big ) \\geq \\exp(-2).\\]\nFor inequality~\\eqref{eq:lll_A}, observe that\n\\begin{eqnarray*}\nx \\cdot (1-x)^{{h \\choose 2} \\cdot n^{h-2}} \\cdot \\prod_{s' \\in {\\cal S}}{(1-x_{s'})^{N_{s'}}}\n& \\geq & x \\cdot \\exp \\Big ( -2x \\cdot {h \\choose 2} \\cdot n^{h-2} \\Big ) \\cdot \\exp(-2) \\\\\n& = & c_3 \\cdot n^{-\\gamma \\cdot f} \\cdot \\exp \\Big (-2c_3 \\cdot n^{-\\gamma \\cdot f} \\cdot {h \\choose 2} \\cdot n^{h-2} -2 \\Big) \\\\\n& \\geq & h! \\cdot (2c_2)^f \\cdot n^{-\\gamma \\cdot f} \\cdot \\exp \\Big(1-2c_3 \\cdot {h \\choose 2} \\cdot n^{-\\gamma} \\Big) \\\\\n& \\geq & h! \\cdot (2q)^f,\n\\end{eqnarray*}\nwhere for the second inequality we use $c_3 \\geq h! 
\\cdot (2c_2)^f \\cdot \\exp(3)$ and $\\gamma = \\frac{h-2}{f-1}$,\nand for the third we use the assumption that $n$ is sufficiently large.\nFor inequality~\\eqref{eq:lll_B}, observe that\n\\begin{eqnarray*}\nx_{s'} \\cdot (1-x)^{s' \\cdot n^{h-2}} \\cdot \\prod_{s' \\in {\\cal S}}{(1-x_{s'})^{N_{s'}}}\n& \\geq & x_{s'} \\cdot \\exp (-2x \\cdot s' \\cdot n^{h-2} ) \\cdot \\exp (-2) \\\\\n& = & \\exp(-c_4 \\cdot s' \\cdot n^{-\\gamma}) \\cdot \\exp (-2 c_3 \\cdot n^{-\\gamma \\cdot f} \\cdot s' \\cdot n^{h-2} ) \\cdot \\exp (-2) \\\\\n& = & \\exp ( -(2c_3+c_4) \\cdot s' \\cdot n^{-\\gamma} -2) \\\\\n& \\geq & \\exp (-(c_2\/2) \\cdot s' \\cdot n^{-\\gamma}) \\\\\n& = & \\exp (-qs'\/2),\n\\end{eqnarray*}\nwhere for the second equality we again use the definition of $\\gamma$, and for the second inequality we use the condition $c_2 > 2 \\cdot (2c_3+c_4)$, the fact that $s' \\cdot n^{-\\gamma} = \\omega(1)$ by~\\eqref{eq:s'}, and the assumption that $n$ is sufficiently large. This completes the proof.\n\\end{proof}\n\nWe can now derive Theorem~\\ref{thm:IntroComp}. Recall that $\\gamma_0(H) = \\min_{H'}{\\gamma(H')}$, where the minimum is over all subgraphs $H'$ of $H$ with at least $3$ edges.\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:IntroComp}]\nFor a graph $H$ with $h \\geq 3$ vertices and $f \\geq 3$ edges, let $H'$ be a subgraph of $H$ with at least $3$ edges such that $\\gamma_0(H) = \\gamma(H')$.\nBy Theorem~\\ref{thm:Comp} there exists $c>0$ such that\n\\[g(n,H',\\Fset) \\geq c \\cdot \\frac{n^{1-\\gamma_0(H)}}{\\log (n \\cdot |\\Fset|)}\\]\nfor every integer $n$ and a finite field $\\Fset$. Since every $H'$-free graph is also $H$-free, it follows that $g(n,H,\\Fset) \\geq g(n,H',\\Fset)$ and we are done.\n\\end{proof}\n\n\\subsection{The Minrank of Graphs with Small Independence Number}\n\nFor an integer $t \\geq 3$, $g(n,K_t,\\Fset)$ is the maximum possible minrank over $\\Fset$ of an $n$-vertex graph with independence number smaller than $t$. 
For this case we derive the following corollary.\n\\begin{corollary}\\label{cor:K_t}\nFor every $t \\geq 3$ there exists $c=c(t)>0$ such that for every integer $n$ and a finite field $\\Fset$,\n\\[g(n,K_t,\\Fset) \\geq c \\cdot \\frac{n^{1-\\frac{2}{t+1}}}{\\log (n \\cdot |\\Fset|)} .\\]\n\\end{corollary}\n\n\\begin{proof}\nApply Theorem~\\ref{thm:IntroComp} to the graph $H = K_t$, and notice that $\\gamma_0(K_t) = \\gamma(K_t) = \\frac{t-2}{{t \\choose 2}-1} = \\frac{2}{t+1}$.\n\\end{proof}\n\nFor $H=K_3$, we observe that our lower bound on $g(n,K_3,\\Fset)$ is nearly tight.\n\\begin{proposition}\\label{prop:K_3}\nThere exist constants $c_1,c_2>0$ such that for every integer $n$ and a finite field $\\Fset$,\n\\[ c_1 \\cdot \\frac{\\sqrt{n}}{\\log (n \\cdot |\\Fset|)} \\leq g(n,K_3,\\Fset) \\leq c_2 \\cdot \\sqrt{\\frac{n}{\\log n}}.\\]\n\\end{proposition}\n\n\\begin{proof}\nFor the lower bound apply Corollary~\\ref{cor:K_t} with $t=3$.\nTo prove the upper bound we need a result of Ajtai et al.~\\cite{AjtaiKS80} which says that every triangle-free $n$-vertex graph has an independent set of size $\\Omega(\\sqrt{n \\cdot \\log n})$. 
By repeatedly omitting such independent sets it follows that the chromatic number of such a graph is $O(\\sqrt{n \/ \\log n})$.\nNow, let $G$ be an $n$-vertex graph whose complement $\\overline{G}$ is triangle-free.\nWe get that ${\\mathop{\\mathrm{minrk}}}_{\\Fset}(G) \\leq \\chi(\\overline{G}) \\leq O(\\sqrt{n\/\\log n})$, as required.\n\\end{proof}\n\n\\section{Non-bipartite Graphs}\\label{sec:non-bip}\n\nIn this section we show that for every non-bipartite graph $H$ there are $H$-free graphs with low minrank over $\\mathbb{R}$, confirming Theorem~\\ref{thm:IntroNonBi}.\nWe start with the case where $H$ is an odd cycle, and since every non-bipartite graph contains an odd cycle the general result follows easily.\nThe proof is by an explicit construction from the following family of graphs.\n\n\\begin{definition}\\label{def:Kneser}\nFor integers $m \\leq s \\leq d$, the graph $\\Kneser{d}{s}{m}$ is defined as follows: the vertices are all the $s$-subsets of $[d]$, and two distinct sets $A,B$ are adjacent if $|A \\cap B| < m$.\n\\end{definition}\n\nThe minrank of such graphs over finite fields was recently studied in~\\cite{Haviv18} using tools from~\\cite{AlonBS91}.\nThe proof technique of~\\cite{Haviv18} can be used for the real field as well, as shown below.\n\n\\begin{proposition}\\label{prop:minrk_Kneser}\nFor all integers $m \\leq s \\leq d$,\n\\[{\\mathop{\\mathrm{minrk}}}_{\\mathbb{R}}(\\Kneser{d}{s}{m}) \\leq \\sum_{i=0}^{s-m}{d \\choose i}.\\]\n\\end{proposition}\n\n\\begin{proof}\nLet $f: \\{0,1\\}^d \\times \\{0,1\\}^d \\rightarrow \\mathbb{R}$ be the function defined by\n\\[ f(x,y) = \\prod_{j=m}^{s-1}{ \\Big ( \\sum_{i=1}^{d}{x_i y_i} -j\\Big )}\\]\nfor every $x,y \\in \\{0,1\\}^d$.\nExpanding $f$ as a linear combination of monomials, the relation $z^2 = z$ for $z \\in \\{0,1\\}$ implies that one can reduce to $1$ the exponent of each variable occurring in a monomial. 
It follows that $f$ can be represented as a multilinear polynomial in the $2d$ variables of $x$ and $y$. By combining terms involving the same monomial in the variables of $x$, one can write $f$ as\n\\[ f(x,y) = \\sum_{i=1}^{R}{g_i(x) h_i(y)} \\]\nfor an integer $R$ and functions $g_i, h_i : \\{0,1\\}^d \\rightarrow \\mathbb{R}$, $i \\in [R]$, such that the $g_i$'s are distinct multilinear monomials of total degree at most $s-m$ in $d$ variables. It follows that $R \\leq \\sum_{i=0}^{s-m}{d \\choose i}$.\n\nNow, let $M_1$ and $M_2$ be the $2^d \\times R$ matrices whose rows are indexed by $\\{0,1\\}^d$ and whose columns are indexed by $[R]$, defined by $(M_1)_{x,i} = g_i(x)$ and $(M_2)_{x,i} = h_i(x)$. Then, the matrix $M = M_1 \\cdot M_2^T$ has rank at most $R$ and for every $x,y \\in\\{0,1\\}^d$ it holds that $M_{x,y} = f(x,y)$.\n\nFinally, let $V$ be the vertex set of $\\Kneser{d}{s}{m}$, that is, the collection of all $s$-subsets of $[d]$, and identify every vertex $A \\in V$ with an indicator vector $c_A \\in \\{0,1\\}^d$ in the natural way. We claim that the matrix $M$ restricted to $V \\times V$ represents the graph $\\Kneser{d}{s}{m}$. Indeed, for every $A,B \\in V$ we have\n\\[M_{c_A, c_B} = f(c_A,c_B) = \\prod_{j=m}^{s-1}{ \\Big ({|A \\cap B| -j}\\Big )}.\\]\nHence, for every $A \\in V$ we have $|A|=s$ and thus $M_{c_A,c_A} \\neq 0$, whereas for every distinct non-adjacent $A,B \\in V$ we have $m \\leq |A \\cap B|\\leq s-1$ and thus $M_{c_A,c_B} = 0$. 
Since the restriction of $M$ to $V \\times V$ has rank at most $R$ it follows that ${\\mathop{\\mathrm{minrk}}}_\\mathbb{R}(\\Kneser{d}{s}{m}) \\leq R$, and we are done.\n\\end{proof}\n\nWe turn to identify graphs $\\Kneser{d}{s}{m}$ with no short odd cycles.\nFor this purpose, take an even integer $d$, $s = \\frac{d}{2}$, and $m = \\epsilon \\cdot d$ for a small constant $\\epsilon>0$.\nEvery path in these graphs is a sequence of $\\frac{d}{2}$-subsets of $[d]$ such that the intersection size of every two consecutive sets is small. This implies, for a sufficiently small $\\epsilon$, that the sets in the even positions of the path are almost disjoint from the first set, whereas the sets in the odd positions of the path share with it many elements, hence such a graph contains no short odd cycle.\nThis is shown formally in the following lemma.\n\n\\begin{lemma}\\label{lemma:cycle_K}\nLet $\\ell \\geq 3$ be an odd integer.\nFor every even integer $d$ and an integer $m \\leq \\frac{d}{2\\ell}$, the graph $\\Kneser{d}{\\frac{d}{2}}{m}$ contains no odd cycle of length at most $\\ell$.\n\\end{lemma}\n\n\\begin{proof}\nFix an odd integer $\\ell \\geq 3$, an even integer $d$, and an integer $m \\leq \\frac{d}{2\\ell}$.\nWe prove that for every odd integer $\\ell'$, such that $3 \\leq \\ell' \\leq \\ell$, the graph $\\Kneser{d}{\\frac{d}{2}}{m}$ contains no cycle of length $\\ell'$.\nFor such an $\\ell'$, let $A_1,A_2,\\ldots,A_{\\ell'}$ be a sequence of $\\ell'$ vertices in the graph, i.e., $\\frac{d}{2}$-subsets of $[d]$. Assuming that for every $i \\leq \\ell'-1$ the vertices $A_i$ and $A_{i+1}$ are adjacent in the graph, that is, $|A_i \\cap A_{i+1}| < m$, our goal is to show that $A_1$ and $A_{\\ell'}$ are not.\n\nTo this end, we argue that for every $i$, such that $0 \\leq i \\leq \\frac{\\ell'-1}{2}$, we have\n\\begin{eqnarray}\\label{eq:A_i}\n|A_1 \\cap A_{2i+1}| \\geq \\frac{d}{2}-2i \\cdot m.\n\\end{eqnarray}\nWe prove this claim by induction on $i$. 
The case $i=0$ follows immediately from $|A_1|=\\frac{d}{2}$.\nAssume that~\\eqref{eq:A_i} holds for $i-1$, that is, $|A_1 \\cap A_{2i-1}| \\geq \\frac{d}{2}-(2i-2) \\cdot m$.\nObserve that this implies that\n\\begin{eqnarray*}\n|A_1 \\cap A_{2i}| &=& | A_1 \\cap A_{2i} \\cap A_{2i-1} | + | A_1 \\cap A_{2i} \\cap \\overline{A_{2i-1}} | \\\\\n& \\leq & |A_{2i-1} \\cap A_{2i}| + | A_1 \\cap \\overline{A_{2i-1}} | \\\\\n& \\leq & m + |A_1|- | A_1 \\cap A_{2i-1} | \\\\\n& \\leq & m + \\frac{d}{2} - \\Big ( \\frac{d}{2}-(2i-2) \\cdot m \\Big ) = (2i-1) \\cdot m,\n\\end{eqnarray*}\nwhere in the second inequality we have used $|A_{2i-1} \\cap A_{2i}| < m$.\nWe proceed by proving~\\eqref{eq:A_i} for $i$. Observe that\n\\begin{eqnarray*}\n|A_1 \\cap A_{2i+1}| &=& |A_{2i+1}| - | \\overline{A_1} \\cap A_{2i+1} | \\\\\n&=& |A_{2i+1}| - | \\overline{A_1} \\cap A_{2i+1} \\cap A_{2i} | - | \\overline{A_1} \\cap A_{2i+1} \\cap \\overline{A_{2i}} | \\\\\n& \\geq & \\frac{d}{2} - m - | \\overline{A_1} \\cap \\overline{A_{2i}} |,\n\\end{eqnarray*}\nwhere we have used $|A_{2i} \\cap A_{2i+1}| < m$.\nNotice that\n\\[ |\\overline{A_1} \\cap \\overline{A_{2i}}| = d - |A_1 \\cup A_{2i}| = d-(|A_1|+|A_{2i}|-|A_1 \\cap A_{2i}|) = |A_1 \\cap A_{2i}|. \\]\nIt follows that\n\\[|A_1 \\cap A_{2i+1}| \\geq \\frac{d}{2}-m-|A_1 \\cap A_{2i}| \\geq \\frac{d}{2}-m-(2i-1)\\cdot m = \\frac{d}{2}-2i\\cdot m,\\]\ncompleting the proof of~\\eqref{eq:A_i}.\n\nFinally, applying~\\eqref{eq:A_i} to $i = \\frac{\\ell'-1}{2}$, using the assumption $m \\leq \\frac{d}{2\\ell}$, we get that\n\\[|A_1 \\cap A_{\\ell'}| \\geq \\frac{d}{2}-(\\ell'-1) \\cdot m = \\frac{d}{2}-\\ell' \\cdot m+m \\geq \\frac{d}{2}-\\ell \\cdot m +m \\geq m,\\]\nhence $A_1$ and $A_{\\ell'}$ are not adjacent in the graph $\\Kneser{d}{\\frac{d}{2}}{m}$. 
It thus follows that the graph contains no cycle of length $\\ell'$, as desired.\n\\end{proof}\n\nEquipped with Proposition~\\ref{prop:minrk_Kneser} and Lemma~\\ref{lemma:cycle_K}, we obtain the following.\n\n\\begin{theorem}\\label{thm:Cycles}\nFor every odd integer $\\ell \\geq 3$ there exists $\\delta = \\delta(\\ell) >0$ such that for every sufficiently large integer $n$, there exists an $n$-vertex graph $G$ with no odd cycle of length at most $\\ell$ such that\n\\[{\\mathop{\\mathrm{minrk}}}_{\\mathbb{R}}(G) \\leq n^{1-\\delta}.\\]\n\\end{theorem}\n\n\\begin{proof}\nFix an odd integer $\\ell \\geq 3$.\nFor an integer $d$ divisible by $2 \\ell$, consider the graph $G = \\Kneser{d}{\\frac{d}{2}}{m}$ where $m = \\frac{d}{2 \\ell}$.\nBy Lemma~\\ref{lemma:cycle_K}, $G$ contains no odd cycle of length at most $\\ell$.\nAs for the minrank, Proposition~\\ref{prop:minrk_Kneser} implies that\n\\[{\\mathop{\\mathrm{minrk}}}_{\\mathbb{R}}(G) \\leq \\sum_{i=0}^{d\/2-m}{d \\choose i} \\leq 2^{H(\\frac{1}{2}-\\frac{m}{d}) \\cdot d} = 2^{H(\\frac{1}{2}-\\frac{1}{2\\ell}) \\cdot d},\\]\nwhere $H$ stands for the binary entropy function.\nSince $G$ has $|V| = {d \\choose {d\/2}} = 2^{(1-o(1)) \\cdot d}$ vertices, for any $\\delta>0$ such that $H(\\frac{1}{2}-\\frac{1}{2\\ell}) < 1-\\delta$ we have ${\\mathop{\\mathrm{minrk}}}_{\\mathbb{R}}(G) \\leq |V|^{1-\\delta}$ for every sufficiently large integer $d$.\nThe proof is completed by considering, for every sufficiently large integer $n$, some $n$-vertex subgraph of the graph defined above, where $d$ is the smallest integer divisible by $2\\ell$ such that $n \\leq {d \\choose {d\/2}}$.\n\\end{proof}\n\n\nNow, Theorem~\\ref{thm:IntroNonBi} follows easily from Theorem~\\ref{thm:Cycles}.\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:IntroNonBi}]\nLet $H$ be a non-bipartite graph. 
Then, for some odd integer $\\ell \\geq 3$, the cycle $C_\\ell$ is a subgraph of $H$.\nBy Theorem~\\ref{thm:Cycles}, there exists $\\delta > 0$ such that for every sufficiently large integer $n$, there exists an $n$-vertex $C_\\ell$-free graph $G$ satisfying ${\\mathop{\\mathrm{minrk}}}_{\\mathbb{R}}(G) \\leq n^{1-\\delta}$.\nSince every $C_\\ell$-free graph is also $H$-free, the result follows.\n\\end{proof}\n\n\\begin{remark}\nAs mentioned in the introduction, Theorem~\\ref{thm:IntroNonBi} implies a lower bound on $g(n,H,\\mathbb{R})$ for every non-bipartite graph $H$ (see Corollary~\\ref{cor:IntroNonBi}).\nWe note that upper bounds on certain Ramsey numbers can be used to derive upper bounds on $g(n,H,\\Fset)$ for a general field $\\Fset$.\nFor example, it was shown in~\\cite{ErdosFRS78}\nthat for every $\\ell \\geq 3$, every $n$-vertex $C_\\ell$-free graph has an independent set of size $\\Omega( n^{1-1\/k} )$ for $k = \\lceil \\frac{\\ell}{2} \\rceil$ (see~\\cite{CaroLRZ00,Sudakov02} for slight improvements).\nBy repeatedly omitting such independent sets it follows that the chromatic number of such a graph is $O(n^{1\/k})$.\nThis implies that every $n$-vertex graph $G$ whose complement is $C_\\ell$-free satisfies ${\\mathop{\\mathrm{minrk}}}_{\\Fset}(G) \\leq \\chi(\\overline{G}) \\leq O(n^{1\/k})$, hence $g(n,C_\\ell,\\Fset) \\leq O(n^{1\/k})$.\n\\end{remark}\n\n\n\\section*{Acknowledgements}\nWe are grateful to Alexander Golovnev and Pavel Pudl\\'ak for useful discussions and to the anonymous referees for their valuable suggestions.\n\n\n\\bibliographystyle{abbrv}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nTheoretical calculations have predicted that dark matter density distribution would be altered by a massive black hole \\citep{Gondolo,Merritt2,Gnedin,Merritt,Sadeghian,Nampalliwar}. The conservation of angular momentum and energy would naturally force dark matter to form a dense spike (i.e. 
a cusp-like density profile) \\citep{Gondolo,Merritt2,Gnedin,Sadeghian}. Generally speaking, the dark matter density around a black hole would eventually follow a simple power-law form: $\\rho_{\\rm DM} \\propto r^{-\\gamma}$, where $r$ is the radial distance from the black hole and $\\gamma$ is the spike index. The value of $\\gamma$ is model-dependent and can range from $\\gamma=1.5$ to $\\gamma=2.5$ \\citep{Merritt2,Gnedin,Sadeghian,Fields,Lacroix}. Since the dark matter density profile is singular in $r$, the dark matter density near the black hole would be very high (i.e. a dense spike).\n\nBased on this theoretical prediction, if dark matter can self-annihilate to give gamma-ray photons, one can expect the annihilation gamma-ray signals to be greatly enhanced because the annihilation rate is proportional to $\\rho_{\\rm DM}^2$. A lot of attention has been paid specifically to the dark matter density spike surrounding galactic supermassive black holes \\citep{Gondolo,Gnedin,Fields,Bertone,Shapiro} and intermediate-mass black holes \\citep{Lacroix,Chan}. Various studies have examined the possible enhanced gamma-ray signals, especially near the supermassive black hole in the Milky Way galaxy \\citep{Fields,Shapiro}. However, no promising signals have been observed to verify the theoretical prediction \\citep{Fields}. Nevertheless, this does not mean that the dark matter density spike model is wrong. The negative result in gamma-ray observations could be due to the following reasons: 1. the rest mass of dark matter particles is very large, 2. the annihilation cross section is very small, or 3. 
dark matter particles do not self-annihilate.\n\nOn the other hand, observations and studies of the two closest black hole low-mass X-ray binaries (BH-LMXBs), A0620-00 and XTE J1118+480, have provided very precise measurements for many important physical parameters, including the orbital period $P$, observed radial velocity of the companion star $K$, orbital inclination $i$, black hole mass $M_{\\rm BH}$, and the mass of the companion star $m$ (or the mass ratio $q=m\/M_{\\rm BH}$) \\citep{McClintock,Neilsen,Cantrell,Grunsven,Khargharia,Zurita,Cherepashchuk} (see Table 1 for the measured values). The companion star is orbiting the black hole in a nearly circular orbit for each binary. In particular, observations have revealed abnormally fast orbital decays in the two BH-LMXBs: $\\dot{P}=-0.60 \\pm 0.08$ ms yr$^{-1}$ for A0620-00 and $\\dot{P}=-1.90 \\pm 0.57$ ms yr$^{-1}$ for XTE J1118+480 \\citep{Gonzalez}. These decays are two orders of magnitude larger than the one expected with gravitational wave radiation \\citep{Chen,Chen2}. Standard theories only predict $\\dot{P} \\sim -0.02$ ms yr$^{-1}$ \\citep{Gonzalez3}. Two major proposals have been suggested recently to account for the fast orbital decay. The first one is related to the magnetic braking of the companion star. If the surface magnetic field of the companion star is very strong (e.g. $\\ge 10^4$ G), the coupling between the magnetic field and the winds from the companion star driven by X-ray irradiation from the black hole would decrease the orbital period through tidal torques \\citep{Chen,Justham}. However, this model requires a significant mass loss from the binary system, which has not been observed \\citep{Gonzalez}. The second proposal suggests that the tidal torque between the circumbinary disk and the binary can efficiently extract the orbital angular momentum from the binary to cause the orbital decay \\citep{Chen}. 
Nevertheless, simulations show that the predicted mass transfer rate and the circumbinary disk mass are much greater than the values inferred from observations \\citep{Chen}. Although a few recent studies suggest that the resonant interaction between the binary and a surrounding circumbinary disk could produce the observed orbital period decays \\citep{Chen2,Xu}, the calculated initial mass and effective temperature of the companion stars do not quite match the observations \\citep{Chen2}. Therefore, the abnormally fast orbital decays in the two BH-LMXBs remain a mystery.\n\nBesides the annihilation rate, the dynamical friction due to a dark matter density spike would also be very large. If a star is moving inside a collisionless dark matter background, the star would exert a gravitational force that pulls the dark matter particles towards it. A concentration of dark matter particles would then form behind the star and exert a collective gravitational force on it. This collective gravitational force would slow down the star, and the resulting effect is called dynamical friction. The idea of dynamical friction was proposed by Chandrasekhar more than 70 years ago \\citep{Chandrasekhar}. However, very surprisingly, the dynamical friction effect in BH-LMXBs has not been seriously examined in previous studies. Most of the related studies focus on compact binary systems \\citep{Antonini,Eda,Pani,Yue,Dai,Li,Becker,Speeney,Kavanagh}. Also, no previous study has realized the possible observable consequence of a dark matter density spike surrounding a stellar-mass black hole. In this letter, we discuss the observed fast orbital decays in the two closest BH-LMXBs in terms of the dynamical friction of a dark matter density spike.\n\n\\section{The dynamical friction model}\nConsider a typical BH-LMXB system. 
The low-mass companion star with mass $m<1M_{\\odot}$ is orbiting a central black hole with mass $M_{\\rm BH}$ much greater than the stellar mass. The central black hole is almost stationary at the center of the system. If a dark matter density spike is surrounding the central black hole, the companion star would experience the dynamical friction exerted by dark matter. The energy loss due to dynamical friction would decrease the orbital period $P$ of the companion star.\n\n The energy loss due to dynamical friction is given by \\citep{Chandrasekhar,Yue}:\n\\begin{equation}\n\\dot{E}=- \\frac{4\\pi G^2\\mu^2 \\rho_{\\rm DM} \\xi(\\sigma) \\ln \\Lambda}{v},\n\\end{equation}\nwhere $\\mu$ is the reduced mass of the BH-LMXB, $\\ln \\Lambda \\approx \\ln (\\sqrt{M_{\\rm BH}\/m})$ is the Coulomb Logarithm \\citep{Kavanagh}, $v$ is the orbital velocity, and $\\xi(\\sigma)$ is a numerical factor which depends on the distribution function and the velocity dispersion $\\sigma$ of dark matter. If we assume a Maxwell's distribution for dark matter and take $\\sigma=200$ km\/s, we will have $\\xi(\\sigma) \\sim 0.9$. However, as the information about dark matter is uncertain, we simply assume $\\xi(\\sigma)=1$. The orbital velocity can be determined by the observed radial velocity $K$ and the orbital inclination $i$: $v=K\/\\sin i$.\n\nUsing the Keplerian relation $P^2=4\\pi^2a^3\/G(M_{\\rm BH}+m)$ with $a$ being the radius of the orbital motion, we can write\n\\begin{equation}\n\\frac{\\dot{P}}{P}=\\frac{3 \\dot{a}}{2a}=-\\frac{3 \\dot{E}}{2E},\n\\end{equation}\nwhere $E=-GM_{\\rm BH}m\/2a$ is the total mechanical energy. 
Therefore, the orbital decay rate can be expressed in terms of the observed parameter set \\{ $q$, $K$, $i$, $P$, $M_{\\rm BH}$ \\} by:\n\\begin{equation}\n\\dot{P}=- \\frac{12\\pi qGP \\ln \\Lambda}{(1+q)^2(K\/\\sin i)} \\left[\\frac{GM_{\\rm BH}(1+q)P^2}{4\\pi^2} \\right]^{1\/3} \\rho_{\\rm DM},\n\\end{equation}\nwhere $q=m\/M_{\\rm BH}$ is the mass ratio.\n\nFollowing the dark matter density spike theory, dark matter would re-distribute to form a density spike around the black hole in the BH-LMXB within the spike radius $r_{\\rm sp}$. We follow the standard assumption $r_{\\rm sp}=0.2r_{\\rm in}$ used in many other studies \\citep{Fields,Eda}, where $r_{\\rm in}$ is the radius of the black hole's sphere of influence. Outside $r_{\\rm sp}$, the dark matter density would follow the local dark matter density at the binaries' respective positions in the Milky Way. The dark matter density around the black hole with mass $M_{\\rm BH}$ can be modeled by the following profile \\citep{Lacroix}:\n\\begin{equation}\n\\rho_{\\rm DM}=\\left\\{\n\\begin{array}{ll}\n0 & {\\rm for }\\,\\,\\, r\\le 2R_s \\\\\n\\rho_0 \\left(\\frac{r}{r_{\\rm sp}} \\right)^{-\\gamma} & {\\rm for }\\,\\,\\, 2R_s < r \\le r_{\\rm sp} \\\\\n\\rho_0 & {\\rm for }\\,\\,\\, r > r_{\\rm sp} \\\\\n\\end{array}\n\\right.\n\\end{equation}\nwhere $R_s=2GM_{\\rm BH}\/c^2$, and $\\rho_0$ is the local dark matter density. When the distance from the black hole is larger than the spike radius $r_{\\rm sp}$, we assume that the dark matter density would follow back to the local dark matter density. 
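As a quick numerical sanity check of the orbital decay formula above, the following sketch evaluates it for the illustrative parameter values quoted later in the text (these are round "typical" numbers, not fitted values for either binary):

```python
import math

# Physical constants (SI units)
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30     # solar mass [kg]
YEAR = 3.156e7       # one year [s]

def orbital_decay_rate(q, K, i_deg, P_day, M_BH, rho_DM):
    """Orbital decay rate [ms/yr] from dark matter dynamical friction.

    q      : mass ratio m/M_BH
    K      : observed radial velocity of the companion [m/s]
    i_deg  : orbital inclination [degrees]
    P_day  : orbital period [days]
    M_BH   : black hole mass [kg]
    rho_DM : dark matter density at the orbit [kg/m^3]
    """
    P = P_day * 86400.0
    v = K / math.sin(math.radians(i_deg))            # orbital velocity K/sin(i)
    ln_Lambda = 0.5 * math.log(1.0 / q)              # ln sqrt(M_BH/m)
    # Orbital radius from the Keplerian relation
    a = (G * M_BH * (1 + q) * P**2 / (4 * math.pi**2))**(1.0 / 3.0)
    Pdot = -12 * math.pi * q * G * P * ln_Lambda * a * rho_DM / ((1 + q)**2 * v)
    return Pdot * 1000.0 * YEAR                      # dimensionless -> ms/yr

# Typical BH-LMXB values; rho_DM ~ 1e-13 g cm^-3 = 1e-10 kg m^-3
Pdot = orbital_decay_rate(q=0.05, K=500e3, i_deg=45.0, P_day=1.0,
                          M_BH=5 * M_SUN, rho_DM=1e-10)
print(f"P_dot ~ {Pdot:.2f} ms/yr")   # of order -1 ms/yr, as stated in the text
```

The decay rate is linear in $\rho_{\rm DM}$, so spike densities of order $10^{-13}$--$10^{-11}$ g cm$^{-3}$ naturally span the observed ms yr$^{-1}$ range.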
By taking the reference value at the solar position $\\rho_{\\odot}=0.33 \\pm 0.03$ GeV cm$^{-3}$ \\citep{Ablimit} and following the Navarro-Frenk-White dark matter density profile \\citep{Navarro}, the local dark matter densities of A0620-00 and XTE J1118+480 can be determined by their respective positions \\citep{Gonzalez2}: $\\rho_0=0.29 \\pm 0.03$ GeV cm$^{-3}$ for A0620-00 and $\\rho_0=0.34 \\pm 0.03$ GeV cm$^{-3}$ for XTE J1118+480.\n\nThe radius of influence can be determined by \\citep{Merritt,Merritt2}:\n\\begin{equation}\nM_{\\rm DM}(r\\le r_{\\rm in})=\\int_0^{r_{\\rm in}}4 \\pi r^2\\rho_{\\rm DM}dr=2M_{\\rm BH}.\n\\end{equation}\nTherefore, the spike radius $r_{\\rm sp}$ is also a function of $M_{\\rm BH}$. Note that the spike density profile assumed here is not an ad hoc parametrization, but follows from theoretical calculations \\citep{Gondolo,Sadeghian}. It is mainly determined by the black hole mass.\n\nThe spike index $\\gamma$ is the only free parameter in this analysis. For a spike of collisionless dark matter that forms about an adiabatically growing black hole, we have $\\gamma=2.25-2.5$ \\citep{Gondolo,Fields}. However, if gravitational scattering of stars is important, the stellar heating effect would drive the value of $\\gamma$ down to a minimum value $\\gamma=1.5$ \\citep{Merritt2,Gnedin}. Such a change in the spike index depends on the heating time scale, which is given by \\citep{Merritt2}:\n\\begin{eqnarray}\nt_{\\rm heat}&=&\\frac{\\sqrt{3\\pi} \\Gamma(0.5)M_{\\rm BH}}{18m \\ln \\Lambda} \\left(\\frac{GM_{\\rm BH}}{r_{\\rm in}^3} \\right)^{-1\/2}=1.2 \\times 10^{15}~{\\rm s} \\nonumber\\\\\n&&\\times \\left(\\frac{M_{\\rm BH}}{5M_{\\odot}} \\right)^{1\/2}\\left(\\frac{r_{\\rm in}}{5~\\rm pc} \\right)^{3\/2} \\left(\\frac{m}{M_{\\odot}} \\right)^{-1} \\left(\\frac{\\ln \\Lambda}{3} \\right)^{-1},\n\\end{eqnarray}\nHere, a constant stellar density and an initial dark matter spike index $\\gamma=2.5$ are assumed \\citep{Merritt2}. 
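The quoted prefactor of the heating time scale can be checked numerically; the sketch below assumes $\Gamma(0.5)=\sqrt{\pi}$ and the reference values in the scaling relation ($M_{\rm BH}=5M_{\odot}$, $r_{\rm in}=5$ pc, $m=M_{\odot}$, $\ln\Lambda=3$):

```python
import math

G = 6.674e-11          # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30       # solar mass [kg]
PC = 3.0857e16         # parsec [m]

def t_heat(M_BH, r_in, m_star, ln_Lambda):
    """Stellar-heating time scale [s]; Gamma(1/2) = sqrt(pi)."""
    prefactor = (math.sqrt(3 * math.pi) * math.sqrt(math.pi) * M_BH
                 / (18 * m_star * ln_Lambda))
    return prefactor * (G * M_BH / r_in**3) ** -0.5

t = t_heat(5 * M_SUN, 5 * PC, M_SUN, 3.0)
print(f"t_heat ~ {t:.2e} s")   # ~ 1.2e15 s, matching the quoted prefactor
```

Note that $t_{\rm heat}\propto M_{\rm BH}^{1/2} r_{\rm in}^{3/2} m^{-1}$, which is the scaling displayed in the text.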
Generally speaking, for a black hole age $t_{\\rm BH} \\ge t_{\\rm heat}$, the spike index would more likely approach the minimum value $\\gamma=1.5$.\n\n\\section{Results}\n\\subsection{Constraints on the spike index}\nThe analytic formula gives $\\dot{P}$ in terms of the precisely measured parameters \\{ $q$, $K$, $i$, $P$, $M_{\\rm BH}$ \\}. We find that the typical values of these parameters \\{ $0.05$, $500$ km\/s, $45^{\\circ}$, $1$ day, $5M_{\\odot}$ \\} in BH-LMXBs can give $\\dot{P} \\sim -1$ ms yr$^{-1}$ for a typical dark matter spike density $\\rho_{\\rm DM} \\sim 10^{-13}$ g cm$^{-3}$. For our two target BH-LMXBs, we use the corresponding measured parameters and the observed orbital decay rates to constrain the dark matter densities at the respective companion stellar orbits (with radius $a$): $\\rho_{\\rm DM}(a) \\approx 7.65^{+1.62}_{-1.43} \\times 10^{-13}$ g cm$^{-3}$ (A0620-00) and $\\rho_{\\rm DM}(a) \\approx 1.60^{+1.51}_{-0.73} \\times 10^{-11}$ g cm$^{-3}$ (XTE J1118+480). Following our dark matter density spike model and accounting for the uncertainties of the measured parameters, we get $\\gamma=1.71^{+0.01}_{-0.02}$ for A0620-00 and $\\gamma=1.85^{+0.04}_{-0.04}$ for XTE J1118+480 (see Fig.~1 for the general relation between $\\dot{P}$ and $\\gamma$).\n\nAs mentioned above, theoretical predictions give $1.5 \\le \\gamma \\le 2.5$ \\citep{Merritt2,Gnedin,Fields,Lacroix}. If the effects of baryons or stellar heating are important, the spike index might be close to the smallest extreme value $\\gamma=1.5$ \\citep{Merritt2,Gnedin,Fields}. In fact, this stellar heating effect is due to the dynamical friction between stars and dark matter. Therefore, in a BH-LMXB, the continuous gravitational scattering between the companion star and dark matter might provide a similar stellar heating effect that reduces the spike index to a smaller value. 
Nevertheless, the case for BH-LMXB is somewhat different from the stellar heating scenario discussed in \\citet{Merritt2,Gnedin}. There is only one companion object in a BH-LMXB while many stars are involved in the stellar heating scenario. However, recent simulations show that the dynamical friction of the companion object with a large mass ratio $q$ would increase the kinetic energy of the dark matter particles in the halo and somewhat decrease the dark matter density \\citep{Kavanagh}, which apparently reduces the spike index. Although this is not identical to the stellar heating scenario, both processes involve the dynamical friction to re-distribute the dark matter density.\n\n Using the stellar heating scenario as an analogy, we expect that the spike index might be smaller if $t_{\\rm BH} \\ge t_{\\rm heat}$. Using Eq.~(6), the heating time scales for A0620-00 and XTE J1118+480 are $t_{\\rm heat}=3.5\\times 10^{15}$ s and $t_{\\rm heat}=6.1\\times 10^{15}$ s respectively. Although we do not know the ages of the black holes in the BH-LMXBs, we can assume $t_{\\rm BH} \\le P\/\\dot{P}$. It is because if $t_{\\rm BH}>P\/\\dot{P}$, it should be highly improbable for us to observe A0620-00 and XTE J1118+480 now as both systems would have collapsed very likely within the cosmological age of 13.7 Gyr ($4.3\\times 10^{17}$ s), unless a significant change of mass transfer rate has occurred. If we follow this assumption, we can find that the upper limit of $t_{\\rm BH}$ for the A0620-00 black hole is the same order of magnitude as the heating time scale while the upper limit of $t_{\\rm BH}$ for the XTE J1118+480 black hole is about 20 times smaller than the heating time scale (see Table 2). This may explain why the spike index for A0620-00 is smaller. Therefore, our results reveal a consistent picture for the dark matter spike model and provide a very good explanation for the abnormally fast orbital decay in the two closest BH-LMXBs. 
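The upper limits on $t_{\rm BH}$ quoted in Table 2 follow directly from $t_{\rm BH} \le P/\dot{P}$; the sketch below reproduces them, assuming (as the table values suggest) that the $1\sigma$ lower bounds on $|\dot{P}|$ are used:

```python
YEAR = 3.156e7   # one year [s]
DAY = 86400.0    # one day [s]

def age_upper_limit(P_day, Pdot_ms_yr):
    """Upper limit on the black hole age, t_BH <= P / |P_dot|, in seconds."""
    P = P_day * DAY
    Pdot = Pdot_ms_yr * 1e-3 / YEAR      # |dP/dt|, dimensionless
    return P / Pdot

# 1-sigma lower bounds on |P_dot| (ms/yr): 0.60 - 0.08 and 1.90 - 0.57 (Table 1)
t_A0620 = age_upper_limit(0.32301415, 0.60 - 0.08)
t_XTE   = age_upper_limit(0.16993404, 1.90 - 0.57)
print(f"A0620-00:      t_BH <= {t_A0620:.1e} s")   # ~ 1.7e15 s
print(f"XTE J1118+480: t_BH <= {t_XTE:.1e} s")     # ~ 3.5e14 s
```

Comparing these limits with $t_{\rm heat}$ (Eq.~(6)) gives the ratios discussed above: of order unity for A0620-00 and roughly a factor of $20$ below $t_{\rm heat}$ for XTE J1118+480.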
Note that our major conclusion still holds even if the stellar heating scenario is not a good analogy.\n\n\\subsection{The effect of dark matter annihilation}\nWe did not assume any dark matter annihilation in the above discussion. If dark matter annihilation rate is large enough, the central dark matter density would approach the constant saturation density $\\rho_{\\rm sat}=m_{\\rm DM}\/\\langle \\sigma v \\rangle t_{\\rm BH}$ \\citep{Lacroix} when $\\rho_0(r\/r_{\\rm in})^{-\\gamma}> \\rho_{\\rm sat}$, where $m_{\\rm DM}$ is the mass of a dark matter particle and $\\langle \\sigma v \\rangle$ is the annihilation cross section. If the orbital decays originate from the dynamical friction of dark matter with the saturation density (i.e. the orbital radius is smaller than the saturation radius), we can determine the upper limits of dark matter mass for this particular scenario. Taking the thermal annihilation cross section $\\langle \\sigma v \\rangle=2.2\\times 10^{-26}$ cm$^3$\/s predicted by standard cosmology \\citep{Steigman} and the upper limits of $t_{\\rm BH}$, we can get $m_{\\rm DM} \\le 14$ GeV for A0620-00 and $m_{\\rm DM} \\le 48$ GeV for XTE J1118+480 if the companion stars are moving in the dark matter saturation density region. In other words, if $m_{\\rm DM}>48$ GeV, the companion stars in both systems would be orbiting the corresponding black hole in the dark matter density spike region. Since many recent stringent constraints of thermal annihilating dark matter indicate $m_{\\rm DM} \\ge 100$ GeV \\citep{Ackermann,Chan2,Abazajian,Regis}, the dark matter density would not be saturated at the orbital positions in A0620-00 and XTE J1118+480.\n\n\\section{Discussion}\nThe existence of dark matter density spike surrounding a black hole has been suggested for more than two decades. However, no smoking-gun evidence has been obtained from observations. 
Here, we show that the effect of dynamical friction due to dark matter density spike can satisfactorily explain the fast orbital decay in the two closest BH-LMXBs. The resultant spike index is $\\gamma=1.7-1.8$, which is close to the value predicted by the stellar heating model ($\\gamma=1.5$) \\citep{Merritt2,Gnedin}. Although the BH-LMXBs considered here are not identical to the stellar heating scenario discussed in \\citet{Gnedin}, recent simulations of the compact-object inspirals show that the motion of the companion object would affect the distribution of the dark matter density spike surrounding an intermediate-mass black hole, especially for the mass ratio $q>10^{-3}$ \\citep{Kavanagh}. Therefore, we may also see similar results of the stellar heating effect in the BH-LMXBs. Note that although the dark matter density is changing in time during re-distribution, the dynamical friction expression used in Eq.~(1) is still applicable because the change is very slow in time \\citep{Kavanagh}. An overall consistent picture can be described as follows. When the black hole in a BH-LMXB is formed, the surrounding dark matter would be re-distributed to form a density spike (probably with an initial spike index $\\gamma \\approx 2-2.5$) \\citep{Gondolo}. However, the dynamical friction between dark matter and the companion star eventually help re-distribute the dark matter density spike again to reduce the spike index to approach $\\gamma=1.7-1.8$. The orbital period is also decreasing with a fast rate $\\sim 1$ ms yr$^{-1}$ due to dynamical friction. If the age of the black hole is larger than the heating time scale, the final spike index may change to a smaller value.\n\nWe can get very small uncertainties in $\\gamma$ because the uncertainties of the measured parameters are very small, especially for A0620-00. The uncertain factor $\\xi(\\sigma)$ would only change the resulting spike index slightly. 
Generally speaking, our results may suggest a possible evidence of the existence of dark matter density spike surrounding a black hole. It also suggests that a dark matter density spike might exist around a stellar-mass black hole ($M_{\\rm BH} \\sim 1-10M_{\\odot}$), but not only around a supermassive black hole \\citep{Gondolo,Merritt2,Gnedin,Lacroix2} or an intermediate-mass black hole \\citep{Lacroix,Dai,Li} as suggested in the past literature. Since no previous study has focused on the case of dark matter density spike around a stellar-mass black hole, the effect of dark matter dynamical friction has also been neglected. In fact, one recent study has proposed that the electron excess detected by the DAMPE experiment might originate from the annihilating dark matter density spike in A0620-00 \\citep{Chan3}. Therefore, analyzing the effect of dark matter dynamical friction in BH-LMXBs would open a new independent way for investigating the dark matter distribution near stellar-mass black holes.\n\nMoreover, if dark matter annihilation effect is important so that the central dark matter density becomes saturated, we can calculate the upper limits of dark matter mass for this particular scenario. Since the calculated upper limits of thermal annihilating dark matter mass $m_{\\rm DM}$ are generally smaller than the lower limits constrained from recent multi-wavelength studies, the companion stars should be orbiting inside the dark matter density spike rather than the saturation density. In other words, the effect of annihilation is not important in constraining the spike index.\n\nIn fact, analyzing the effect of dynamical friction of dark matter density spike in a binary system is not a new idea. Nevertheless, most of the related studies have focused on the binaries of the compact objects (e.g. black hole binaries) rather than the BH-LMXB systems \\citep{Eda,Pani,Yue,Dai,Li,Becker,Speeney,Kavanagh}. 
In compact binaries, both gravitational radiation and dynamical friction of dark matter are significant. Therefore, gravitational wave detection might be required to reveal the nature of dark matter, which might contribute extra uncertainties in the constrained parameters. Since optical and X-ray observations can give very precise measurements for most of the important physical parameters in BH-LMXBs, we anticipate that analyzing BH-LMXBs can better reveal the nature of the dark matter density spike surrounding a black hole. There are at least 18 black hole X-ray binaries in our Galaxy \\citep{Chen}, which can give rich information to constrain the nature of dark matter. For example, one nearby black hole X-ray binary Nova Muscae 1991 also shows an abnormally fast orbital decay $\\dot{P}=-20.7 \\pm 12.7$ ms yr$^{-1}$, although the uncertainty is quite large \\citep{Gonzalez3}. Future high quality measurements may be helpful to further confirm the existence of dark matter density spike in these black hole X-ray binaries. 
This kind of analysis would open an entirely new window for observations and theoretical studies to investigate dark matter astrophysics \\citep{Bertone2}.\n\n\\begin{table}\n\\caption{The measured parameters of A0620-00 and XTE J1118+480.}\n\\begin{tabular}{ |l|l|l|}\n \\hline\\hline\n & A0620-00 & XTE J1118+480 \\\\\n \\hline\n $M_{\\rm BH}$ & $5.86 \\pm 0.24 M_{\\odot}$ \\citep{Grunsven} & $7.46^{+0.34}_{-0.69}M_{\\odot}$ \\citep{Gonzalez} \\\\\n $q$ & $0.060 \\pm 0.004$ \\citep{Grunsven} & $0.024 \\pm 0.009$ \\citep{Khargharia} \\\\\n $K$ (km\/s) & $435.4 \\pm 0.5$ \\citep{Neilsen} & $708.8 \\pm 1.4$ \\citep{Khargharia} \\\\\n $i$ & $54^{\\circ}.1 \\pm 1^{\\circ}.1$ \\citep{Grunsven} & $73^{\\circ}.5 \\pm 5^{\\circ}.5$ \\citep{Khargharia} \\\\\n $P$ (day) & $0.32301415(7)$ \\citep{Gonzalez} & $0.16993404(5)$ \\citep{Gonzalez} \\\\\n $\\dot{P}$ (ms yr$^{-1}$) & $-0.60 \\pm 0.08$ \\citep{Gonzalez} & $-1.90 \\pm 0.57$ \\citep{Gonzalez} \\\\\n $d$ (kpc) & $1.06 \\pm 0.12$ \\citep{Gonzalez2} & $1.70 \\pm 0.10$ \\citep{Gonzalez2} \\\\\n \\hline\\hline\n\\end{tabular}\n\\end{table}\n\n\\begin{table}\n\\caption{ The orbital radius $a$, the calculated dark matter density at $a$, the spike index $\\gamma$, the radius of influence $r_{\\rm in}$, the heating time scale $t_{\\rm heat}$, and the upper limit of the black hole age $t_{\\rm BH}$ for each BH-LMXB based on the dark matter density spike model.}\n\\begin{tabular}{ |l|l|l|}\n \\hline\\hline\n & A0620-00 & XTE J1118+480 \\\\\n\\hline\n$a$ (AU) & $0.0169^{+0.0003}_{-0.0002}$ & $0.0118^{+0.0002}_{-0.0004}$ \\\\\n$\\rho_{\\rm DM}(a)$ (g cm$^{-3}$) & $7.65^{+1.62}_{-1.43} \\times 10^{-13}$ & $1.60^{+1.51}_{-0.73} \\times 10^{-11}$ \\\\\n$\\gamma$ & $1.71^{+0.01}_{-0.02}$ & $1.85^{+0.04}_{-0.04}$ \\\\\n$r_{\\rm in}$ (pc) & $5.41^{+0.10}_{-0.09}$ & $5.34^{+0.02}_{-0.06}$ \\\\\n$t_{\\rm heat}$ (s) & $3.5 \\times 10^{15}$ & $6.1 \\times 10^{15}$ \\\\\n$t_{\\rm BH}$ (s) & $\\le 1.7\\times 10^{15}$ & $\\le 3.5 \\times 10^{14}$ 
\\\\\n \\hline\\hline\n\\end{tabular}\n\\end{table}\n\n\\begin{figure}\n\\vskip 10mm\n\\includegraphics[width=140mm]{power2.eps}\n\\caption{The black and red solid lines indicate the relation between $\\gamma$ and $\\dot{P}$ for A0620-00 and XTE J1118+480 respectively. The horizontal dashed lines and dotted lines represent the mean values and the $1\\sigma$ limits of the observed orbital decay rates (black: A0620-00; red: XTE J1118+480).}\n\\label{Fig1}\n\\vskip 5mm\n\\end{figure}\n\n\\section{Acknowledgements}\nWe thank the anonymous referees for useful comments. The work described in this paper was partially supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. EdUHK 18300922).\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nMost of the energy released in the nuclear fission process appears in the kinetic energy of the fission fragments.\n A first order estimate of the magnitude of the total kinetic energy release is that of the Coulomb energy of the fragments at scission, i.e., \n\\begin{equation}\nV_{Coul}=\\frac{Z_{1}Z_{2}e^{2}}{r_{1}+r_{2}}\n\\end{equation}\nwhere Z$_{n}$, r$_{n}$ are the atomic numbers and radii of fragments 1 and 2. Recognizing that the fragments are deformed at scission, one can re-write equation 1 as \n\\begin{equation}\nTKE=\\frac{Z_{1}Z_{2}e^{2}}{1.9(A_{1}^{1\/3}+A_{2}^{1\/3})}\n\\end{equation}\nwhere the coefficient 1.9 (instead of the usual 1.2 - 1.3) represents the fragment deformation. For symmetric fission, Z$_{1}$=Z$_{2}$=Z\/2 and A$_{1}$ =A$_{2}$=A\/2, then we have\n\\begin{equation}\nTKE = (0.119)\\frac{Z^{2}}{A^{1\/3}}MeV\n\\end{equation}\nTrajectory calculations \\cite{raja} for alpha particle emission in fission have shown that the fission\n fragments are in motion at scission with a pre-scission kinetic energy of 7.3 MeV and an additive term representing this motion is needed. 
\n Thus we have the ``Viola systematics\" \\cite{vic} that say \n\\begin{equation}\nTKE = (0.1189\\pm 0.0011)\\frac{Z^{2}}{A^{1\/3}}+7.3(\\pm 1.5)MeV\n\\end{equation}\n\n\nThe deformed scission point fragments will contract to their equilibrium deformations and the energy\n stored in deformation will be converted into internal excitation energy. Thus we can define a related quantity, the total excitation energy , TXE, in fission as\n\\begin{equation}\nTXE=Q-TKE\n\\end{equation}where Q is the mass-energy release. One quickly realizes that these quantities depend\n on the mass split in fission which in turn, at low excitation energies, may reflect the fragment nuclear structure. The TXE is the starting point for calculations of the prompt neutron and gamma emission in fission, the yields of beta emitting fission fragments, reactor anti-neutrino spectra, etc. As such, it is a fundamental property of all fissioning systems and sadly not very well known.\n\nAs a practical matter, one needs to know the dependence of the TKE and TXE on neutron \nenergy for the neutron induced fission of technologically important actinide fissioning systems\n like $^{233}$U(n,f),$^{235}$U(n,f), and $^{239}$Pu(n,f). The first question we might pose is\n whether the TKE should depend on the excitation energy of the fissioning system. \n Does the energy brought in by an incident neutron in neutron induced fission appear\n in the fragment excitation energy or does it appear in the total kinetic energy? \n In a variety of experiments, one finds that increasing the excitation energy of the \nfissioning system does not lead to significant increases in the TKE of the fission \nfragments or changes in the fragment separation at scission. \\cite{VH}. However,\n there may be more subtle effects that render this statement false in some circumstances. 
\n For example, we expect, on the basis of the Coulomb energy systematics given above, that the TKE will be proportional to changes in the fission mass splits, which in turn can depend on the excitation energy.\n \nFor the technologically important reaction $^{235}$U(n,f), Madland \\cite{dave} summarizes the known data \\cite{straede, meadows, muller} with the following equations\n\\begin{equation}\n\\left\\langle T_{f}^{tot}\\right\\rangle =\\left( 170.93\\pm 0.07\\right) -\\left( 0.1544\\pm 0.02\\right) E_{n}(MeV)\n\\end{equation}\n\\begin{equation}\n\\left\\langle T_{p}^{tot}\\right\\rangle =\\left( 169.13\\pm 0.07\\right) -\\left( 0.2660\\pm 0.02\\right) E_{n}(MeV)\n\\end{equation}\nwhere E$_{n}$ is the energy of the incident neutron and T$_{f}^{tot}$ and T$_{p}^{tot}$ are the average total fission fragment kinetic energy (before neutron emission) and the average fission product kinetic energy after neutron emission, respectively. These quantities are related by the relation\n\\begin{equation}\n\\left\\langle T_{p}^{tot}(E_{n})\\right\\rangle =\\left\\langle T_{f}^{tot}(E_{n})\\right\\rangle \\left[ 1-\\frac{\\overline{\\nu _{p}}(E_{n})}{2A}\\left( \\frac{\\left\\langle A_{H}\\right\\rangle }{\\left\\langle A_{L}\\right\\rangle }+\\frac{\\left\\langle A_{L}\\right\\rangle }{\\left\\langle A_{H}\\right\\rangle }\\right) \\right]\n\\end{equation}\nThese data show a modest decrease in TKE with increasing excitation energy for the neutron energy interval E$_{n}$ = 1-9 MeV. There are no clearly identified changes in the TKE values near the second chance fission threshold, a feature that is important in semi-empirical models of fission such as that represented by the GEF code \\cite{khs}.\n\nIn this paper, we report the results of measuring the total kinetic energy release in the neutron induced fission of $^{235}$U for neutron energies E$_{n}$ = 3.2-50 MeV. 
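For orientation, the Viola systematics given in the Introduction and the Madland fit above can be cross-checked numerically at zero incident energy. This sketch is ours, for illustration only:

```python
# Zero-energy cross-check of the two TKE parameterizations quoted above
# for 235U(n,f): the Viola systematics evaluated for the compound nucleus
# 236U (Z=92, A=236), against the intercept of Madland's fit for <T_f^tot>.
Z, A = 92, 236
tke_viola = 0.1189 * Z**2 / A**(1.0 / 3.0) + 7.3   # MeV

def t_f_madland(e_n):
    """Madland's linear fit to the pre-neutron-emission TKE; e_n in MeV."""
    return 170.93 - 0.1544 * e_n

print(tke_viola)         # ~170.1 MeV
print(t_f_madland(0.0))  # 170.93 MeV; the two agree to within ~1 MeV
```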
The method used for the measurement is the 2E method, i.e., measurement of the kinetic energies of the two coincident fission products using semiconductor detectors. The time of flight of the neutrons inducing fission was measured, allowing deduction of their energy. The details of the experiment are discussed in Section II, while the experimental results and a comparison of the results with various models and theories are presented in Section III, with conclusions summarized in Section IV.\n\n\\section{Experimental}\n\nThis experiment was carried out at the Weapons Neutron Research Facility (WNR) at the Los Alamos Neutron Science Center (LANSCE) at the Los Alamos National Laboratory \\cite{Lis, Liso}. ``White spectrum\" neutron beams were generated from an unmoderated tungsten spallation source using the 800 MeV proton beam from the LANSCE linac. The experiment was located on the 15R beam line (15$^{\\circ}$-right with respect to the proton beam). The calculated (MCNPX) ``white spectrum\" at the target position is shown in figure 1 \\cite{snow}. The proton beam is pulsed, allowing one to measure the time of flight (energy) of the neutrons arriving at the experimental area.\n\nA schematic diagram of the experimental apparatus is shown in figure 2. The neutron beam was collimated to a 1 cm diameter at the entrance to the experimental area. At the entrance to the scattering chamber, the beam diameter was measured to be 1.3 cm. A fission ionization chamber \\cite{steve} was used to continuously monitor the absolute neutron beam intensities. The $^{235}$U target and the Si PIN diode fission detectors were housed in an evacuated, thin-walled aluminum scattering chamber. The scattering chamber was located $\\sim$ 3.1 m from the collimator, and $\\sim$ 11 m from the neutron beam dump. The center of the scattering chamber was located 16.46 m from the production target.\n\nThe $^{235}$U target consisted of a deposit of $^{235}$UF$_{4}$ on a thin C backing. 
The thickness of the $^{235}$U was 175.5 $\\mu$g $^{235}$U\/cm$^{2}$ while the backing thickness was 100 $\\mu$g\/cm$^{2}$. The isotopic purity of the $^{235}$U was 99.91 $\\%$. The target was tilted at 50 $^{\\circ}$ with respect to the incident beam.\n\nFission fragments were detected by two arrays of Si PIN photodiodes (Hamamatsu S3590-09) arranged on opposite sides of the beam. The area of the individual PIN diodes was 1 cm$^{2}$. The distance of the detectors from the target varied with angle from 2.60 cm to 4.12 cm. The coincident detector pairs were at approximately 45, 60, 90, 115, and 135 $^{\\circ}$. The alpha particle energy resolution of the diodes was 18 keV for the 5475 keV line of $^{241}$Am. \n\nThe time of flight of each interacting neutron was measured using a timing pulse from a Si PIN diode and the accelerator RF signal. Absolute calibrations of this time scale were obtained from the photofission peak in the fission spectra and the known flight path geometry. \n\nThe energy calibration of the fission detectors was done with a $^{252}$Cf source. We have used the traditional Schmitt method \\cite{hal}. Some have criticized this method, especially for PIN diodes. However, with our limited selection of detectors, we were unable to apply the methods of \\cite{moz} to achieve a robust substitute for the Schmitt method.\n\nThe measured fragment energies have to be corrected for energy loss in the $^{235}$UF$_4$ deposit and the C backing foil. This correction was done by scaling the energy loss correction given by the Northcliffe-Schilling energy loss tables \\cite{NS} to a measured mean energy loss of collimated beams of light and heavy $^{252}$Cf fission fragments in 100 $\\mu$g\/cm$^{2}$ C foils. The scaling factor that was used was a linear function of mass using the average loss of the heavy and light fission fragments as anchor points. The correction factors at the anchor points were 1.24 and 1.45 for the heavy and light fragments, respectively. 
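The linear-in-mass scaling just described can be written as a simple interpolation. In the sketch below, the anchor factors (1.45 light, 1.24 heavy) are from the text, but the anchor masses are nominal mean $^{252}$Cf fragment-group masses ($\approx 108$ and $\approx 143$ u) that we assume for illustration; they are not quoted in the text:

```python
# Linear-in-mass scaling factor for the Northcliffe-Schilling energy-loss
# correction, anchored at the measured mean losses of the light and heavy
# 252Cf fragment groups. Anchor MASSES are assumed nominal values.
A_LIGHT, F_LIGHT = 108.0, 1.45   # mean light-fragment mass (assumed), factor
A_HEAVY, F_HEAVY = 143.0, 1.24   # mean heavy-fragment mass (assumed), factor

def scale_factor(mass):
    """Correction factor interpolated (or extrapolated) linearly in mass."""
    slope = (F_HEAVY - F_LIGHT) / (A_HEAVY - A_LIGHT)
    return F_LIGHT + slope * (mass - A_LIGHT)

print(scale_factor(108.0))  # 1.45 at the light-fragment anchor
print(scale_factor(143.0))  # 1.24 at the heavy-fragment anchor
```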
Similar factors were obtained if the SRIM code \\cite{srim} was used to calculate dE\/dx. These large deviation factors from measured to calculated fission fragment stopping powers have been observed in the past \\cite{Knyazheva}, and represent the largest systematic uncertainty in the determination of the kinetic energies. \n\n\n\\section{Results and Discussion}\n\nThe measured average post-neutron emission fission product total kinetic energy release for the $^{235}$U(n,f) reaction (Table 1) is shown in Figure 3 along with other data and predictions \\cite{gunn, kapoor, stevenson}. The evaluated post-neutron emission data from Madland \\cite{dave} are shown as a dashed line while the individual pre-neutron emission measurements of \\cite{muller} are shown as points. The point at E$_{n}$ = 14 MeV is the average of \\cite{gunn} and \\cite{stevenson}. The slope of the measured TKE release (this work) is in rough agreement with the previous measurements \\cite{dave} at lower energies. Also shown are the predictions of the GEF model \\cite{khs}. GEF is a semi-empirical model of fission that provides a good description of fission observables using a modest number of adjustable parameters. The dashed line in Figure 3 is a semi-empirical equation (TKE = 171.5 - 0.1E$^{*}$ for E$^{*}$ $>$ 9 MeV) suggested by Tudora et al. \\cite{tudy}. Qualitatively, the decrease in TKE with increasing neutron energy reflects the increase in symmetric fission (with its lower associated TKE release) with increasing excitation energy. This general dependence is reflected in the GEF code predictions, with the slope of our data set being similar to the predictions of the GEF model but with the absolute values of the TKE release being substantially less. \n\nIn Figure 4, we show some typical TKE distributions along with Gaussian representations of the data. In general, the TKE distributions appear to be Gaussian in shape. 
This is in contrast to previous studies \\cite{PR,D}, which showed a sizable skewness in the distributions.\n\nIn Figure 5, we show the measured variance of the TKE distributions as a function of neutron energy, along with the GEF model predictions of the same quantity. The measured variances are larger than expected. \nAt low energies (near the second chance fission threshold) the observed variances show a dependence on neutron energy similar to that predicted by the GEF model, presumably reflecting the changes in variance with decreasing mass asymmetry. At higher energies (11-50 MeV) the variances are roughly constant with changes in neutron energy. Models \\cite{poop} would suggest that most of the variance of the TKE distribution is due to fluctuations in the nascent fragment separation at scission. The constancy of the variances is puzzling.\n\nUsing the Q values predicted by the GEF code, one can make a related plot (Fig. 6) of the TXE values in the $^{235}$U(n,f) reaction. The ``bump\" in the TXE at lower neutron energies is pronounced, and the dependence of the TXE upon neutron energy agrees with the GEF predictions although the absolute values are larger.\n\n\\section{Conclusions}\n\nWe conclude that: (a) For the first time, we have measured the TKE release and its variance for the technologically important $^{235}$U(n,f) reaction over a large range of neutron energies (3.2 - 50 MeV). (b) The dependence of the TKE upon E$_{n}$ seems to agree with semi-empirical models although the absolute value does not. (c) Understanding the variance and its energy dependence for the TKE distribution remains a challenge.\n\n\\begin{acknowledgments}\n\nThis work was supported in part by the Director, Office of Energy Research, Division of Nuclear Physics of the Office of High Energy and Nuclear Physics of the U.S. Department of Energy under Grant DE-FG06-97ER41026. 
One of us (WL) wishes to thank the [Department of Energy's]\n Institute for Nuclear Theory at the University of Washington for its hospitality\n and the Department of Energy for partial support during the completion of this work.\n This work has benefited from the use of the Los Alamos Neutron Science Center at the Los Alamos National Laboratory. This facility is funded by the U. S. Department of Energy under DOE Contract No. DE-AC52-06NA25396.\n \n \\end{acknowledgments}\n\\newpage\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe Milky Way (MW) galaxy is one of the most important laboratories for studying galaxy formation and cosmology,\ngiven the abundant information available from its well-resolved constituents \\citep{Bland-Hawthorn2016a}.\nIn the current hierarchical structure formation framework,\nthe properties of a galaxy are tightly connected to the properties of its dark matter halo.\nTo place the MW in the context of cosmological galaxy formation,\none usually relies on the estimated size of the MW halo according to a certain definition of the halo boundary and the corresponding enclosed mass.\n\nDespite many efforts dedicated to measuring the mass distribution \nin the virialized region of the MW halo \\citep{Wang2019b} in observations, \nmuch less attention has been paid to the very outskirts beyond the formal virial radius.\nIn addition to the normally higher incompleteness and larger measurement errors for tracers at large distances,\nthe lack of equilibrium in this region also blocks dynamical modeling attempts based on the steady-state assumption~\\citep{oPDFI,oPDFII} and thus requires better theoretical understanding.\n\nConventionally, most studies use the classical\nvirial definition (or its variants) derived from the spherical collapse model \\citep{Gunn1972},\nwhich marks out a radius by a fixed enclosed overdensity under some idealized assumptions.\nHowever, a halo in the real universe is not 
abruptly separated from the neighboring environment at this specific radius.\nIn fact, the mass distribution within and around a halo is a continuous mixture of\nthe virialized content, the infalling materials, and background materials receding with the rest of the universe.\nThis fact has inspired people to further investigate other boundaries better separating these components\n(see \\citealt{Fong2020} for a detailed summary),\nsuch as the splashback radius \\citep{Adhikari2014,Diemer2014,Diemer2017,Aung2021}, \nthe depletion radius \\citep{Fong2020},\nand the turnaround radius \\citep[e.g.,][]{Gunn1972,Cuesta2008,Pavlidou2014,Faraoni2015},\nfrom the inside out.\nUnlike the spherical overdensity-based definition,\nthe latter boundaries are more directly associated with dynamical processes,\nand hence detectable from the kinematics of tracers \\citep[e.g.,][]{Deason2020,Bose2020,Tomooka2020}. \nThis is a particular advantage because \nwe can measure the velocity of tracers, e.g., nearby galaxies, even at a large distance,\nbut cannot observe the density directly.\n\nThe different halo radii definitions also serve to provide different insights on the structure and evolution of halos. In a recent work, \\citet{Fong2020} introduced the \\textit{inner depletion radius}, $r_{\\mathrm{id}}$, defined at the location of the maximum mass inflow rate, as the outer edge of the \\emph{growing} part of a halo. \nPractically, this radius is identifiable at the location of the maximum infall velocity (see Fig.~11 of \\citealt{Fong2020}) which is the approach we follow in this work.\nWith $r_{\\mathrm{id}}$ defined at the maximum inflow location, matter within $r_{\\mathrm{id}}$ gets deposited onto the halo as the infall rate slows down towards the inner halo.\nOutside this radius, however, matter is being pumped into the halo and gradually depleted due to the increasing infall rate towards the inner region. 
\nThis process leads to the formation of a relatively flat shoulder in the density profile and a trough in the bias profile around the $r_{\\mathrm{id}}$ scale (\\citealt{Fong2020}).\nThus, this location marks the transition between the halo being built up and the environment being depleted by halo accretion.\nMoreover, the enclosed density within this radius is found to have an approximately universal value, which enables us to easily estimate the enclosed mass.\n\nFrom the perspective of particle orbits, $r_{\\mathrm{id}}$ can be interpreted as a boundary enclosing a more complete population of splashback orbits than the customary \\textit{splashback radius} defined at the steepest slope radius, $r_{\\rm sp}$. The latter is based on the steepening in the slope resulting from the buildup of particles at their first orbital apogees, but it is found to enclose only about 75\\% of the splashback orbits \\citep{Diemer2017}.\nHence, $r_{\\mathrm{id}}$ is normally outside the splashback radius, $r_{\\rm sp}$, with $r_{\\mathrm{id}}\\approx 1.7 \\sim 2.6 r_{\\rm sp}$.\\footnote{This relation is obtained by combining the relations $r_{\\mathrm{id}}\\approx 0.85 r_{\\rm cd}$ and $r_{\\rm cd}=2-3 r_{\\rm sp}$ in \\citet{Fong2020}, where $r_{\\rm cd}$ is the characteristic depletion radius defined at the minimum bias.}\nInterestingly, this scale is shown to be very close to (or $\\sim\\!\\!
15$ percent smaller than) the location of the minimum in the halo bias profile \\citep{Han2018} around the trans-linear scale, and almost identical to the optimal halo exclusion radius measured by \\citet{Garcia2020} that defines the geometrical boundary of non-overlapping halos in the halo model description of the large-scale structure.\n\nCompared with the virial radius, $r_{\\mathrm{id}}$ is roughly located at $1.6 \\R{200m}$, where $\\R{200m}$ is the radius within which the average density is 200 times the mean background density.%\n\\footnote{\nSimilarly, $\\R{200c}$ and $\\R{vir}$ are defined as the radii within which the average density is 200 and $\\Delta_\\mathrm{vir}$ times the critical density of the universe, respectively, where $\\Delta_\\mathrm{vir}$ is the virial overdensity predicted from the spherical collapse model \\citep{Bryan1998}.\n}\nBy definition, the inner depletion radius at maximum infall is enclosed within the turnaround radius, where the radial velocity reaches zero. The turnaround radius is of important dynamical significance, as it separates infalling material from the expansion of the universe, and can serve as a probe of both halo evolution and the background cosmology~\\citep[e.g.,][]{Gunn1972,Cuesta2008,Pavlidou2014,Faraoni2015}.\n\nIn this work, we present the first measurement of the inner depletion radius of the MW using the motion of nearby dwarf galaxies, along with the turnaround radius measured from the same data set.\nAlthough these radii were first introduced based on dark matter, galaxies are found to closely trace the underlying phase space structures of dark matter \\citep[e.g.,][]{Han2020,Deason2020}, especially in the outskirts of haloes. As a result, we will use galaxies as tracers to probe these radii. 
The measurements are then compared directly with those using galaxies in hydrodynamical simulations, as well as with previous results using dark matter particles.\nUsing the scaling relation learned from halos in simulations,\nthe enclosed masses within these boundaries are also estimated. As these boundaries directly quantify the ongoing evolution of the MW halo, the measurements can provide crucial information for better placing the MW into a cosmological context of halo evolution and galaxy formation.\n\nThe structure of this letter is as follows.\nWe present the measurements of the MW's outer edges in \\refsec{sec:mw}, \ninterpret the results with simulations in \\refsec{sec:validate},\ncompare them with previous measurements in \\refsec{sec:compare},\nand summarize in \\refsec{sec:conclusion}.\nIn addition, we provide the details of measuring the velocity profile in Appendix \\ref{sec:gp}\nand selecting simulation sample in Appendix \\ref{sec:simu}.\n\n\n\\section{The outer edges of the MW}\\label{sec:mw}\n\n\nWe use nearby galaxies within $3\\mathrm{Mpc}$ of the MW, compiled from the catalog of the Local Volume galaxies \n\\citep{Karachentsev2013,Karachentsev2019}%\n\\footnote{\\url{http:\/\/www.sao.ru\/lv\/lvgdb\/tables.php}, updated on 2020-08-12}\nand the catalog of Nearby Dwarf Galaxies \\citep{McConnachie2012}%\n\\footnote{\\url{http:\/\/www.astro.uvic.ca\/\\~alan\/Nearby\\_Dwarf\\_Database.html}, updated on 2021-01-19}.\nThe observed Heliocentric line-of-sight velocities are converted into radial velocities in the Galactocentric rest frame.\nThe proper motions from the catalog of Nearby Dwarf Galaxies \n(mostly measured by \\citealt{McConnachie2020}) are used for the conversion when available.\nFor the remaining galaxies, we ignore their proper motions in the conversion considering their large distance.\nThe observational error of the line-of-sight velocity is typically smaller than several $\\mathrm{km \\, s}^{-1}$,\nwhich is negligible in this task 
compared with the bulk motion at several tens or hundreds of $\\mathrm{km \\, s}^{-1}$ level.\n\n\n\n\\begin{figure}[bt]\n\\centering\n\\includegraphics[width=0.47\\textwidth]{LG_gals_vtot.pdf}\n\\includegraphics[width=0.47\\textwidth]{mw_profile_new.pdf}\n\\caption{%\n Top panel:\n Radial velocities of galaxies within 3 Mpc of the MW.\n Galaxies within $600\\mathrm{kpc}$ from M31 are marked as\n open circles and discarded in the analysis.\n The mean velocity profile (green solid curve) and its $1\\sigma$ uncertainty (green band) are computed from the remaining galaxies (filled circles).\n The measured MW edges including\n the \\textit{inner depletion radius} (i.e., location of maximum infall), $r_\\mathrm{id}$, \n and the \\textit{turnaround radius}, $r_\\mathrm{ta}$, are indicated by star symbols.\n The Hubble flow, $\\vr=H_0 r$, is shown by the dotted line for reference.\n Bottom panel:\n The MW mass profile.\n The star symbols indicate the estimated MW masses within the corresponding edges based on their typical enclosed densities in simulation. The estimates calibrated using a fiducial sample (black) or LG-like sample (gray, slightly shifted horizontally for clarity) of halos in simulation are shown separately (see the text for detail). Previous measurements of the inner MW mass profile using stars, globular clusters, and satellite galaxies \n are shown for comparison. As an extrapolation of the inner profile \\citep{Li2020}, the long-dashed (dash-dotted) curve shows the mean mass profile of the fiducial (LG-like) halos in the TNG100 simulation.\n The error bars or shades correspond to the 68\\% confidence intervals.\n}\n\\label{fig:local_group}\n\\end{figure}\n\n\nThe Galactocentric distances and radial velocities, $\\{r, \\vr\\}$, of these galaxies are shown in\n\\reffig{fig:local_group}. \nIn this work, we exclude galaxies within 600 $\\mathrm{kpc}$ from M31 (about $1.5 \\R{200m, M31}$) to reduce the potential influence of our massive neighbor. 
\nWe have also checked that our results are not very sensitive to the particular choice of this radius of exclusion.\nChanging the exclusion radius from 550 to 850 kpc only leads to a variation $\\lesssim 2\\%$ in the measured edges, while using a smaller value (e.g., 400 kpc) leads to slightly larger estimates (by about 5\\% in $r_{\\mathrm{id}}$ and 10\\% in $r_{\\mathrm{ta}}$). Note that the six dwarf galaxies (Eridanus 2, Leo T, Phoenix, NGC 6822, Leo A, and Cetus) that lie in our inferred infall zone between 300 and 840 kpc are clearly not affiliated with M31, considering their large angular separation and distance from M31.\n\nIn order to extract the mean radial velocity profile, we model the distribution of radial velocities as a Gaussian distribution with a mean velocity, $\\bar v_r(r)$, and a velocity dispersion, $\\sigma_r (r)$. To obtain smooth estimates of the two, we adopt an iterative Gaussian process regression~\\citep{Rasmussen2005} method, which we briefly outline here but leave further details to Appendix \\ref{sec:gp}. Specifically, we first extract a rough estimate of the mean velocity profile $\\bar v_r(r)$ assuming a constant $\\sigma_r$ using Gaussian process regression. The estimated $\\bar v_r(r)$ profile is then combined with the observed velocities to obtain a radius-dependent velocity dispersion profile, $\\sigma_r(r)$. Finally, the $\\bar v_r(r)$ profile and its uncertainty are refined by fitting a Gaussian process with the estimated $\\sigma_r(r)$ profile as the noise term, in addition to a kernel that determines the uncertainty on the mean profile $\\bar v_r(r)$. 
By this process, we self-consistently obtain smooth estimates of $\\bar v_r(r)$, $\\sigma_r(r)$ as well as the uncertainty on $\\bar v_r(r)$.\n\nThe fitted $\\bar v_r(r)$ profile and its uncertainty are shown in the top panel of \\reffig{fig:local_group}.%\n\\footnote{See also Fig.~11 of \\citet{Deason2020} for a similar figure, where the $\\bar v_r(r)$ profile was obtained via the Savitzky--Golay smoothing algorithm and a slightly different galaxy sample. However, \\citet{Deason2020} focused on the slope of the $\\bar v_r$ profile rather than $\\bar v_r$ itself.}\nThe inner part of the profile is flat and consistent with zero net radial flow, as expected for the virialized part of the halo where the density remains largely static. On the largest scale, the positive radial velocity is dominated by the Hubble expansion of the universe. The profile crosses zero at the turnaround radius $r_{\\mathrm{ta}} \\simeq 840\\, \\mathrm{kpc}$, within which matter starts to fall towards the halo. Within this infall zone but outside the virialized region, the mean $\\vr$ profile exhibits a clear minimum that defines the depletion radius $r_{\\mathrm{id}} \\simeq 560\\, \\mathrm{kpc}$. The matter in between $r_{\\mathrm{id}}$ and $r_{\\mathrm{ta}}$ is being pumped into the region inside $r_{\\mathrm{id}}$, so $r_{\\mathrm{id}}$ unveils precisely the border where the MW is feeding on the environment. 
The amplitude of the maximum infall velocity is relatively small compared to the scatter of the velocities, revealing that the MW halo is growing at only a very low rate.\n\nThe Gaussian process also enables a probabilistic way to assess the uncertainty in measuring the two characteristics, as it provides a posterior distribution of the entire profile.\nWe sample $10^4$ random realizations from the posterior of the velocity profile and measure the halo edges in each realization.\nIn most ($>95\\%$) realizations, an infall region is detectable with $300\\mathrm{kpc} < r_{\\mathrm{id}} < 1000 \\mathrm{kpc}$.\nTaking their average and dispersion, we locate the inner depletion radius at $r_{\\mathrm{id}}=559\\pm 107\\, \\mathrm{kpc}$ and the turnaround radius at $r_{\\mathrm{ta}}=839\\pm 121\\, \\mathrm{kpc}$. \nThe maximum infall velocity is estimated to be $v_\\mathrm{inf, max}=-46_{-39}^{+24}\\, \\mathrm{km\\, s^{-1}}$, suggesting that our tentative detection of the infall zone is only marginally significant, at about the $2\\sigma$ level. \nThis is due to both the weakness of the infall zone around the MW and the size of the uncertainty given the limited tracer sample size, the latter of which can be reduced by enlarging the nearby galaxy sample in future observations. Despite this, the infall region is also clearly detectable using other smoothing techniques such as the moving average or the Savitzky--Golay smoothing algorithm \\citep{Deason2020}.\n\nIt is worth pointing out that the above turnaround radius encloses M31 (at $r=780 \\mathrm{kpc}$), the MW's massive companion.\nThough M31 and its satellites are excluded from the analysis, M31 can perturb the velocity flow pattern in the vicinity and make the isovelocity surface anisotropic (e.g., \\citealt{Deason2020}).\nTherefore, our estimate of the turnaround radius should be viewed as a rough estimate in a spherically averaged sense. 
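The iterative procedure described above can be sketched with a minimal numpy-only Gaussian process (squared-exponential kernel, heteroscedastic noise). The mock data, kernel scales, and clipping floor below are our illustrative choices, not the values used in this work:

```python
import numpy as np

def rbf(a, b, amp, ell):
    """Squared-exponential covariance between radius vectors a and b."""
    return amp**2 * np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell)**2)

def gp_mean(r_tr, y, r_pr, noise_var, amp=60.0, ell=200.0):
    """GP posterior mean with per-point (heteroscedastic) noise variance."""
    K = rbf(r_tr, r_tr, amp, ell) + np.diag(noise_var)
    return rbf(r_pr, r_tr, amp, ell) @ np.linalg.solve(K, y)

# Mock tracers resembling the figure: a flat virialized core, an infall
# dip, then the Hubble flow (all numbers here are illustrative).
rng = np.random.default_rng(1)
r = np.sort(rng.uniform(50.0, 3000.0, 300))
truth = np.where(r < 350.0, 0.0,
        np.where(r < 840.0, -46.0 * np.sin(np.pi * (r - 350.0) / 490.0),
                 70.0 * (r - 840.0) / 1000.0))
vr = truth + rng.normal(0.0, 40.0, r.size)

# Pass 1: mean profile with a constant noise guess.
m1 = gp_mean(r, vr, r, np.full(r.size, 40.0**2))
# Pass 2: smooth the squared residuals into a dispersion profile.
s2 = gp_mean(r, (vr - m1)**2, r, np.full(r.size, 2.0 * 40.0**4),
             amp=2000.0, ell=400.0)
s2 = np.clip(s2, 10.0**2, None)
# Pass 3: refit the mean with the radius-dependent noise.
grid = np.linspace(100.0, 2900.0, 281)
m2 = gp_mean(r, vr, grid, s2)

i_id = np.argmin(m2)                             # maximum-infall location
r_id = grid[i_id]
r_ta = grid[i_id + np.argmax(m2[i_id:] > 0.0)]   # first zero crossing beyond
```

On this mock profile the recovered edges land near the input dip ($\sim$600 kpc) and zero crossing ($\sim$840 kpc), mimicking the behavior of the full analysis.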
\n\n\n\\section{Interpreting the measurements with simulations}\\label{sec:validate} \n\nThe above measurements are compared with those of simulated halos in the state-of-the-art cosmological hydrodynamical simulation Illustris TNG100, as detailed in Appendix~\\ref{sec:simu}.\nFollowing similar procedures to those in \\refsec{sec:mw},\nfor each MW-sized halo in TNG100, \nwe identify the turnaround radius, $r_{\\mathrm{ta}}$, \nas the furthest zero-velocity radius along the mean radial velocity profile,\nand the inner depletion radius, $r_{\\mathrm{id}}$, as the furthest local minimum point within $r_{\\mathrm{ta}}$.\n\nUnlike for the MW, for some halos (especially low-mass ones) we fail to locate an $r_{\\mathrm{id}}$\nbeyond the halo virial radius\n$\\R{vir}$ due to the lack of an infall region in the velocity profile \n(see also, e.g., \\citealt{Cuesta2008,Fong2020}).\nWe exclude those halos without a detectable infall zone ($n=1517$) from the parent sample ($n=4681$) of MW-sized halos. We emphasize that the differing strength of the infall zone around halos of a given mass is itself an important diagnostic of the dynamical state and environment of the halo. By definition, halos without an infall region have halted their mass growth, while those with one are still accreting.\n\nOur MW is observed to be embedded in a relatively cold dynamical environment, which we find to have a significant influence on the outer halo profile. To make a fair comparison, we select a \\emph{fiducial} sample of halos ($n=2153$) with similar masses and dynamical environments to the MW. Out of the fiducial sample, we further select an \\emph{LG-like} sample ($n=35$) with the additional requirement of having a close massive companion, as detailed in Appendix~\\ref{sec:enviro}. 
\n\n\n\\begin{figure*}[hbtp]\n\\centering\n\\includegraphics[width=0.95\\textwidth]{radius_density.pdf}\n\\caption{%\n Halo edges and corresponding mean enclosed densities of simulated halos.\n The fiducial halo sample and the LG-like (paired) halos\n are shown as solid circles and squares, respectively.\n The fiducial sample is further divided into three halo mass bins, which are shown as open circles.\n The symbols and error bars correspond to the median and the $50\\pm34$th percentiles, respectively, with those of the fiducial sample also indicated by horizontal lines and bands for ease of comparison.\n In the top panels, the measurements of MW edges are also shown as star symbols for reference.\n}\n\\label{fig:estimate}\n\\end{figure*}\n\n\\begin{figure}[hbtp]\n\\centering\n\\includegraphics[width=0.47\\textwidth]{mah.pdf}\n\\caption{%\n Median mass growth history of the LG-like halos.\n The sample is divided into two equal subsets by $r_\\mathrm{id}\/\\R{200m}$.\n The shaded bands show the interval between the 20th to 80th percentiles.\n}\n\\label{fig:mah}\n\\end{figure}\n\nFor the fiducial sample, as shown in \\reffig{fig:estimate},\n$r_{\\mathrm{id}} \\sim 1.6 \\R{200m}$ with a mean enclosed density $\\bar\\rho( \\R{200m}$ are remarkably\nuniversal when the radius is normalized by $\\R{200m}$ (\\citealt{Diemer2014}, see also \\reffig{fig:vsig_profile} in Appendix),\nwhich allows us to make profile extrapolation with a reasonable precision.\nWe rescale the density profiles of the aforementioned simulated halos\nto the $\\R{200m,MW}$ measured in \\citet{Li2020}, \nas $\\rho_\\mathrm{scaled}(r')=\\rho_\\mathrm{original}(\\frac{r'}{\\R{200m,MW}}\\R{200m})$. \nTo take the uncertainty in $\\R{200m,MW}$ into account, the $\\R{200m,MW}$ value used to rescale each halo is drawn randomly from the posterior distribution of $\\R{200m,MW}$ each time. 
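The rescaling step just described can be sketched as follows; this is an illustrative implementation of the stated formula $\rho_\mathrm{scaled}(r')=\rho_\mathrm{original}(\frac{r'}{R_\mathrm{200m,MW}}R_\mathrm{200m})$, with a power-law profile and placeholder posterior numbers standing in for the simulated halos and the measured $R_\mathrm{200m,MW}$:

```python
import numpy as np

def rescale_profile(r_eval, r_grid, rho, r200m_halo, r200m_mw):
    """Evaluate rho_scaled(r') = rho_original(r'/R_200m,MW * R_200m)
    at radii r_eval by mapping them back onto the halo's own radial grid."""
    r_orig = r_eval / r200m_mw * r200m_halo
    return np.interp(r_orig, r_grid, rho)

# To propagate the uncertainty in R_200m,MW, each halo is rescaled with a
# value drawn from the posterior (stood in here by a normal distribution
# with illustrative, not measured, parameters in kpc).
rng = np.random.default_rng(0)
r200m_mw_draws = rng.normal(240.0, 15.0, size=100)
```

For a self-similar power-law profile, rescaling simply remaps the radius, which makes the implementation easy to check against the analytic form.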
\nThe extrapolated profiles for the fiducial and LG-like halos are quite close within $r_{\mathrm{id}}$,\nwhile the mass at larger scales for the LG-like halos is significantly higher due to the presence of the companion halo.\nBoth profiles are consistent with the mass estimates at our measured outer edges, although slightly closer to those adopting the fiducial enclosed densities. Note that the enclosed density within $r_{\mathrm{id}}$ depends mostly on the location of $r_{\mathrm{id}}\/\R{200m}$ and is not sensitive to how the halo is selected, as the profiles are largely universal around this scale. The slightly lower $M(1\/2$ have attracted a great deal of attention, since they exhibit a quantum phase transition between intriguing ground states that are manifested in the respective magnetization curves as quantized magnetization plateaux and Luttinger spin liquids \cite{yam99,sak99,hon00,yam00,sak02,ten11}. The intermediate magnetization plateaux of the mixed spin-(1\/2,$S$) Heisenberg chains should obey the quantization condition known as the Oshikawa--Yamanaka--Affleck (OYA) rule $m_s-m$ = integer, where $m_s = S + 1\/2$ and $m$ are the total spin and total magnetization per elementary unit \cite{oya97}. According to the OYA rule, one possible way to increase the total number of magnetization plateaux may consist in increasing the size of the constituent spin $S$. It should be stressed, however, that the OYA criterion provides just a necessary but not a sufficient condition for the presence of a magnetization plateau, whose actual existence still has to be verified by explicit calculations. \n\nAny bipartite quantum ferrimagnet (irrespective of spin magnitude and spatial dimensionality) should also satisfy the Lieb-Mattis (LM) theorem \cite{lie62}, which fixes the total magnetization per unit cell to $m = S - 1\/2$ within the zero-field ground state of the ferrimagnetic mixed spin-(1\/2,$S$) Heisenberg chains. 
Hence, the OYA criterion in combination with the LM theorem suggests that the ferrimagnetic mixed spin-(1\/2,$S$) Heisenberg chains may display one and just one quantized magnetization plateau (regardless of the spin size $S$) at the fractional value of the total magnetization $m\/m_s = (2S-1)\/(2S+1)$, normalized with respect to its saturation value. In the present work we will provide a survey of the zero-temperature magnetization curves of the ferrimagnetic mixed spin-(1\/2,$S$) Heisenberg chains by considering a few different quantum spin numbers $S=1$, $3\/2$, $2$ and $5\/2$, which will corroborate all of the aforementioned features of this paradigmatic class of quantum spin chains. \n\n\section{Model and method}\n\nLet us consider the mixed spin-$s$ and spin-$S$ quantum Heisenberg chain with regularly alternating spins $s=1\/2$ and $S>1\/2$ given by the Hamiltonian\n\begin{eqnarray}\n\hat{\cal H} = J \sum_{j=1}^L \hat{\bf S}_j \cdot (\hat{\bf s}_j + \hat{\bf s}_{j+1}) - h \sum_{j=1}^L (\hat{S}_j^z + \hat{s}_j^z),\n\label{ham}\n\end{eqnarray}\nwhere $\hat{\bf s}_j \equiv (\hat{s}_j^x,\hat{s}_j^y,\hat{s}_j^z)$ and $\hat{\bf S}_j \equiv (\hat{S}_j^x,\hat{S}_j^y,\hat{S}_j^z)$ denote the usual spin-1\/2 and spin-$S$ operators, respectively. The first term entering the Hamiltonian (\ref{ham}) takes into account the antiferromagnetic Heisenberg interaction $J>0$ between the nearest-neighbor spins, while the second term, with $h = g \mu_{\rm B} H$ incorporating the equal Land\'e g-factors $g_s = g_S = g$ and the Bohr magneton $\mu_{\rm B}$, accounts for the Zeeman energy of the individual magnetic moments in an external magnetic field. 
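As a quick sanity check of the single-plateau prediction, the OYA condition and the LM plateau fraction can be evaluated symbolically; this is an illustrative sketch of the arithmetic, not part of the DMRG pipeline:

```python
from fractions import Fraction

def lm_plateau_fraction(S):
    """Normalized height m/m_s of the Lieb-Mattis plateau for the mixed
    spin-(1/2, S) chain, with m = S - 1/2 and m_s = S + 1/2."""
    m = S - Fraction(1, 2)
    m_s = S + Fraction(1, 2)
    # OYA quantization condition: m_s - m must be an integer (here it is 1)
    assert (m_s - m).denominator == 1
    return m / m_s

# plateau heights for the spin values studied here
plateaus = {S: lm_plateau_fraction(Fraction(S)) for S in ("1", "3/2", "2", "5/2")}
```

The returned fractions reproduce $m/m_s = (2S-1)/(2S+1)$: $1/3$, $1/2$, $3/5$ and $2/3$ for $S=1$, $3/2$, $2$ and $5/2$.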
It is noteworthy that the overall chain length is $2L$, as the elementary unit contains two spins, whereas the translational invariance is ensured by the periodic boundary condition $s_{L+1} \equiv s_1$.\n\nOne has to turn to an accurate numerical method in order to get a reliable survey of the magnetization processes of the ferrimagnetic mixed spin-(1\/2,$S$) Heisenberg chains, since the Hamiltonian (\ref{ham}) is not integrable. For this purpose, we have implemented density-matrix renormalization group (DMRG) calculations from the ALPS project \cite{bau11}, which can be straightforwardly used to obtain the lowest-energy eigenvalue $E(T_{tot}^z, L, h=0)$ of the ferrimagnetic mixed-spin Heisenberg chain within each sector with the total spin $T_{tot}^z = \sum_{j=1}^L (S_j^z + s_j^z)$ in a zero magnetic field ($h=0$). The lowest-energy eigenstate of the ferrimagnetic mixed spin-(1\/2,$S$) Heisenberg chains in a non-zero magnetic field can be subsequently calculated from the formula $E(T_{tot}^z, L, h) = E(T_{tot}^z, L, h=0) - h T_{tot}^z$, because the total spin $T_{tot}^z$ is a conserved quantity owing to the commutation of the respective operator \nwith the Hamiltonian (\ref{ham}). The magnetic field inducing a transition between the lowest-energy eigenstates with the total spins $T_{tot}^z$ and $T_{tot}^z+1$ then readily follows as $h = E(T_{tot}^z+1, L, h=0) - E(T_{tot}^z, L, h=0)$. In this way one may obtain accurate numerical results for the zero-temperature magnetization curves. 
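The construction of the magnetization curve from the zero-field sector energies can be sketched in a few lines; the sector energies below are illustrative placeholder numbers, not DMRG output:

```python
def transition_fields(E0):
    """Transition fields of the zero-temperature magnetization curve.

    E0 : lowest zero-field energies E(T, L, h=0) indexed by the total
         spin T = 0, 1, ..., T_max.

    Since E(T, L, h) = E(T, L, 0) - h*T, the ground state switches from
    total spin T to T+1 at h(T) = E(T+1, L, 0) - E(T, L, 0), provided
    E0 is convex in T (so the transition fields increase monotonically).
    """
    return [E0[T + 1] - E0[T] for T in range(len(E0) - 1)]
```

For example, `transition_fields([0.0, -2.0, -3.0, -2.5, 0.0])` gives the fields `[-2.0, -1.0, 0.5, 2.5]`; between two consecutive fields the magnetization sits on the corresponding $T_{tot}^z$ step, which is how the plateaux in the curves below arise.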
To avoid extrapolation due to finite-size effects we have performed DMRG simulations for a sufficiently large system size with up to $L=64$ units (128 spins), whereas adequate numerical accuracy was achieved through 16 sweeps at the targeted system size while increasing the number of kept states up to 1200 during the final sweeps.\n\n\n\section{Results and discussion}\n\n\begin{figure}[t]\n\includegraphics[width=1.05\columnwidth]{fig1a.eps}\n\includegraphics[width=1.05\columnwidth]{fig1b.eps}\n\vspace{-0.8cm}\n\caption{The magnetization (left panel) and susceptibility (right panel) of the mixed spin-(1\/2,$S$) Heisenberg chains as a function of the magnetic field for four different spin values: (a)-(b) $S=1$; (c)-(d) $S=3\/2$; (e)-(f) $S=2$; (g)-(h) $S=5\/2$. The displayed results were obtained from DMRG simulations of a finite-size chain with $L=64$ units (128 spins).}\n\label{fig1}\n\end{figure}\n\nLet us proceed to a discussion of the zero-temperature magnetization curves of the ferrimagnetic mixed spin-(1\/2,$S$) Heisenberg chains, which are displayed in the left panels of Fig.~\ref{fig1} for a few different quantum spin numbers $S$=1, 3\/2, 2 and 5\/2. It is quite evident from Fig.~\ref{fig1} that all considered mixed-spin Heisenberg chains indeed exhibit exactly one intermediate magnetization plateau at the fractional value $m\/m_s =(2S-1)\/(2S+1)$, which is consistent with the gapped LM ferrimagnetic ground state. The intermediate plateau due to LM ferrimagnetism breaks down at a quantum phase transition invoked by the critical magnetic field $h_c$, which closes the energy gap above the ferrimagnetic ground state. It is noteworthy that the height of the LM plateau monotonically increases with the quantum spin number $S$, and so does its width, which terminates at the critical field $h_c = 1.76J$ for $S=1$, $h_c = 2.84J$ for $S=3\/2$, $h_c = 3.88J$ for $S=2$ and $h_c = 4.91J$ for $S=5\/2$. 
Above the critical magnetic field $h>h_c$ the ferrimagnetic mixed spin-(1\/2,$S$) Heisenberg chains pass into the Luttinger spin liquid, where the magnetization rises continuously with the magnetic field until another quantum critical point is reached at the saturation field $h_s = J(1 + 2S)$. The asymptotic behavior of the magnetization in the vicinity of both quantum phase transitions is governed by the relations $m \propto \sqrt{h - h_c}$ for $h \to h_{c}^{+}$ and $m \propto \sqrt{h_s - h}$ for $h \to h_{s}^{-}$. Owing to this fact, the quantum phase transitions driven by the magnetic field should also be reflected in an anomalous behavior of the magnetic susceptibility $\chi$ close to the quantum critical points: $\chi \propto 1\/\sqrt{h - h_c}$ for $h \to h_{c}^{+}$ and $\chi \propto 1\/\sqrt{h_s - h}$ for $h \to h_{s}^{-}$. In accordance with this statement, the magnetic-field dependences of the susceptibility shown in the right panels of Fig.~\ref{fig1} furnish evidence for both field-induced quantum phase transitions towards the Luttinger spin liquid through the observed divergence of the magnetic susceptibility. \n\n\n\section{Conclusions}\n\nThe zero-temperature magnetization curves of the ferrimagnetic mixed spin-(1\/2,$S$) Heisenberg chains were calculated with the help of the DMRG method for several values of the quantum spin number $S$. It has been verified that, due to the gapped LM ferrimagnetic ground state, the magnetization curves involve one and just one intermediate plateau at the fractional magnetization $m\/m_s =(2S-1)\/(2S+1)$, \nwhich breaks down at a quantum phase transition towards the Luttinger spin liquid driven by the external magnetic field. Subsequently, the magnetization continuously rises with increasing magnetic field within the Luttinger spin-liquid phase until it reaches the full moment at the saturation field $h_s = J(1 + 2S)$, closely connected with another field-induced quantum phase transition. 
It has been demonstrated that the magnetization shows a cusp and the susceptibility diverges in the close vicinity of both quantum critical points. Besides, it can be concluded that a rising quantum spin number $S$ increases both the height and the width of the ferrimagnetic LM plateau in the magnetization curve of the mixed spin-(1\/2,$S$) Heisenberg chains, while the magnetic-field range corresponding to the gapless Luttinger spin-liquid phase is conversely reduced. Last but not least, it is worth noticing that the theoretical implications of the present work are of obvious relevance for the series of bimetallic coordination compounds MM'(pba)(H$_2$O)$_3$ $\cdot$ 2H$_2$O \cite{kah87} and MM'(EDTA) $\cdot$ 6H$_2$O \cite{dri85} (M,M' = Cu, Ni, Co, Mn), which represent experimental realizations of the ferrimagnetic mixed-spin Heisenberg chains. However, high-field magnetization measurements on these or related series of bimetallic complexes are desirable for experimental testing of the present theoretical predictions. \n\n\section{Acknowledgement}\nThis work was financially supported by the Ministry of Education, Science, Research and Sport of the Slovak Republic under grant No. VEGA 1\/0043\/16 and by the Slovak Research and Development Agency under contract No. APVV-0097-12.\n\n\section{Introduction}\n\label{sec:intro}\n\nThe AlphaZero algorithm and its variations have been highly successful in producing high-quality moves in complex strategy games, including chess, Go, and shogi \cite{silver2017mastering,silver2018general,schrittwieser2020mastering}. Here \"high quality\" is measured in terms of playing against the best human players and the best available computer software. AlphaZero's self-play algorithm was trained on powerful computer hardware and achieved superhuman performance in less than 24 hours. 
\n\nAlphaZero's groundbreaking strategies in chess and Go have taken the game playing communities by storm \cite{sadler2019game}. \nThis sentiment is expressed by former chess world champion Garry Kasparov, who wrote {\it chess has been shaken to its roots by AlphaZero, but this is only a tiny example of what is to come}. Or, as Matthew Sadler and Natasha Regan write in \cite{sadler2019game}, {\it it is momentous for chess players because, for the first time, we can learn from a powerful intelligence which built its chess strategy independently of our own rich history of chess development}. \n\nAlphaZero has not been made generally available; however, the open-source projects LC0 for chess, Leela Zero for Go, and AobaZero for shogi have, in essence, replicated AlphaZero \cite{cazenave2020polygames}. These projects outsourced the demanding computational task of training the game playing agents to communities of computer strategy game enthusiasts. The participants would typically lend the projects computer power directly by contributing hardware resources or by running a script provided on the Google Colab cloud platform.\nRecently, a variant of AlphaZero has been proposed and published as open source \cite{wu2019accelerating}. The open-source Go project ELF is also worth mentioning \cite{tian2019elf}.\n\nImpartial games are games in which the allowable moves depend only on the position and not on which of the two players is currently moving. The class of impartial games is an important subclass of combinatorial games \cite{berlekamp2001winning,berlekamp2002winning,berlekamp2003winning,berlekamp2004winning}. Impartial games include take-and-break games, subtraction games, heap games, and poset games. 
It includes children's games as well as mathematical games, including sprouts, treblecross, cutcake, guiles, Wyt queens, kayles, Grundy's game, quarto, cram, chomp, subtract a square, notakto, and nim \cite{berlekamp2001winning,berlekamp2002winning}. Many impartial games have multiple variants. The analysis of impartial games is called nimber theory, and the Sprague--Grundy theorem states that every impartial game is equivalent to a nim-heap \cite{berlekamp2001winning}. Thus the game nim plays a central role in the theory of impartial games, and a large class of impartial games can mimic nim (or parts of nim). From an AI point of view, it turns out that many impartial games are as hard as nim, or much harder (\textit{e.g.} node kayles and geography, which are PSPACE-complete \cite{schaefer1978complexity}). Nim is a game often played by children.\n\nDespite AlphaZero's groundbreaking advances for complex games, it turns out that there are games like nim that, from a human point of view as well as from a complexity point of view, are simple, where AlphaZero style algorithms essentially seem to be unable to learn to play better than a randomly playing agent. On specific boards, or towards the end of the game, the algorithm has enough resources to play well by essentially using exhaustive search. However, on larger nontrivial positions the policy network generally fails to provide any valuable information to guide the Monte Carlo Tree Search (MCTS) algorithm, and the value network fails to evaluate the board positions any better than random.\n\n Nim is played by two players who take turns removing counters from distinct heaps \cite{bouton1901nim, nowakowski1998games}. A player must remove at least one counter on each turn and may remove any number of counters, provided they all belong to the same heap. 
The goal of the game is to be the player who removes the last counter, \textit{i.e.}, who leaves the empty position to the opponent.\n\nNim is often classified as a mathematical game, and it has a well defined mathematical solution.\nFormally, the initial board of nim can be represented as an array: \[ [n_1,n_2,....,n_k]\]\n\nwhere $n_j \in \{0, 1, 2, \dots\}$ and $j=1, 2, \ldots, k $.\nA position in a game using that board can be represented as an array\n\[ [v_1, v_2, ...., v_k] \]\n\nwhere $v_j \in \{0,1,2,\dots\}$ and $v_j\leq n_j$ for $j \in \{1, 2, \ldots, k\}$. Often a nim position is specified without reference to a board; however, we always specify the initial position of the board, since the algorithm requires a fixed board size and each self-play game starts with the initial position of the board. Fig. \ref{fig:nim_board} demonstrates an example of an initial nim board, a board position during play, and the final board position. \n\n\begin{figure}[h]\n\centering\n\includegraphics[width=0.8\textwidth]{figures\/Nim_board.eps}\n\caption{\label{fig:nim_board} The initial board consists of a series of heaps (\textit{aka} rows or piles) of counters (\textit{aka} lines or matches). The initial board, as shown in the left graph, is $[n_1,n_2,....,n_k]=[1,3,5,7,9]$. The two players take turns removing counters, resulting in one of the positions of the game play, $[v_1, v_2, ...., v_k]=[1,2,4,4,3]$, as shown in the middle graph. In the usual version of the game, the player who removes the last counter(s) wins, as shown in the right graph where all the heaps are cleared. }\n\end{figure}\n\nThere are $n=\sum_{j} v_j$ legal moves from the position $[v_1,v_2,....,v_k]$. For example, the number of legal moves from the initial position of nim played on the board \[ A = [1,2,3,\ldots,9] \] is $45$. \nThe number of legal moves from the initial position of nim played on the board \[B=[1,2,3,\ldots,25] \] is $325$. 
This exceeds the number of legal moves in any Go position on the standard $19 \times 19$ board.\n\nThe maximal number of moves in a game of nim on the initial board $[n_1,n_2,....,n_k]$ is $m = \sum_j n_j$, as each player needs to remove at least one counter on each move. Games of chess and Go typically last fewer than 300 moves \footnote{In chess a move usually includes a move by each player, and thus a typical chess game lasting $n$ chess moves in fact lasts $2n$ moves}.\nThus, nim on the board $B$ has a branching factor and game length comparable to Go and exceeding the branching factor and length of a typical chess game. \nNotice that the size of the game tree of nim typically hugely exceeds the number of possible positions (states), which is given by \[ \Pi_j (1+n_j)\]\n\nFor any nim position it is (mathematically) easy to determine which player will win and which winning moves are available to that player. The value (won or lost) of a nim position can be determined by calculating the binary digital sum of the numbers of counters in the heaps, \textit{i.e.}, the sum (in binary), neglecting all carries from one digit to another \cite{bouton1901nim}. Within combinatorial game theory this is commonly referred to as the \textit{nim-sum} (see \ref{sec:Impartial games} for more details).\nThis calculation can be shown to require only linear time and logarithmic space.\n\nBased on theoretical considerations related to {\it the statistical neutrality} of the parity function \cite{thornton1996parity}, as well as some subtle and heuristic arguments that the defense has the ability to keep preventing the attacking player from forcing the game into readily predictable patterns, we conjectured that AlphaZero style learning (without the addition of special customised tricks and features) in practice would not be able to learn nim (and consequently also numerous other impartial games) on large boards. 
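The nim-sum criterion and the counting formulas above are easy to make concrete; the following is an illustrative sketch of Bouton's strategy (if the nim-sum $s \neq 0$, reduce some heap $v$ to $v \oplus s$), not code from our experiments:

```python
from functools import reduce
from math import prod
from operator import xor

def nim_sum(position):
    """Binary digital sum (XOR) of the heap sizes; a position is lost
    for the player to move iff its nim-sum is zero (Bouton's theorem)."""
    return reduce(xor, position, 0)

def winning_move(position):
    """A winning move as (heap index, new heap size), or None if the
    position is already lost for the player to move."""
    s = nim_sum(position)
    if s == 0:
        return None
    for i, v in enumerate(position):
        # reducing heap i from v to v XOR s zeroes the nim-sum,
        # and is a legal move whenever v XOR s < v
        if v ^ s < v:
            return i, v ^ s

def legal_move_count(position):
    """Number of legal moves from a position: n = sum_j v_j."""
    return sum(position)

def state_count(board):
    """Number of possible positions on an initial board: prod_j (1 + n_j)."""
    return prod(1 + n for n in board)
```

For the boards used above, `legal_move_count(range(1, 10))` gives 45 and `legal_move_count(range(1, 26))` gives 325, while `nim_sum` and `winning_move` implement the linear-time, logarithmic-space evaluation that the networks fail to learn.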
\n\nBased on our preliminary experimental results from supervised learning, we expected that the policy network in AlphaZero style nim would be unable to properly learn to play high quality nim on asymptotically large boards, \textit{e.g.} $[1,2,3,\ldots,n]$ for $n>100$. To our great surprise we discovered that the difficulty was much larger than anticipated and that the policy and value networks consistently failed to converge even on small boards, \textit{e.g.} $[1,3,5,7,9,11,13]$. \n\nIn games like chess, AlphaZero's value network fails to be perfect, which is typically not a serious issue as it is compensated for by the search guided by the policy network. It turns out that imperfection is not necessarily a problem on some nim boards, since the policy network essentially can help the winning player to force the game into positions that do not require any accurate evaluations. However, there is no such luck in general, as there are two difficulties: \n\n\begin{enumerate}[label=(\arabic*), noitemsep]\n \item The policy network is unable to guide the MCTS better than random \label{enu:one}\n \item The value network is unable to evaluate a position better than random \label{enu:two}\n\end{enumerate}\n\nAccording to \ref{enu:one}, the network essentially cannot learn to select the relevant candidate moves necessary to identify winning or high-quality moves. \nAccording to \ref{enu:two}, even if the search is able to look many moves ahead, any evaluation is meaningless, as it in general does not perform better than random. \n\nWe investigated the difficulties in \ref{enu:one} and \ref{enu:two} \nand found that if a small part of a position in an impartial game is blanked out, the prediction of the policy network and the evaluation of the value network typically cannot be better than pure random guessing. \n\nAs a comparison, the general evaluation of the networks might be wrong if we cover part of a Go, chess or shogi board. 
Yet in general the visible part of the board contains information that is positively correlated with the correct evaluation of the full board. For impartial games like nim, however, any covering of the board under which a small but unknown number of counters is hidden makes it impossible to evaluate the resulting position correctly. It is typically impossible to predict whether the position is good or bad (won or lost), as there is typically zero correlation between the visible part of a partly blanked out position and its correct evaluation. \nThis type of heuristic argument shows that even a small level of noise typically erases the correlations needed to bootstrap the positive feedback mechanism of an AlphaZero style algorithm.\n\nThe subclass of combinatorial games of so-called partisan games can occasionally pose issues similar to those of impartial games, \textit{e.g.} Dawson's chess, which in effect is an impartial game in disguise \cite{berlekamp2001winning}.\n\nTo better understand the difficulties of learning nim-like games, we decided to revisit the AlphaZero for chess, Go and shogi projects and consider positions where simple parity-like problems need to be solved to correctly evaluate a position. Since chess is probably the most commonly known game in the English speaking world, and the official AlphaZero program is unavailable, we mainly used a highly trained Leela Chess Zero (LC0) engine. \n\nWe investigated and discussed many theoretical and practical issues and limitations in this paper. 
To summarize, our main contributions are:\n\begin{itemize}[noitemsep]\n \item Identify a class of strategic games that require nontrivial modifications of AlphaZero style algorithms, or of current RL algorithms in general, to work.\n \item Discover that the difficulty for RL algorithms in learning impartial games is much more dramatic than a theoretical analysis of the individual components, the value network (see section \ref{sec:value_network}), the policy network (see example 3 in section \ref{sec:policy_network}) or the parity-related issue (see section \ref{sec:low_complexity}), would suggest. Noise brought by these issues has a dramatically compounding negative effect, showing learning difficulties with fewer than ten piles instead of 50+ piles (see section \ref{sec:conclusion}).\n \item Propose two levels of mastery of an RL agent, champion and expert, which are essential in evaluating the performance of an agent (see section \ref{sec:differnt}). To the best of our knowledge these concepts are not mentioned in the literature.\n \item Experimentally demonstrate the robustness of the problems through an extensive list of experiments and show that fiddling with the hyperparameters does not have much effect on alleviating the difficulties. \n \item Briefly outline how one might expand the AlphaZero paradigm for future algorithms.\n\end{itemize}\n\nThe paper is structured as follows. In section \ref{sec:revisiting_alphazero} we revisit AlphaZero and LCZero and look behind some of the fantastic magic moves these chess programs have produced. The main point of this analysis is to identify the strengths and weaknesses of LCZero. LCZero's defects are often hidden, mainly appear behind the scenes, and only occur when we look under the hood. However, we will see that the weaknesses include issues impacting both the value and policy networks. 
\n\nSection \\ref{sec:background} shows how many impartial games can be boiled down to the nim and concerns theoretical bottlenecks and general limitations for AI solving various two-player games with full information. We show that these results all follow well-known results in complexity theory. The section also includes reference to our experimental results on modelling the parity function with neural networks (NNs).\n\nIn section \\ref{sec:differnt}, we propose two types of optimality that lead to two levels mastery, namely champion and expert. We illustrate the distinction with one analogy and one special nim example we also tested experimentally.\n\nIn section \\ref{sec:prelimiary}, we present the empirical results of the performance of the value network on modelling parity function and the inability of the policy network to modelling nim-sum. In section \\ref{sec:reinforcement_learning}, we give an overview of the AlphaZero algorithms and the changes we made tailored to nim and demonstrate the difficulty of our AlphaZero style nim algorithm have in becoming an expert agent on large boards. We finish the paper with some general remarks, conjectures, and directions for further research.\\footnote{The code for the experiments in this paper is publicly available at: \\url{https:\/\/github.com\/sagebei\/Impartial-Games-a-Chanllenge-for-Reinforcement-Learning}}\n\n\\section{Revisiting AlphaZero and LCZero}\n\\label{sec:revisiting_alphazero}\nAlphaZero, which astonished the chess world, has essentially been replicated by LCZero. This section focuses on a highly trained version of LCZero. In general, LCZero can access the computational resources that the programs need to run Monte Carlo Tree Search (MCTS) simulations and replicate the moves made by AlphaZero chess. 
LCZero\footnote{version: Last T60: 611246 (384 $\times$ 30)} uses a neural network composed of $30$ blocks $\times$ $384$ filters, which goes beyond the initial $20 \times 256$ architecture of AlphaZero; LCZero additionally employs Squeeze-and-Excitation layers in the residual blocks and supports endgame tablebases \cite{maharaj2021chess}, enabling it to possibly surpass the original AlphaZero algorithm. The older versions of LCZero running on a $20 \times 256$ architecture are somewhat weaker than the later versions running on the $30 \times 384$ architecture. \n\nFor comparing the moves, we also gained access to the open-source Stockfish 14, which, like Stockfish 8, was developed initially by Tord Romstad, Marco Costalba, and Joona Kiiski, and has been further developed and maintained by the Stockfish community. Unlike Stockfish 8, Stockfish 14 also comes with an NNUE (Efficiently Updatable Neural Network) \cite{nasu2018efficiently} for its evaluation and thus combines features of deep neural networks with a traditionally handcrafted chess engine. Game playing strength is usually measured by Elo rating (see section \ref{sec:results} for a detailed description). There has been an attempt to measure playing strength based on the quality of the played chess moves rather than on the results \cite{regan2011intrinsic}. On the chess engine rating list (August 7, 2021), the latest iteration of Stockfish 14 is rated 3555, Stockfish 8 is rated 3375, and LCZero 3336. LCZero running on hardware similar to AlphaZero's, against a version of Stockfish 8 restricted to a fixed time of 1 minute per move, has been shown to lead to a dramatic achievement similar to AlphaZero's.\n\nThe revolutionary and outstanding evaluations and moves have taken the chess world by storm. Often AlphaZero and LC0 evaluate positions quite differently from human players or traditionally hand-coded chess engines. 
\n\nAlthough AlphaZero is not generally available, its detailed evaluation of some chess positions has been made public \cite{sadler2019game}. AlphaZero's use of its policy network, its value network and MCTS was explained via a detailed analysis of the position in Fig. \ref{fig:LC0}. A highly trained LC0 using the net T60: 611246 (384x30) straight away thought that d5 was the move to give the most attention, as can be seen from the table of the policy network's prior probabilities. AlphaZero had a different training history, so its prior probabilities differ, but AlphaZero agrees that d5 is a promising move. After investigating 64 and 256 nodes \footnote{More specifically, the number of new nodes visited, which corresponds to the number of MCTS simulations}, AlphaZero is highly optimistic about the move, while LC0 is somewhat less happy but agrees that white is better. \n\n\begin{figure}[H]\n \centering\n \subfloat[chess board position]{\n \newgame\n \fenboard{r1b2qk1\/1pp3rp\/4pnp1\/1PP5\/p2PBp2\/P7\/1BQ2P2\/K1R1R3 w - - 0 1}\n \scalebox{0.7}{\showboard}\n }\n \hspace{1em}\n \subfloat[Prior probabilities for the top moves]{\n \begin{tabular}{lcccccccc}\n \toprule\n \textbf{Move} & \bishop d3 & \bishop f3 & c6 & d5 & \bishop g2 & f3 & \bishop h1 & \queen c4\\\n \midrule\n \textbf{Prior prob (AlphaZero)} & 29.77\% & 18.82\% & 16.15\% & 10.21\% & 4.75\% & 3.5\% & 4.75\% & 1.2\% \\\n \midrule \n \textbf{Prior prob (LCZero)} & 6.01\% & 12.36\% & 16.27\% & 23.13\% & 1.74\% & 3.73\% & 1.41\% & 8.68\%\\\n \bottomrule\n \end{tabular}\n \n }\n \hspace{1em}\n \subfloat[AlphaZero and LCZero win probabilities after MCTS]{ \begin{tabular}{lccccccc}\n \toprule\n \textbf{Move} & \bishop d3 & \bishop f3 & c6 & d5 & \bishop g2 & f3 & \bishop h1 \\\n \midrule\n \textbf{Win prob (AlphaZero 64 nodes)} & 60.1\% & 64.5\% & 77.3\% & 87.1\% & 61.6\% & 67.3\% & 61.6\% \\\n \midrule \n 
\\textbf{Win prob (AlphaZero 256 nodes)} & 60.1\\% & 64.5\\% & 77.7\\% & 83.1\\% & 61.6\\% & 67.3\\% & 61.6\\% \\\\\n \\midrule \n \\textbf{Win prob (LCZero 64 nodes)} & 62.8\\% & 62.8\\% & 71.2\\% & 71.6\\% & 55.7\\% & 55.7\\% & 55.7\\% \\\\\n \\midrule \n \\textbf{Win prob (LCZero 256 nodes)} & 59.0\\% & 62.2\\% & 63.3\\% & 67.8\\% & 50.0\\% & 58.2\\% & 50.0\\% \\\\\n \\bottomrule\n \\end{tabular}\n }\n \\hspace{1em}\n \\subfloat[Comparison after MCTS. The move d5 was considered to be the best in all cases]\n { \\begin{tabular}{lccccc}\n \\toprule\n \\textbf{Nodes visited} & 64 & 256 & 1024 & 65536 & 4194304 \\\\ \n \\midrule\n \\textbf{AlphaZero eval} & 65.7\\% & 74.1\\% & 71.6\\% & 67.9\\% & 73.5\\% \\\\\n \\midrule \n \\textbf{LCZero eval} & 70.1\\% & 65.0\\% & 67.4\\% & 68.3\\% & 73.1\\% \\\\\n \\bottomrule\n \\end{tabular}\n }\n \n \\caption{\\label{fig:LC0} LCZero and AlphaZero very quickly agree that d5 is the best move, and though both engines discover various defensive resources, they reach similar evaluations.}\n\\end{figure}\n\n\nAt deeper MCTS, AlphaZero begins to discover various defensive resources, so its win probability begins to drop. The win probability for a position $s$ is calculated as $(0.5 + v\/2) \\cdot 100\\%$, where $v$ is the scalar output of the value network with position $s$ as its input. When more nodes are examined, the win probabilities climb up again. Eventually, after 4194304 nodes, the two engines had almost identical evaluations, with win probabilities of 73.5\\% and 73.1\\%, respectively.\nIn general, LC0 plays very similarly to AlphaZero, and it is fair to say that AlphaZero's play has been fully replicated by LC0.\n\n\nThe quality of AlphaZero's evaluation for chess is discussed in great detail in \\cite{sadler2019game}. LC0 without search (\\textit{i.e.} when only one node is visited by the MCTS algorithm) selects moves remarkably well and is occasionally able to win games against strong chess players. 
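The value-to-win-probability conversion described above is simple enough to sketch directly (a minimal illustration of the formula; the function name is ours):

```python
def win_probability(v):
    """Convert a value-network output v in [-1, 1] to a win probability in %.

    Mirrors the formula (0.5 + v/2) * 100%: v = 1 is a certain win,
    v = -1 a certain loss, and v = 0 a 50/50 position.
    """
    return (0.5 + v / 2) * 100

# A final evaluation of 73.5% corresponds to a value output of v = 0.47.
print(win_probability(0.47))  # 73.5
```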
\n\nWhen we compare a move made by a human with the one selected by LCZero, the human typically looks at far fewer positions, but human pattern recognition sometimes spots features that are missed by LCZero's value network. A human grandmaster considers at most a few hundred positions, \\textit{i.e.} significantly fewer than LCZero, which in turn considers dramatically fewer than conventional chess programs. As an example that illustrates human superiority, but also the limitations that become relevant for judging impartial games, consider the position in Fig. \\ref{fig:LC1}. Any decent chess player can \"see\" intuitively - essentially without any calculation - that white has a won position leading to mate (1.\\queen g8+, \\rook g8 and 2.\\knight f7+ mate). But the policy network typically (it is a stochastic process) first leads the program slightly astray, and it spends time investigating the positions arising after 1.\\knight f7+. It is important to stress that LCZero finds the winning sequence in less than a millisecond. The point made here is that neither the value network nor the policy network in general evaluates positions completely accurately. 
This fact is crucial for understanding the limitations for impartial games, where even the slightest change in position (or noise) completely wipes out any positive correlation between the policy and position evaluations and the correct ones.\n\n\\begin{figure}[h]\n \\centering\n \\subfloat[board position]{\n \\newgame\n \\fenboard{1rr4k\/6pp\/7N\/3Q4\/8\/qPp5\/P1P5\/1K6 w - - 0 1}\n \\scalebox{0.8}{\\showboard}\n }\n \\hspace{0.1em}\n \\subfloat[LCZero evaluations for the top moves]{\n \\begin{tabular}{lccc}\n \\toprule\n \\textbf{Move} & \\knight f7 & \\queen g8 & \\queen d8 \\\\ \n \\midrule\n \\textbf{Prior prob} & 60.69\\% & 15.96\\% & 2.50\\% \\\\\n \\midrule\n \\textbf{V-value} & -0.276 & 0.588 & -0.968 \\\\\n \\midrule\n \\textbf{Win prob} & 36.2\\% & 79.4\\% & 1.6\\% \\\\\n \\midrule\n \\textbf{Q-value (15 nodes)} & -0.075 & 1 & 0 \\\\\n \\midrule\n \\textbf{Win prob (15 nodes)} & 48.5\\% & 100\\% & 0\\% \\\\\n \\bottomrule\n \\end{tabular}\n \\vspace{2em}\n }\n \\caption{\\label{fig:LC1} White has a forced win in two moves, sacrificing the queen before mating with the knight. It is not surprising that LCZero's evaluation with no search (\\textit{i.e.} with just one node visited) is unable to reach the conclusion that white has a forced win. Only after 11 nodes does the program jump away from investigating lines beginning with \\knight f7 and switch to investigating the winning move \\queen g8. Only at node 15 does the program find the forced mate.}\n\\end{figure}\n\nThe next example, Fig. \\ref{fig:LC2}, illustrates the difficulty in handling parity-related problems. In the right-hand part of the position, any player who moves there would lose the game. The situation on the left side of the board is equivalent to the nim position $[3,2]$, consisting of two heaps with three and two counters, respectively, and where it is only possible to remove one or two counters from a heap. This version of nim is sometimes referred to as bogus nim. 
The winning move is to remove one counter from the heap with three counters. Analogously, the winning move for white is c3-c4. But LCZero's policy network suggests a2-a4, which disastrously leads to a losing position. As in the previous example, it is important to stress that LCZero finds the winning sequence almost immediately; the point is that neither the value network nor the policy network is in general able to evaluate positions completely accurately.\n\n\\begin{figure}[H]\n \\centering\n \\subfloat[board position]{\n \\newgame\n \\fenboard{6k1\/2p5\/5P1P\/p7\/8\/2P2p1p\/P7\/6K1 w - - 0 1}\n \\scalebox{0.8}{\\showboard}\n }\n \\hspace{0.1em}\n \\subfloat[LCZero evaluations for the top 4 moves]{\n \\begin{tabular}{lcccc}\n \\toprule\n \\textbf{Move} & a4 & c4 & a3 & f7 \\\\ \n \\midrule\n \\textbf{Prior prob} & 50.4\\% & 20.7\\% & 15.4\\% & 4.4\\% \\\\\n \\midrule\n \\textbf{V-value} & -0.632 & 0.806 & -0.817 & -0.88 \\\\\n \\midrule\n \\textbf{Win prob} & 18.4\\% & 90.3\\% & 9.2\\% & 6\\% \\\\\n \\midrule\n \\textbf{Q-value (3 nodes)} & -0.632 & 0.806 & -0.248 & -0.248 \\\\\n \\midrule\n \\textbf{Win prob (3 nodes)} & 18.4\\% & 90.3\\% & 37.6\\% & 37.6\\% \\\\\n \\bottomrule\n \\end{tabular}\n \\vspace{2em}\n }\n \n \\caption{\\label{fig:LC2} A chess position that mimics the nim position $[3,2]$. Neither black nor white would like to move on the right-hand side of the board. A simple analysis concludes that white has to play c3-c4, which is winning. Any other move leads to a loss. The LC0 policy network gives the move a2-a4 a score of 50.4\\%. The second most promising move is c3-c4, which scores 20.7\\%. The position's Q-value is -0.083, which corresponds to a win probability of 45.5\\% and indicates a slight advantage to black, while in fact white is winning.\n Notice that LC0's value network already judges the position after c4 as very favourable for white (which we find quite impressive). 
Though it is a stochastic process, LC0's MCTS typically needs to investigate only a few nodes before it considers c3-c4, which it then likes straight away.}\n\n\\end{figure}\n\nAlthough LCZero is equipped with rather incredible policy and value networks, in general it cannot accurately evaluate positions related to parity issues, including \"zugzwang\", \"waiting moves\", \"triangle manoeuvres\", and other such themes that are common in chess. This, however, is not an issue for AlphaZero or LCZero, as these issues are effectively handled by the MCTS. However, some examples show that the policy network occasionally might fail to properly consider crucial key moves.\n\nAs an example, consider the board position in Fig. \\ref{fig:LC3} that occurred in a game between Stockfish 12 and LCZero. The white player can force checkmate in a sequence of 5 moves, starting with \\rook c2. However, the policy network gives the move \\rook a5+ the highest prior probability. The serious problem is that LC0's policy network fails to guide the MCTS properly, and after more than 1 million nodes LC0 was unable to find the winning sequence. The failure to find the winning sequence is partly a drawback of using MCTS; alpha-beta search, which is used almost universally in conventional chess engines, typically finds the forced mate almost instantly. 
\n\n\\begin{figure}[H]\n \\centering\n \\subfloat[board position]{\n \\newgame\n \\fenboard{6r1\/5p2\/1r6\/2kpPb2\/Rp1p4\/3P2P1\/1R3PK1\/3B4 w - - 0 1}\n \\scalebox{0.8}{\\showboard}\n }\n \\hspace{0.1em}\n \\subfloat[LCZero evaluations for the top 4 moves]{\n \\begin{tabular}{lcccc}\n \\toprule\n \\textbf{Move} & \\rook a5 & \\rook c2 & \\bishop e2 & \\rook b3 \\\\ \n \\midrule\n \\textbf{Prior prob} & 35.9\\% & 16.5\\% & 10.3\\% & 4.7\\% \\\\\n \\midrule\n \\textbf{V-value} & 0.29 & 0.27 & -0.005 & 0.06 \\\\\n \\midrule\n \\textbf{Win prob} & 64.5\\% & 63.5\\% & 49.8\\% & 53.0\\% \\\\ \n \\bottomrule\n \\end{tabular}\n \\vspace{3em}\n }\n \\caption{\\label{fig:LC3} LCZero played \\bishop f5, which is a blunder. LCZero fails to realize that white has a forced mate in 5 moves. In the diagram position the highly trained LCZero fails to find (even after having looked at more than a million nodes) the forced mate: 1.\\rook c2+, \\king b5 2.\\rook a5+!!, \\king a5 3.\\rook a2+, \\king b5 4.\\bishop a4+, and now 4...\\king c5 5.\\rook c2+ mate, or 4...\\king a5 or 4...\\king a6 followed by 5.\\bishop c6+ mate.}\n\\end{figure}\n\n\\section{Background and related work}\n\\label{sec:background}\n\n\\subsection{Impartial games and nim}\n\\label{sec:Impartial games}\nAn impartial game is a two-player game in which players take turns to make moves, and the actions available from a given position do not depend on whose turn it is. A player loses if they cannot make a move on their turn (\\textit{i.e.} a player wins if they move to a position from which no action is possible). \n\nIn impartial games, all positions can be classified as losing or winning positions, according to whether the player to move has no winning move or has at least one winning move. \n\nThe Sprague-Grundy theorem states that every (finite) impartial game is equivalent to a one-heap game of nim. More specifically, each board position in an impartial game has a nimber, which is also called the Sprague-Grundy value. 
For every position the nimber $G(s)$ is defined as \n\n\\begin{equation}\n G(s) = \\text{mex}(\\{G(s') : s' \\in N(s)\\}) \n\\end{equation}\n\n\\noindent\nwhere $N(s)$ denotes the set of states $s'$ that can be reached from $s$ in one legal move, and the mex function returns the minimum excluded value of a set, \\textit{i.e.} the least non-negative integer not in the set \\cite{beling2020pruning}.\nA position with Sprague-Grundy value $G(s)$ is equivalent to a nim heap with $G(s)$ counters. A position is lost exactly when its Sprague-Grundy value is $0$.\n\nThe Sprague-Grundy value of a nim position $[v_1,v_2,\\ldots,v_k]$ is the binary digital sum of the heap sizes $v_1,v_2,\\ldots,v_k$, \\textit{i.e.} their sum in binary neglecting all carries. In combinatorial game theory this sum is often called the nim-sum.\nThus it is computationally easy to decide if a given nim position is won or lost. \n\nThe analysis of impartial games is closely linked to the nim-sum, which in turn is linked to the parity function. Thus the parity function plays, implicitly or explicitly, a central role in the theory of impartial games, as such games are often able to mimic nim or parts of nim. To illustrate this, consider the impartial game called sprout \\cite{berlekamp2001winning, gardner1967mathematical}, \ninvented by John Conway and Michael Paterson.\n\nPositions in sprout typically have nimber values $0,1,2$ and $3$ \\cite{berlekamp2003winning}. The position in Fig. \\ref{fig:sprout_a} has nimber value $3$. The sprout position in Fig. \\ref{fig:sprout_b} also has nimber value $3$, but it has been modified so that it becomes an \"isolated land\" that cannot interact with anything on the outside. It follows that a sprout starting position consisting of $n$ copies of the gadget in Fig. \\ref{fig:sprout_b} can mimic any nim position that can arise from a starting position with $n$ heaps of $3$ counters. 
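The mex recursion and the nim-sum described above can be sketched in a few lines (a minimal illustration; the encoding and function names are ours). As the game rule we use the "bogus nim" variant from the earlier example, where a move removes one or two counters from a single heap:

```python
from functools import lru_cache, reduce

def mex(values):
    """Minimum excluded value: the least non-negative integer not in the set."""
    m = 0
    while m in values:
        m += 1
    return m

def nim_sum(heaps):
    """Sprague-Grundy value of an ordinary nim position: the binary digital
    sum (XOR) of the heap sizes, neglecting all carries."""
    return reduce(lambda a, b: a ^ b, heaps, 0)

@lru_cache(maxsize=None)
def grundy(heaps):
    """Grundy value of a 'bogus nim' position via the mex recursion:
    a legal move removes one or two counters from a single heap."""
    successors = set()
    for i, h in enumerate(heaps):
        for take in (1, 2):
            if take <= h:
                s = list(heaps)
                s[i] -= take
                successors.add(grundy(tuple(sorted(s))))
    return mex(successors)

print(grundy((3, 2)))  # 2: a winning position for the player to move
print(grundy((2, 2)))  # 0: taking one counter from the 3-heap wins
```

Note that a position is lost exactly when its Grundy value is 0, so the winning move from $[3,2]$ is the one that reaches $[2,2]$, in agreement with the chess example above.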
\n\n\\begin{figure}[H]\n\\centering\n\\begin{subfigure}[b]{0.25\\textwidth}\n\\centering\n\\includegraphics[width=\\textwidth]{figures\/sprout_small.eps}\n\\caption{sprout position}\n\\label{fig:sprout_a}\n\\end{subfigure}\n\\hspace{3em}\n\\begin{subfigure}[b]{0.35\\textwidth}\n\\centering\n\\includegraphics[width=0.8\\textwidth]{figures\/sprout.eps}\n\\caption{sprout position serving as a gadget}\n\\label{fig:sprout_b}\n\\end{subfigure}\n\\caption{\\label{fig:Sprout} Nim played on a board $[3,3,3,\\ldots,3]$ with $n$ heaps is equivalent to sprout with $n$ copies of the sprout position (gadget) in (b) (see the diagram on p.~599 in \\cite{berlekamp2003winning} for details).}\n\\end{figure}\n\nUnlike nim, some impartial games cannot be solved by a simple calculation. This follows from the fact that some impartial games are PSPACE-complete (see Section \\ref{sec:psapce_nexptime} for more details).\n\nSo far, algorithms for impartial games have been handcrafted programs that use ideas akin to alpha-beta search but are specially designed for the mex operation \\cite{viennot2007further}. Recently, \\cite{beling2020pruning} proposed a novel method that prunes the search tree according to the node values calculated by the mex function on short impartial games, like nim, chomp, and cram, but this approach does not in general scale to large board sizes. \n \nThese conventional programs, which utilise the mex operation and the Sprague-Grundy values, could be used as benchmarks to test whether new RL-based algorithms can outperform conventional algorithms. \n\n\n\\subsection{Intrinsic complexity bottlenecks}\n\\label{sec:psapce_nexptime}\nComputational complexity deals with fundamental limits and bounds of algorithmic tasks. In this section we review classical complexity classes, results, and conjectures related to the asymptotic complexity of games. 
\n\nThe time complexity of an algorithm is the amount of time it takes a computer to run it, and it is commonly estimated by counting how many elementary operations the algorithm performs. An algorithm is said to run in time $T(n)$ if its running time is upper bounded by $T(n)$, where $n$ is the size of the input to the algorithm. Since we are not interested in actual hardware, the complexity is commonly expressed using big-O notation. This way, the complexity of an algorithm becomes independent of the actual speed of the computer and is frequently measured as the number of steps needed by the algorithm. \nAn algorithm is said to be polynomial-time (to belong to P) if its running time is upper bounded by a polynomial expression in the size of the input, that is, $T(n) = O(n^k)$ for some positive constant $k$. \n\nAn algorithm is said to be exponential-time (to belong to EXPTIME) if $T(n)$ is upper bounded by $O(2^{n^k})$ for some constant $k$.\n\nIt is essential to keep in mind that complexity classes like P and EXPTIME are asymptotic notions. Problems in P are often considered tractable, while problems that require exponential time are considered intractable. \n\nA non-deterministic algorithm is an algorithm that can make certain guesses at certain points during its computation. Such algorithms are designed so that if they make the right guesses at all the choice points, then they can solve the problem within the required time bound. We can think of a non-deterministic computation as an algorithm with access to a perfect AI module that at each choice point always returns the optimal choice.\n\nNP is the class of decision problems that can be solved in polynomial time by a non-deterministic algorithm. So we can consider NP as the class of problems that can be solved in polynomial time when given access to a perfect (idealised) built-in AI that at each branch point always recommends the most economical choice. 
NEXPTIME denotes the class of problems that can be solved in exponential time by a non-deterministic algorithm. \n\nA complexity class might also be based on space rather than time. PSPACE is the class of decision problems that can be solved by an algorithm using memory bounded by a polynomial expression in the size of the input.\n\nA decision problem $A$ is said to be complete for a set of decision problems $\\mathbf{B}$ if $A$ is a member of $\\mathbf{B}$ and every problem in $\\mathbf{B}$ can be reduced to $A$. Thus if the problem $A$ can be solved by an algorithm using certain computational resources, it would essentially be possible to solve any problem in $\\mathbf{B}$ by use of the same computational resources. \n\nRuntime bounds apply to all algorithms, including AlphaZero-style learning algorithms that use neural networks; for such algorithms, however, the training process needs to be considered as part of the algorithm. Learning algorithms are typically probabilistic, and the criterion for success is typically not measured against 100\\% accuracy. \n\nIn practice we might want to disregard the training time and ask for the algorithm's asymptotic run-time, given access to perfectly trained (or pre-trained) neural networks. In computational complexity theory, an advice string (advice function) is an extra input that is allowed to depend on the length $n$ of the input, but not on the input itself. A decision problem is in the complexity class $P\/f(n)$ if there is a polynomial-time algorithm (Turing Machine) $M$ with the following property: for any $n$, there is an advice string $A$ of length $f(n)$ such that, for any input $x$ of length $n$, the machine $M$ correctly decides the problem on the input $x$, given $x$ and $A$. The class P\/poly consists of decision problems (classification problems) that can be solved in polynomial time by use of polynomial-size advice functions. 
A (pre-trained) neural network can serve as an advice function, as there is no requirement on the quality of the advice given. For the algorithm to be able to run in polynomial time, it needs to be able to evaluate the NN in polynomial time, so all NNs used by the algorithm need to have polynomial size.\nThus, the class P\/poly includes problems that can be solved in polynomial time using pre-trained (polynomial-size) neural networks for free. \n\nChess, Go and shogi on $n \\times n$ boards have all been shown to be EXPTIME-hard. More specifically, the decision problem of determining if a given position is a forced win (\\textit{i.e.} guarantees the player who moves from the position a win with optimal play) is EXPTIME-complete for chess, Go and shogi \\cite{fraenkel1981computing, robson1983complexity, adachi1987shogi}.\nThus chess, Go or shogi positions on $n \\times n$ boards cannot be correctly evaluated by any sub-exponential-time algorithm, and thus in particular the computational time needed for an algorithm to learn to play generalised chess (Go or shogi) to a level of perfection would be exponential. \nTherefore, from a theoretical point of view there is no hope that AI algorithms in practice will be able to learn to perfectly master generalised versions of complex games like chess, Go or shogi on large boards. In fact, given the widely believed but unproven conjecture that EXPTIME $\\not \\subseteq$ P\/poly, it would be impossible for any polynomial-time algorithm to solve an EXPTIME-complete decision problem (like generalised chess, Go or shogi) even when given access to polynomial-size advice functions, e.g. polynomial-size pre-trained networks \\cite{vsima2003general}. But these theoretical results are asymptotic and do not say anything about the possibility for AI systems to learn to play the games with perfection on boards of standard size, e.g. $8 \\times 8$ for chess, $19 \\times 19$ for Go, and $9 \\times 9$ for shogi. 
\n\nDetermining whether a position in impartial games like geography and node kayles is a win is PSPACE-complete \\cite{schaefer1978complexity}. So unless PSPACE $\\subseteq$ P\/poly, or something to that effect, from a theoretical point of view there is no hope that AI algorithms, in general, will be able to master (in polynomial time) impartial games perfectly, even if allowed unlimited time for training \\cite{vsima2003general}. \n\n\\subsection{Low complexity bottlenecks: Parity and low-level thinking}\n\\label{sec:low_complexity}\n\nIssues related to the parity function have a long history in AI. In \\cite{mccarthy1964tough} McCarthy suggested that the mutilated chessboard problem is a tough nut for proof procedures. The problem: given a $2n \\times 2n$ chessboard with two diagonally opposite squares missing, show that this board cannot be covered with non-overlapping dominoes. \n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.4\\textwidth]{figures\/MutilatedBoard.eps}\n\\caption{\\label{fig:MutilatedBoard} Consider an $8 \\times 8$ chessboard, where the top-right and bottom-left squares have been removed. Is it possible to tile this mutilated chessboard with $2 \\times 1$ dominoes? }\n\\end{figure}\n\nHumans, using high-level thinking, typically manage to solve this problem - after some trial and error - by noticing that the numbers of white squares and black squares differ. \nMcCarthy conjectured that this problem is challenging when using low-level, non-abstract reasoning. It was only 35 years later that this was formally proved by Dantchev and Riis with regard to the so-called resolution proof system \\cite{dantchev2001planar}.\n\nThe issue underlying the mutilated chessboard problem is directly related to principles based on simple counting, like the parity principle. 
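The colour-counting argument behind the mutilated chessboard problem is easy to verify mechanically (a minimal sketch for the standard 8-by-8 case; the variable names are ours):

```python
# Each domino covers one white and one black square, so a full tiling
# requires equally many squares of each colour. Removing two diagonally
# opposite corners removes two squares of the SAME colour.
n = 8
removed = [(0, 0), (n - 1, n - 1)]  # two diagonally opposite corners
remaining = [(r, c) for r in range(n) for c in range(n)
             if (r, c) not in removed]

white = sum((r + c) % 2 == 0 for r, c in remaining)
black = sum((r + c) % 2 == 1 for r, c in remaining)
print(white, black)  # 30 32 -- unequal, so no perfect domino tiling exists
```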
Basic counting principles have played a prominent role in propositional proof complexity and in more abstract formal systems that capture \"low complexity\" reasoning \\cite{riis1994independence,beame1998more}. \nFor humans, counting is a straightforward process. Even young children understand that the number of objects is an invariant, so recounting should (in principle) lead to the same number. In mathematics, a closely related principle is the pigeon-hole principle. In \\cite{riis2001complexity} it was shown that combinatorial principles, in a technical sense that can be formalised, either are easy and have polynomial-size proofs (so-called tree-like resolution proofs) or are very hard (like the parity principle) and require exponential-size proofs. The exponentially hard principles are exactly those principles that fail in infinite structures (all in a sense that can be made precise). The pigeon-hole principle fails for infinite sets, as it is possible to map an infinite set bijectively to a proper subset of itself. Thus, according to the main results in \\cite{riis2001complexity}, this principle requires exponentially large (tree-like) resolution proofs. The intractability of the pigeon-hole principle (for the resolution proof system) had already been established in \\cite{haken1985intractability}. For an AI that is unable to grasp principles like the pigeon-hole principle (without human help), it seems hard to \"solve\" the mutilated chessboard problem. Maybe this was already part of McCarthy's intuition when he originally posed his conjecture in 1964. \n\nBoolean circuits are very similar to neural networks, but can only handle Boolean 0\/1 values. 
In this setting the parity function is\ndefined as:\n\\begin{equation}\n f(x_1, \\ldots ,x_n) = \\sum_{i=1}^n x_i \\bmod 2 \n\\end{equation}\nwhere $x_1,x_2,\\ldots,x_n \\in \\{0,1\\}$.\nIn \\cite{haastad1987computational} it was shown that the parity function cannot be computed by so-called constant-depth, sub-exponential-size Boolean circuits. There is also a long list of theoretical work related to Boolean circuit complexity that supports the view that the parity function is hard to learn and to correctly generalise to unseen data \\cite{linial1993constant}.\n\nMany types of neural networks can in principle compute the parity function. It is possible to handcraft and artificially tune weights so that a NN computes the parity function (see section \\ref{sec:parity_function_nerual_network} for details). To better understand the issue, it is essential to keep a few facts in mind:\n\nIf we pick a random Boolean function $f$ uniformly from the set of $2^{2^n}$ Boolean functions, that function can in general not be learned. To illustrate this, assume we are given an array of (distinct) inputs $\\boldsymbol{X} = (\\bar{x}_1,\\bar{x}_2, \\ldots ,\\bar{x}_s)$ with the corresponding list $\\boldsymbol{y}=(y_1,y_2, \\ldots , y_s)$ of function values, where $y_j=f(\\bar{x}_j),\\quad j=1,2,\\ldots, s$. For any new unseen input $\\bar{x} \\not\\in \\boldsymbol{X}$ there is no relation between the already seen data $(\\boldsymbol{X},\\boldsymbol{y})$ and $f(\\bar{x})$. 
To see this, notice that there are exactly\n$2^{2^n-s}$ Boolean functions that fit the observations $(\\boldsymbol{X},\\boldsymbol{y})$; exactly half of them, \\textit{i.e.} $2^{2^n-s-1}$, have $f(\\bar{x})=0$, while the remaining $2^{2^n-s-1}$ have $f(\\bar{x})=1$.\nThus if we are given data that agree with the parity function for $s$ inputs, there are precisely as many Boolean functions with the correct generalisation as there are with the wrong generalisation.\n\nWhy do we believe that the parity function is the correct way to generalise data $(\\boldsymbol{X},\\boldsymbol{y})$ that happen to fit the parity function? There is no doubt that a human investigating the data $(\\boldsymbol{X},\\boldsymbol{y})$ will after some time spot the pattern and then be in no doubt about how to generalise the data. That is essentially because we naturally apply Occam's razor, and the phenomenon is also related to the concept of Kolmogorov complexity. We feel the parity function is a better guess as it has a short description; according to Kolmogorov's philosophy, it is a less \"random\" function. However, if the underlying prior probability distribution is uniform, then, though it might feel counter-intuitive, there is no logical mathematical reason to apply Occam's razor, as no generalisation of the partial data matching the parity function is more likely than any other. \n\nHumans find it easy to generalise the parity pattern due to its elementary description. To some extent, neural networks apply Occam's razor automatically, as they favour functions that can be computed by NNs with certain size and depth bounds: under such a prior, given training data $(\\boldsymbol{X},\\boldsymbol{y})$, a new data point is not equally likely to have output 0 or 1. However, most NN models do not take this into account. 
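The counting argument above can be checked exhaustively for a small number of variables (a toy demonstration; the setup is ours):

```python
from itertools import product

n = 3
inputs = list(product([0, 1], repeat=n))     # all 2^n = 8 possible inputs
parity = lambda x: sum(x) % 2

observed = inputs[:4]   # s = 4 observations that happen to fit parity
unseen = inputs[4]      # a new input outside the training data

# Enumerate all 2^(2^n) = 256 Boolean functions as truth tables and keep
# those consistent with the observations.
count = {0: 0, 1: 0}
for table in product([0, 1], repeat=len(inputs)):
    f = dict(zip(inputs, table))
    if all(f[x] == parity(x) for x in observed):
        count[f[unseen]] += 1

# Of the 2^(2^n - s) = 16 consistent functions, exactly half predict 0 on
# the unseen input and half predict 1 -- no generalisation is favoured.
print(count)  # {0: 8, 1: 8}
```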
Even if one happens to train a NN to compute all training data correctly (\\textit{i.e.} obtain 100\\% accuracy), the NN might still overfit and not generalise \"correctly\" to unseen data. \n\nSome researchers have argued that the parity function is a mathematical function rather than a naturally occurring function, and they use this to explain why it is so hard to generalise. This kind of argument has sparked some debate among AI researchers \\cite{thornton1996parity, damper1998parity}. \n\nAny Boolean function, including the parity function, can be computed by a NN with only two layers. However, this result requires exponentially large neural networks as the number $n$ of variables goes to infinity. Our arguments and experiments would break down if we allowed exponentially large NNs and exponentially large training sets.\n\nOne heuristic way we have been thinking about parity, nim and impartial games in general can be expressed as an informal, non-rigorous argument that has been guiding our intuition. It falls outside the scope of this paper, but it might be possible to make the argument more rigorous, for example by replacing the notion of continuity with a suitable approximate notion. \n\n\\bigskip\n\\noindent\n{\\bf Informal argument:}\nIn mathematics, a continuous function has the property that if we keep the inputs in a small neighbourhood of a point, the output values will stay in a small neighbourhood of the original output. Neural networks are, in essence, continuous (even for discrete classification problems), as the probabilities they return are stable, so small input changes only lead to small changes in the output. Back-propagation, the calculation of the derivatives of the loss with respect to the weights in a neural network, is essentially a differentiable (and thus continuous) process. Many learning tasks are continuous because small changes to the input lead to small changes in the output. Such learning tasks might be well suited for neural networks. 
Intuitively, games like chess, Go, and shogi are mainly continuous, but there are typically some points of discontinuity where the correct evaluation of a position might jump with a slight change in input. The discontinuity is not a serious issue, as the policy and value networks can still provide sufficient guidance to the MCTS. However, many impartial games - and certainly nim - are in some sense ubiquitously discontinuous, as a slight change of the position might dramatically change the corresponding correct evaluation. \n\n\n\\subsection{Practical challenge: Parity and neural networks}\n\\label{sec:parity_function_nerual_network}\nNim board positions can be represented by a list of entries 1, 0 or -1, where 1 denotes a counter on the board, 0 denotes a counter that has been removed from the board, and -1 is a token separating the heaps. To accommodate this nim board representation, we define a version of the parity function (sometimes called parity with noise) as follows. Let $n$ be any positive integer. Given an input $(x_1, \\ldots ,x_n) \\in \\{0, 1, -1\\}^n$, the function is defined as \\cite{banino2021pondernet}: \n\\begin{equation}\n f(x_1, \\ldots ,x_n) = \\big| \\{\\, i : x_i = 1 \\,\\} \\big| \\bmod 2 \n\\end{equation}\nThus, the function output is either 0 or 1, indicating whether the input contains an even or odd number of 1s.\n\nNeural networks are the pillars of modern artificial intelligence (AI). However, the parity function has turned out to be difficult to learn and generalise for a variety of neural network models: \\cite{al2005neural} shows that MLPs and simple RNNs trained with gradient descent can learn to model the training data, but fail to generalise to unseen data. 
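The parity-with-noise function just defined, together with the nim board encoding, can be sketched as follows (the function name and the example board are ours):

```python
def noisy_parity(xs):
    """Parity of the number of 1s in the input, ignoring the 0 entries
    (removed counters) and the -1 entries (heap separators)."""
    return sum(1 for x in xs if x == 1) % 2

# The nim position [3, 2] encoded as a board: three counters, a heap
# separator, then two counters.
board = [1, 1, 1, -1, 1, 1]
print(noisy_parity(board))  # 1 -- the board contains an odd number of 1s
```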
\n\nThe majority of prior works focused on constructing specific neural networks with fixed weight parameters, mainly Multilayer Perceptrons (MLPs) or RNNs, dedicated to solving the parity problem \\cite{hohil1999solving, liu2002n, franco2001generalization, wilamowski2003solving}. An RNN with 3 neurons and 12 frozen parameters can perfectly approximate the XOR function. These artificially hard-wired neural networks can generalize to all seen and unseen patterns without training or adaptation \\cite{al2005neural}. However, artificially setting up the weight parameters is not an option for neural networks modelling unknown data distributions.\n\nSince, to the best of our knowledge, no prior experiments systematically investigating and comparing the performance of different neural networks trained on bitstrings of varying length to model the parity function are available in the literature, we performed our own experiments. Their results show that neural networks like simple Recurrent Neural Networks (RNNs) or Long Short-Term Memory (LSTM) networks are capable of modelling the parity function \\textit{perfectly} on short bitstrings of fewer than 100 bits, but that it is intractable for them to learn the parity function when the bitstrings are generally longer than 100 bits (see section \\ref{sec:value_network}). \n\nRNN is an umbrella term that incorporates the vanilla (simple) RNN, the Bidirectional RNN (BRNN), the Gated Recurrent Unit (GRU), the LSTM, and a wide range of their variations, like the Memory-Augmented RNN (MRNN), which enhance RNNs' ability to handle sequential data with long dependencies \\cite{zhao2020rnn}. In \\cite{zhao2020rnn} it was argued that RNNs and LSTMs are incapable of processing long time-series data that require persistent memorization, which is a disadvantage when processing long bitstrings, as flipping any single bit alters the parity, and furthermore a tiny error in the memory might incur disastrous repercussions. 
This discovery aligns with our results: the longer the bitstrings on which the neural networks are trained, the harder it is for them to learn to model the parity function.\n\nThe parity of bitstrings is permutation invariant by its nature, as changing the order of the bits does not affect the parity. RNNs, despite depending on the order of the input, can be regularized towards permutation invariance. \\cite{cohen2020regularizing} shows that RNNs with regularization applied towards permutation invariance can simulate the correct parity function for bitstrings of length up to 100. However, our results (see Section \\ref{sec:value_network}) demonstrate that an RNN trained on bitstrings of length 20 can model the parity function for bitstrings of any length, without applying any regularization. \n\nRNN architectures are in theory able to simulate any Turing Machine (TM) \\cite{al2005neural, siegelmann1995computational} given suitable weights and biases; however, these results depend on unrealistic assumptions such as unlimited computation time and infinite-precision representation of states. In practice, finding proper parameters for RNNs through gradient descent algorithms is a demanding task due to the notoriously hard {\\it vanishing gradient problem} \\cite{hochreiter1997long}. As the length of the bitstrings increases, the difficulty of modelling the parity function with an RNN also escalates drastically.\n\nThe Self-Attention network (\\textit{a.k.a.} the Transformer architecture) \\cite{vaswani2017attention} has underpinned many Natural Language Processing (NLP) applications since its emergence in 2017. However, \\cite{hahn2020theoretical} has shown strong theoretical limitations on the ability of the Self-Attention network to evaluate logical formulas, which is equivalent to evaluating the parity of bitstrings, and draws the conclusion from asymptotic results that any transformer will make mistakes in modelling the parity when the input is sufficiently long. 
\n\nRecently, researchers from DeepMind considered PonderNet \\cite{banino2021pondernet} and showed its ability to model the parity function by adaptive computation, where the computation needed is proportional to the complexity of the problem. In their experiments, the bitstrings contained 96 bits; for training, a random number of bits, from 1 to 48, were set to 1 or -1 and the remaining bits were set to 0, while for evaluation, a random number of bits, from 49 to 96, were set to 1 and the rest were set to 0. The PonderNet achieved almost perfect accuracy on this hard extrapolation task, but unlike RNNs, which can process input vectors of any length through unrolling, it can only be evaluated on input vectors of the same length as those it was trained on. Their experiments were intended to demonstrate the computational adaptability of the PonderNet using the parity function as a testbed, rather than exhibiting its power in modelling the parity function. But they show that as the bitstrings become increasingly complicated, \\textit{i.e.} contain more 1s or -1s, the computational resources required to learn the parity increase, to the extent that, when the bitstring is sufficiently long, the required resources are astronomical, if not unobtainable.\n\nModern computer vision models are capable of classifying images with very high resolutions \\cite{dosovitskiy2020image}. It is a common misbelief that they can also classify long bitstrings represented in image format. There is typically a robustness inherent in image classification, as the result of the classification does not vary with slight changes in the input image. However, flipping a single bit completely changes the parity of a bitstring. Thus these are fundamentally different tasks. 
\n\n\\section{Different levels of mastery}\n\\label{sec:differnt}\n\nThe goal of an RL agent during training is to cultivate the ability to gain maximum rewards in a Markov decision process (MDP) \\cite{silver2021reward}. The performance of an RL agent is commonly measured in terms of the average accumulated reward it obtains over a number of episodes, or the Elo rating score attached to it, measuring its competitiveness against other agents. To complement these score-based measurements, we propose and consider two fundamentally different ways to assess the extent to which an RL algorithm has learnt to master a game, as shown in Table \\ref{tab:level_optimality}. \n\n\\begin{table}[h]\n\\caption{Two types of optimality} \n\\label{tab:level_optimality}\n\\centering \n\\begin{tabular}{l p{.70\\linewidth}} \n\\toprule \n\\multicolumn{1}{c}{Type of Optimality} & \\multicolumn{1}{c}{Description} \\\\ [0.5ex]\n\\midrule \nType 1 & To what extent the algorithm has learnt to make a sequence of good moves in actual game play that lead to winning the game when played from the initial position\\\\ \nType 2 & To what extent the algorithm has learnt to make good moves in all possible game positions arising from play from the initial position \\\\\n\\bottomrule \n\\end{tabular}\n\\end{table}\n\n\n\\noindent\nThis leads to two notions of an optimal agent, namely champion and expert, which are distinguished by their level of mastery of the game, as shown in Table \\ref{tab:level_mastery}. 
The general complexity results for chess, Go and shogi might only concern the type 2 notion of optimality.\n\n\\begin{table}[h]\n\\caption{Two Levels of Mastery} \n\\label{tab:level_mastery}\n\\centering \n\\begin{tabular}{l p{.70\\linewidth}} \n\\toprule \n\\multicolumn{1}{c}{Mastery Level} & \\multicolumn{1}{c}{Description} \\\\ [0.5ex]\n\\midrule \nChampion & A player who is always able to obtain the optimal result against any opponent, when the game is started from the initial position\\\\ \nExpert & A player who is always able to play the optimal move in any position that can arise by legal play from the initial position \\\\\n\\bottomrule \n\\end{tabular}\n\\end{table}\n\nThis distinction is vital. The champion might have developed a skill to steer the game into its comfort zone, where it masters the game. The expert agent always takes the best move in any position. But in some games it is essentially impossible to become a champion without also becoming an expert. It is outside the scope of this paper to systematically prove this claim. Intuitively, this is because the winning side cannot control the game enough to steer it into well-known territory. A more rigorous and theoretical analysis of this issue is left open.\n\nIn chess, there is a special discipline called problem chess, where the task is to solve artificially composed chess problems, rather than problems arising from actual competitive play. LCZero is not trained to solve such artificial problems and it is not surprising that the program has difficulty dealing with them \\cite{maharaj2021chess}. AlphaZero's successes in chess, shogi and Go stemmed from its ability to learn to play and win games. The task was not to solve problem positions, i.e. to find good moves in artificial situations. 
Thus, AlphaZero's learnability of a game was measured by the champion notion rather than by an expert measure.\n\nThe champion and expert concepts will be discussed in conjunction with experimental results in Section \\ref{sec:reinforcement_learning}. We also use an analogy and a crafted nim board below to expound the distinction between them. \n\n\\medskip\n\\noindent\n{\\bf Example 1:} Imagine a fighting game where one of the agents can force the opponent to fight on either the savanna or in the jungle. A champion might be an expert in savanna fighting, but rather hopeless in the jungle (or vice versa). Regardless, the champion can win by forcing the combat onto the savanna. To be an expert, the agent needs to master both savanna and jungle fights to win in any situation it could possibly encounter. \n\n\\medskip\n\\noindent\nHere is another simple (and rather extreme) example of the relevance of the two notions from the game of nim.\n\n\\medskip\n\\noindent\n{\\bf Example 2:} Let $n \\in \\mathbb{N}$ and consider the nim board $[2,1,1,\\ldots,1]$ with one pile of 2 counters and $n$ piles of $1$ counter, where $n$ is a large number, for instance 100.\nAn agent, even after relatively little training, becomes a champion in this game. \nThe agent very quickly learns that if there are two counters in the first pile, it always has to remove either $1$ or $2$ counters from that pile. Thus, with only relatively little self-play training, the agent becomes an optimal agent of type 1, \\textit{i.e.} a champion, that essentially learns and memorises the initial \"opening\" moves. But the task of selecting the best move in a general position $[2,v_2,v_3,\\ldots,v_n]$\nwith $v_j \\in \\{0,1\\}, j \\in \\{2,3,\\ldots, n\\}$ is equivalent to evaluating the parity of $[v_2,v_3,\\ldots,v_n]$. 
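To make this equivalence concrete, the optimal move on the first pile is determined solely by the parity of the 1-piles. The following minimal Python sketch assumes normal play (the player taking the last counter wins); the function name is ours:

```python
def counters_to_remove(v):
    """Winning first move on a nim board v = [2, v2, ..., vn] with each
    v_j in {0, 1}, assuming normal play (last player to move wins).
    The nim-sum of the position is 2 XOR (parity of the 1-piles), so the
    move on the first pile depends only on that parity."""
    ones_parity = sum(1 for x in v[1:] if x == 1) % 2
    # Even parity: take the whole 2-pile; odd parity: reduce it to 1.
    return 2 if ones_parity == 0 else 1
```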
\nThus, due to the difficulty for the neural network of modelling the parity function, especially without any serious incentive to learn to do so, a champion agent for this nim board is not expected to become an expert agent. Our experiments show that for nim with a small number of heaps, like $n=15$, an agent becomes a champion after just 200 epochs of training. \n\nYet, despite quickly becoming a champion, it fails to become an expert even after extensive training over thousands of epochs. The experimental setup and configuration are described in Section \\ref{sec:implement_alphazero}. \n\n\n\\section{Preliminary experimental results}\n\\label{sec:prelimiary}\n\nAmong the various types of neural networks we investigated and surveyed in Section \\ref{sec:background}, including MLPs, RNNs, Self-Attention networks and PonderNet, RNNs are the only ones that seem to possess the potential to model the parity function perfectly through learning by gradient descent while processing bitstrings of any length, owing to their unrolling mechanism. Thus, we designed a range of experiments aimed at investigating empirically whether, or to what extent, RNNs can model the parity function from bitstrings, and the nim-sum function from nim positions that could possibly arise during actual game play. \n\n\\subsection{Model parity function using value network}\n\\label{sec:value_network}\nWe used an LSTM as the main component of the value network and discovered that a single-layer LSTM with hidden size 128 trained on bitstrings of length 20 can simulate the parity function, indicating that there exists a set of parameters for this LSTM model that enables it to model the parity function. The architecture of the value network we used is shown below. The bitstrings are processed by an LSTM layer, followed by a linear layer whose output is then squeezed into the range (0, 1) by a sigmoid function. 
\n\n\\begin{enumerate}[label={(\\arabic*)}, noitemsep]\n \\item Single LSTM layer with 128 nodes\n \\item Linear layer with 1 node\n \\item Sigmoid function\n\\end{enumerate}\n\nBatch normalization layers \\cite{ioffe2015batch} are not used in the value network, unlike in \\cite{silver2017mastering}, because we found they adversely impact the performance of the model. We use the Adam optimizer \\cite{kingma2014adam} to apply the gradients to the weights of the neural networks, and each gradient calculation consumes 128 bitstrings. All our experiments in this paper ran on NVIDIA A100 GPUs in a High Performance Computing Cluster \\footnote{\\url{https:\/\/docs.hpc.qmul.ac.uk\/}}.\n\nWe conducted a series of experiments investigating the impact of the length of the bitstrings on the difficulty for value networks of finding the right parameters to model the parity function through gradient descent. All the training and testing data were generated randomly as in \\cite{banino2021pondernet}. The value networks were evaluated on bitstrings with 10 more bits than those they were trained on, to test their ability on extrapolation tasks and to ensure that the test data do not overlap with data seen in training. The results are shown in Fig. \\ref{fig:value_network}. The left graph shows the performance of the model trained on bitstrings whose length is $h\\in\\{20, 40, 60, 80, 120\\}$. It is clear that the difficulty in modelling the parity function rises as the length of the bitstrings grows. \n\nWe also found that a model trained on longer bitstrings is more sensitive to changes in hyperparameters, like the learning rate. As shown in the right graph in Fig. \\ref{fig:value_network}, the value network trained on bitstrings of length 20 is immune to varying learning rates, evidenced by the three overlapping lines, each of which shows the prediction accuracy on bitstrings of length 20. 
The value network trained on bitstrings of length 40 is not as impervious to changes in the hyperparameters as those of length 20. The model trained on bitstrings of length 80 requires a significantly higher number of training steps to converge to the parity function, and the ones trained using learning rates of 0.0003 and 0.0005 failed to make predictions better than random guessing.\n\nIt should be noted that the amount of data used in training and the way the data are sampled affect the convergence of the model, but the impact of these stochastic factors is of no significance, because we ran myriad experiments with a range of configurations, \\textit{i.e.} various combinations of numbers of LSTM layers and learning rates, using thousands of GPU hours, out of which the statistically stable properties emerged. The results shown in these graphs are representative examples. We did not discover any apparent patterns pertaining to the effects of the size of the training dataset or the impact of learning rates on the convergence of the model. \n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{figures\/bistrings_acc.eps}\n\\hspace{1ex}\n\\includegraphics[width=0.45\\textwidth]{figures\/lr_acc.eps}\n\\caption{\\label{fig:value_network} The accuracy of the value network on the extrapolation task as the training progresses. The model was trained for 1 million steps in every experiment, each step consuming 128 bitstrings. The training and testing time each run takes ranged from 90 to 165 minutes for bitstrings whose length ranges from 20 to 120. }\n\\end{figure}\n\nAs observed from these steep learning curves, the polarity of the performance of the value networks during the training process might indicate that they are not learning to simulate the parity function incrementally, but in a way resembling an epiphany. 
During training, the neural networks that end up converging to the parity function pass through two states with a transient transition between them: either they cannot model the parity function at all, or they can model it perfectly, resembling how we humans learn the parity function. We suggest that before generalization the neural networks were in an incubation period in which they were ruling out incorrect sets of parameters and steering the update direction towards uncovering the right one that enables them to model the parity function. A recent study on generalization of neural networks over small algorithmic datasets discovered a similar phenomenon, dubbed \"grokking\", where the generalization performance of the neural networks suddenly shoots from random guessing to perfect generalization, which can also occur long after the onset of overfitting \\cite{power2022grokking}. We have made some remarks on this phenomenon in Section \\ref{sec:conclusion}.\n\n\\subsection{Learn winning moves using policy networks}\n\\label{sec:policy_network}\n\nAlphaGo, the precursor of AlphaZero, employed a mixed training strategy where the policy networks were trained by supervised learning from 30 million board positions obtained from the KGS Go Server, preceding further training by reinforcement learning (RL) \\cite{silver2016mastering}. The policy network was trained on randomly sampled state-action pairs $(s, a)$ where $s$ is a board state and $a$ is an action that the experts took in that board position. \n\nAn agent might be excellent at evaluating positions that naturally occur in actual game-play, but might be poor at evaluating \"artificial\" positions that cannot arise in practice. One reason could be that the agent follows a policy that prevents such positions from occurring. 
Although temperature settings and Dirichlet noise are applied to boost the exploration of all possible legal moves, some moves might still have a low chance of being tried \\cite{silver2017mastering}. Thus the role of the policy network in nim becomes important for our analysis. When AlphaZero plays chess (Go or shogi) after being trained, it typically garners a collection of sample trajectories and returns heuristics of the search guided by the policy and value networks. If the search tree fails to branch enough, the critical moves may not be explored (see the example in Fig. \\ref{fig:LC3}), in which case we cannot expect the quality of the search to be reliable. To illustrate the issue, consider the following example, in which the nim positions can be categorized into 4 classes. \n\n\\medskip\n\\noindent\n{\\bf Example 3:} Let $n \\in \\mathbb{N}$ be a large number and consider nim positions $[v_1,v_2,v_3,\\ldots,v_n]$ with $n$ rows, each containing either 0, 1 or 2 counters.\n\\noindent\nSuch positions fall into the following 4 different categories, as shown in Table \\ref{tab:winning_move}.\n\n\\begin{table}[h]\n\\caption{Winning Moves on nim Board Positions} \n\\label{tab:winning_move}\n\\centering \n\\begin{tabular}{ll} \n\\toprule \nWinning Move & Nim Board Positions \\\\ [0.5ex]\n\\midrule \nDoes not exist & even number of heaps with 1, and even number of heaps with 2\\\\ \nRemove 2 from a heap with 2 & even number of heaps with 1, and odd number of heaps with 2 \\\\\nRemove 1 from a heap with 1 & odd number of heaps with 1, and even number of heaps with 2 \\\\\nRemove 1 from a heap with 2 & odd number of heaps with 1, and odd number of heaps with 2 \\\\ \n\\bottomrule \n\\end{tabular}\n\\end{table}\n\nThe number of search paths through the game tree from such an initial position is given by $\\prod_{j=1}^{n} v_j$, which for large values of $n$ makes it computationally infeasible to traverse every path. 
However, after training, the policy network might have learned to reduce the branching factor, so that at each step it mainly selects, out of the 3 move types, the one move that has a better chance of leading to a winning position. \n\nIn order to investigate whether the policy network could output a probability distribution that favours the winning move on the boards shown in the example, we considered a supervised classification problem with 3 classes corresponding to the three classes of nim positions with winning moves available in the example. The input to the policy network is a won position $[v_1,v_2,\\ldots,v_n]$, \\textit{i.e.} a position of type 2, 3 or 4 as described in the above example. Type 1 board positions are not taken into consideration as they do not have winning moves. The output is the type of the winning move derived from the nim-sum (equivalently, one of the types 2, 3 or 4), labelled as 0, 1 or 2. \nThe training and testing data are randomly generated nim positions that could possibly arise during actual game play. The architecture of the policy networks consists of one or multiple LSTM layer(s), a batch normalization layer, a ReLU function, a linear layer with 3 nodes corresponding to the 3 move types, and a softmax function, as shown below. \n\n\\begin{enumerate}[label={(\\arabic*)}, noitemsep]\n \\item LSTM layer(s), each of which contains 128 nodes\n \\item Batch normalization\n \\item A rectifier nonlinearity\n \\item Linear layer with 3 nodes\n \\item Softmax function\n\\end{enumerate}\n\nOur experiments tested board positions of nim with different numbers of heaps ($h$), ranging from 7 to 9, and used policy networks with different numbers of layers ($l$), from 1 to 5, and up to 10 for 9-heap nim. The training and testing datasets are evenly balanced across the three labels. The results are shown in Fig. 
\ref{fig:policy_network}, in which the top 3 plots show the results on training data and the bottom 3 plots on testing data. \n\nWe discovered that in none of the experiments do the policy networks predict the correct winning moves with more than 80 percent accuracy, showing that the policy networks fail to model the nim-sum accurately. The performance of policy networks trained on board positions from 7-heap nim is consistent, while the testing accuracy of the policy networks trained on those from 8 heaps is more volatile. The policy network trained on board positions of 9-heap nim cannot predict the winning moves better than a random policy, showing that the difficulty for the policy network of modelling the nim-sum also grows as the length of the bitstrings increases, which is similar to our findings on the value network. \n\nWe did extensive experiments using different numbers of LSTM layers in an attempt to find an architecture capable of modelling the nim-sum, like the LSTM model that can model the parity function perfectly in Section \\ref{sec:value_network}, but to no avail. Thus, we present the results of policy networks consisting of varying numbers of layers to illustrate this discovery and to show that it occurs across various network architectures. 
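For reference, the labels used in this classification task follow directly from the four cases in Table \ref{tab:winning_move}; a minimal Python sketch (the function name and the None convention are ours), assuming normal play:

```python
def winning_move_label(position):
    """Label a nim position whose heaps all hold 0, 1 or 2 counters,
    following the four cases of the winning-move table: returns None
    when no winning move exists (nim-sum is 0), otherwise the class
    label 0, 1 or 2 used to train the policy network."""
    odd_ones = sum(1 for h in position if h == 1) % 2 == 1
    odd_twos = sum(1 for h in position if h == 2) % 2 == 1
    if not odd_ones and not odd_twos:
        return None  # lost position: no winning move exists
    if not odd_ones and odd_twos:
        return 0     # remove 2 from a heap with 2
    if odd_ones and not odd_twos:
        return 1     # remove 1 from a heap with 1
    return 2         # remove 1 from a heap with 2
```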
\n\n\\begin{figure}[H]\n \\centering\n \\begin{subfigure}[b]{0.32\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/policy7_train_acc.eps}\n \\caption{$h=7$ heaps}\n \\end{subfigure}\n \\hspace{0em}\n \\begin{subfigure}[b]{0.32\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/policy8_train_acc.eps}\n \\caption{$h=8$ heaps}\n \\end{subfigure}\n \\hspace{0em}\n \\begin{subfigure}[b]{0.32\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/policy9_train_acc.eps}\n \\caption{$h=9$ heaps}\n \\end{subfigure}\n\\end{figure}\n\\vspace{-1.5em}\n\\begin{figure}[H]\n\\centering\n\\begin{subfigure}[b]{0.32\\textwidth}\n\\centering\n\\includegraphics[width=\\textwidth]{figures\/policy7_test_acc.eps}\n\\caption{$h=7$ heaps}\n\\end{subfigure}\n\\hspace{0em}\n\\begin{subfigure}[b]{0.32\\textwidth}\n\\centering\n\\includegraphics[width=\\textwidth]{figures\/policy8_test_acc.eps}\n\\caption{$h=8$ heaps}\n\\end{subfigure}\n\\hspace{0em}\n\\begin{subfigure}[b]{0.32\\textwidth}\n\\centering\n\\includegraphics[width=\\textwidth]{figures\/policy9_test_acc.eps}\n\\caption{$h=9$ heaps}\n\\end{subfigure}\n\\caption{\\label{fig:policy_network} The training and testing performance of the policy network on $h\\in\\{7, 8, 9\\}$ heaps of nim. $h$ denotes the number of heaps in the game and $l$ the number of LSTM layers in the policy network. The top 3 plots show the model accuracy on the training data, and the bottom 3 plots the accuracy on the testing data. Every policy network was trained for one hundred thousand steps, with each step consuming 128 bitstrings. Each run finished within 30 minutes.}\n\\end{figure}\n\n\nThe size of the dataset and how the data are generated affect the performance of the value network, and they likewise affect that of the policy network. It is possible that using a larger dataset would help the policy network to learn the nim-sum function. 
However, the state space (\\textit{i.e.} the size of the dataset) in our experiments was relatively small, and positions were visited many times during training iterations. One crucial point to understand is that learning the nim-sum or parity function, even on smaller boards, is non-monotone. Thus it is not the case that the policy network gradually increases its knowledge and steadily improves. Instead, the non-convergence typically manifests itself in random fluctuations where the performance of the policy network fails to improve on certain positions, as shown in the learning curves. \n\n\\section{Reinforcement Learning for nim}\n\\label{sec:reinforcement_learning}\nThere exist some open-source implementations of AlphaZero on GitHub, but because we needed some additional functionalities that none of them provide, for instance calculating the Elo rating of the agent being trained against its ancestors, and evaluating the accuracy of the policy network's output against the winning move derived from the nim-sum formula as training progresses, we implemented the AlphaZero algorithm on our own, using PyTorch \\cite{paszke2019pytorch} for neural network training and Ray \\cite{moritz2018ray} for running simulations in parallel. We made considerable efforts to ensure that our implementation follows the algorithms closely at a granular level of detail, as described in \\cite{silver2017mastering, silver2018general}, while making necessary changes to the neural network architectures tailored to nim, which will be specified in Section \\ref{sec:implement_alphazero}. \n\n\\subsection{Implementing an AlphaZero style algorithm for nim} \n\\label{sec:implement_alphazero}\nThe AlphaZero algorithm starts training from a clean slate with no specific game knowledge, besides knowing the rules of the game \\cite{silver2018general}. The policy network, the value network and MCTS are the three pillars of the AlphaZero algorithm. 
The policy network outputs a probability distribution over all the actions $\\mathcal{A}$. $P(s, a)$ stands for the prior probability associated with action $a \\in \\mathcal{A}$ at state $s$. The probabilities of the illegal actions are set to zero and the remaining probabilities are re-normalized so that the sum of all the probabilities remains one. The policy network narrows down the search to the actions with a high probability of leading to a winning position, hence reducing the breadth of the search tree. The value network outputs a scalar value $v \\in [-1, 1]$ estimating the expected outcome from position $s$ if following the actions suggested by the policy network. A higher value of $v$ indicates that the current player, who is taking a move at position $s$, has a higher chance of winning, and vice versa. The value of a leaf node is predicted by the value network; without it, the value can only be known at the end of the game, where it is 1 when the current player has won the game or -1 when they have lost. The value network thereby effectively reduces the depth of the search tree. \n\nThe policy and value networks share the LSTM layer, but use two separate heads, the policy head and the value head. We did the experiments on nim of 5, 6 and 7 heaps and tweaked the network architectures and configuration to adapt to the different board sizes. The shared layers of the policy and value networks are one LSTM layer with a hidden size of 128 for all three nim games. It is natural to increase the number of layers as the board size grows; however, we found that larger models are detrimental to the performance and tend to destabilize the training process. The output of the LSTM layer is then passed into two heads. The policy head consists of 25, 36 and 48 nodes for 5-, 6- and 7-heap nim respectively; the output of the policy head goes into a softmax function converting these logits into probabilities that sum to 1. 
The value head contains a single node that outputs a scalar value, which goes into a tanh activation function squeezing it into the range $[-1, 1]$. \n\nThe MCTS starts with the root node of the search tree corresponding to the current state the player is taking an action in. Each node represents a state encountered in the game play, and each edge an action. The tree is constructed in the course of running a predefined number of simulations, each of which starts with the root node and ends with a leaf node, following a sequence of actions selected using Formula \\ref{eqn:action_selection} \\footnote{See the python scripts in the reinforcement learning folder in our GitHub repository for the implementations of all the formulas used in this section.}. We ran a predefined number of simulations for each move, and during training we collect 100 episodes of interaction data in the form of $(s, \\pi , r)$ to train the policy network with the cross-entropy loss and the value network with the mean squared error. \n\\begin{equation}\n a_t = \\argmax_a(Q(s,a)+U(s,a))\n \\label{eqn:action_selection}\n\\end{equation}\n\nwhere $Q(s,a)$ is the averaged action value across simulations, calculated by\n\n\\begin{equation}\n\\label{eqn:q_value}\n Q(s, a) = \\frac{1}{N(s, a)}\\sum_{s'|s, a\\rightarrow s'}V(s')\n\\end{equation}\n\nin which $N(s, a)$ is a counter that records the number of times action $a$ has been taken from state $s$, $s'|s, a\\rightarrow s'$ denotes that action $a$ is taken at state $s$ and this simulation terminates at state $s'$, and $V(s')$ represents the value of the end state of a simulation from the perspective of the current player, obtained either from the value network if $s'$ is an intermediate state, or from the game itself as a reward if it is a terminal state. 
The $U(s,a)$ term is calculated using\n\n\\begin{equation}\n\\label{eqn:puct}\n U(s,a)=c_{puct}P(s,a)\\dfrac{\\sqrt{\\sum_b{N(s,b)}}}{1 + N(s,a)}\n\\end{equation}\n\nwhere $c_{puct}$ is a constant controlling the level of exploration. AlphaGo \\cite{silver2016mastering}, AlphaGo Zero \\cite{silver2017mastering} and AlphaZero \\cite{silver2018general} all leave $c_{puct}$ unspecified. We found that the $c_{puct}$ value affects the performance of AlphaZero on nim significantly, because setting it too low discourages exploration, while setting it too high weighs down the action value, impairing the effectiveness of the search depth. \\cite{tian2019elf} found that setting $c_{puct}=1.5$ yields satisfactory results, but this value, along with other sensible ones like $c_{puct} \\in \\{1, 1.5, 2, 3\\}$, works poorly for nim. We thus adopted another formula \\cite{schrittwieser2020mastering} to calculate $U(s,a)$, as shown below.\n\n\\begin{equation}\n\\label{eqn:u(s,a)}\n U(s,a)=P(s,a)\\cdot\\dfrac{\\sqrt{\\sum_b{N(s,b)}}}{1 + N(s,a)}\\left(c_1 + \\log\\left(\\dfrac{\\sum_b{N(s,b)} + c_2 + 1}{c_2}\\right)\\right)\n\\end{equation}\n\nwhere $c_1=0.25$ and $c_2=19652$. To further encourage exploration, Dirichlet noise is added to the prior probability $P(s,a)$ of the root node where the search begins. The Dirichlet noise is indispensable as it ensures that the search tree is widely branched, thus avoiding always visiting the moves with high prior probability. \n\n\\begin{equation}\n P(s,a) \\leftarrow (1-\\epsilon) \\cdot P(s,a) + \\epsilon \\cdot \\eta_{a}\n \\label{eqn:add_noise}\n\\end{equation}\n\nwhere $\\eta_a$ is sampled from the Dirichlet distribution Dir($\\alpha$), in which $\\alpha$ is set to 0.35. $\\epsilon$ is a constant set to 0.25 during training, but it is set to 0 during evaluation to negate the effect of the Dirichlet noise. The values of $\\alpha$ used for chess, shogi and Go are 0.3, 0.15 and 0.03 respectively. 
The $\\alpha$ value should be inversely proportional to the approximate number of legal moves at a given position; as the average number of legal moves in the nim games we ran experiments on is smaller than that of chess, we opted for the higher value $\\alpha=0.35$. Although in theory $\\alpha$ should be set to 0.5, in practice setting $\\alpha$ to 0.35 yields better outcomes. The left arrow denotes that the prior probability is reassigned to the value on the right. \n\n\\begin{equation}\n \\label{eqn:u_s_a}\n U(s,a) \\propto \\dfrac{P(s,a)}{1 + N(s,a)}\n\\end{equation}\n\nAs shown in Equation \\ref{eqn:u_s_a}, $U(s,a)$ is proportional to the prior probability $P(s,a)$. The visit count $N(s,a)$ in the denominator relatively enlarges the prior probability of nodes visited less frequently, in order to boost exploration. At each state $s$, the action selection is jointly determined by the action value $Q(s,a)$, the visit count $N(s, a)$ and the prior probability $P(s,a)$ obtained from the policy network. An action with a lower visit count, higher prior probability and higher value has a better chance of being chosen. Thus, when the policy network fails to assign higher probability to winning moves, the search is directed to nodes with less chance of leading to a winning state, making the search less effective. A similar problem exists for the value network: if it fails to estimate the correct expected outcome of the game, search branches that lead to a winning state are prematurely cut off.\n\nAfter the simulations are finished, the search returns a probability distribution over all the actions according to Formula \\ref{eqn:pi_action_selection}, from which an action is sampled to take at board state $s$. 
\n\\begin{equation}\n \\boldsymbol\\pi(\\textbf{a}|s) = \\dfrac{N(s,\\textbf{a})^{1\/\\tau}}{\\sum_{b}N(s,b)^{1\/\\tau}}\n \\label{eqn:pi_action_selection}\n\\end{equation}\n\nwhere $\\tau$ is a temperature that changes according to the number of moves made during game play. We call $\\boldsymbol\\pi(\\textbf{a}|s)$ the posterior probabilities; they are related to the prior probabilities, as they are derived from MCTS simulations partially guided by the prior probabilities from the policy network. For the first 3 moves, we set the temperature to $\\tau=1$ so that the chance of each action being sampled is proportional to its visit count. For the remaining moves of the game, the temperature is set to $\\tau=0$, so that the action taken is the one with the highest visit count. Note that during evaluation, the temperature is set to $\\tau=0$ for all moves.\n\nThe policy network serves as the lighthouse that guides the search towards moves with higher chances of winning the game. If it fails to work as intended, the search is misguided; when the search space is sufficiently large and the policy is skewed towards losing moves, MCTS becomes equivalent to, or worse than, brute-force search. In addition, the improvement of the policy network relies completely on the heuristics from the search. Thus, if the policy network is weak, a vicious cycle forms in which the poor policy misleads the search and the ineffective search leads to poor improvement of the policy, as will be shown in Section \\ref{subsec:policy_value}. \\cite{danihelka2021policy} proposed a policy improvement algorithm in Gumbel AlphaZero which ensures that the search heuristic improves the policy network consistently. 
However, in that approach the target of the policy improvement entails the approximation from the value network (see Section 4: Learning an Improved Policy in \\cite{danihelka2021policy} for the details), and due to the parity-related problems that incapacitate the policy and value networks, it still might not be able to handle nim with large board sizes.\n\nSo far we have argued that various parts of the AlphaZero algorithm (\\textit{e.g.} the policy network and value network) have problems learning what would be required for overall success in mastering nim. In the next section we present the experimental design and results of the AlphaZero algorithm on nim of varying board sizes, along with the method used to evaluate its performance. \n\n\\subsection{Experimental setup and results}\n\\label{sec:results}\nIn this section, the results are presented along with the detailed configurations of the experiments. We discovered that the AlphaZero algorithm applied to nim is sensitive to the choice of configuration. Our choices might not guarantee the best results, since many factors affect the training process, as shown in the last section, and it is intractable to try out all options in the search space; nevertheless, the configurations presented here are the ones that gave us the best attainable results in the experiments we conducted. The policy and value networks share one LSTM layer and have separate heads. They were trained simultaneously on the experiences (rollout data) collected during self-play. Apart from the number of nodes in the heads, which differs to accommodate the different action spaces, the AlphaZero algorithm uses the same architecture for nim with each of the three heap counts. We chose a large number of simulations at each move, and increased it as the number of heaps grows, not only because more simulations lead to better heuristics, but also because they offset the effect of varying the hyperparameters to which the algorithm is sensitive. 
During both training and evaluation, each move ran $s=\\{50, 60, 100\\}$ simulations on nim of $h=\\{5, 6, 7\\}$ heaps respectively. The simulations ran on 8 CPUs in parallel. The comparatively large number of simulations on 7-heap nim is used because we intend to eliminate any negative impact on the performance of the algorithm from an insufficient number of simulations, although this incurs a hefty computational cost. Each heap contains an odd number of items. For instance, the initial board of 5-heap nim, as shown in Fig. \\ref{fig:nim_board}, consists of $[1, 3, 5, 7, 9]$. Every heap on the board is represented in unary, and the heaps are separated by $-1$:\n\n\\[ [1, -1, 1, 1, 1, -1, 1, 1, 1, 1, 1, -1, 1, 1, 1, 1, 1, 1, 1, -1, 1, ..., 1, 1, 1]\\]\n\nEvery board position of 5-heap nim is thus encoded as a string of length 29; similarly, those of 6 and 7 heaps are encoded as strings of length 41 and 55. The state spaces of 5-, 6- and 7-heap nim contain 3840, 46080 and 645120 positions, and their action spaces have sizes 25, 36 and 49, respectively. On average, a game played randomly by two players lasts 10 moves on 5-heap nim, 13 moves on 6 heaps and 16 moves on 7 heaps.\n\n\\subsubsection{Elo rating measurement}\n\\label{subsec:elo_rating}\nThe Elo rating, commonly used in chess, is an approach to ranking the players of multiplayer games in terms of their competitiveness. AlphaZero adopted the Elo rating to evaluate its performance against other algorithms and programs such as AlphaGo Zero, AlphaGo Lee and Stockfish. Unfortunately, no existing nim agent with an attached Elo rating reflecting its competitiveness is available. 
Thus, we opted for a variation of self-play Elo rating \\cite{tian2019elf} to measure the relative strength of an agent and monitor the training progress. In this approach, the relative strength of an agent is evaluated against all its ancestors, whose ratings are updated every time a newly trained agent joins, ensuring that the attached ratings reflect their competitiveness. The Elo rating measures a player in terms of its ability to defeat other players. \n\nIn our self-play Elo rating system, every new agent is assigned an initial score of 1000. At the end of each training iteration, the trained agent is saved into a reservoir of the agents trained before it, and its Elo rating is calculated against all the trained agents stored in the system. The agent being trained is denoted as Player {A} and its opponent, one of its predecessors, as Player {B}. Both of them have an expected score representing their probability of winning the match, calculated for Player {A} by\n\\begin{equation}\n E_A = \\dfrac{1}{1 + 10^{(R_B-R_A)\/400}}\n\\end{equation}\nAnalogously, the expected score for Player {B} is calculated by\n\\begin{equation}\n E_B = \\dfrac{1}{1 + 10^{(R_A-R_B)\/400}}\n\\end{equation}\nThere are only two possible outcomes of the match for Player {A}: won or lost; in nim, draws do not exist. If Player {A} won the game, its Elo rating is updated by\n\\begin{equation}\n R_A = R_A + K(1 - E_A)\n\\end{equation}\nwhere K is called the K-factor. The K value is usually set to 16 for masters (strong players) and 32 for novices (weak players). In our setting, the K value for players who have engaged in more than 20 tournaments is set to 32, and otherwise to 16, on the consideration that the more often a player engages in matches, the more accurately the Elo rating reflects its strength and hence the higher the K value should be. 
The updated Elo rating for Player {B} is\n\\begin{equation}\n R_B = R_B + K(0 - E_B)\n\\end{equation}\n\nThis approach is self-contained, relying on no external program. Its limitation is that the Elo rating only measures the performance of an agent against its predecessors, meaning that it should not be compared with the ratings of agents outside the group. However, the ratings can be used as an indicator of the performance of the agents, and as a baseline for future research.\n\nWe monitored the self-play Elo rating of the AlphaZero agent on nim of 5, 6 and 7 heaps. As shown in Fig. \\ref{fig:elo_rating}, the self-play Elo rating of the agent being trained grows as the training progresses, indicating that it is becoming more competitive. The rating of the AlphaZero agent for 5-heap nim grew rapidly from the start of training. In comparison, the growth of the agent on 6-heap nim is relatively slow, and that on 7-heap nim stagnates after 420 iterations, showing that while the agent is becoming more competitive and is on the path towards becoming a champion, there seems to be a ceiling on its competitiveness that is hard to crack. This bottleneck, as shown in the next section, is caused by the failure of the policy and value networks on nim.\n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{figures\/elo_rating.eps}\n\\caption{\\label{fig:elo_rating} The self-play Elo rating of the agent being trained on nim of 5, 6 and 7 heaps respectively, calculated at the end of every training epoch against all the agents archived in the pool. Calculating the Elo rating during training takes a huge amount of time. 
Our program ran for roughly 200 hours with the above-mentioned configurations to obtain the results for 7-heap nim.}\n\\end{figure}\n\n\\subsubsection{Performance of policy and value network}\n\\label{subsec:policy_value}\nTo examine whether the agent is capable of becoming an expert agent at nim, we devised two accuracy measurements for the policy and value networks. The action probability distribution yielded by the policy network should be biased towards the moves that have a higher chance of leading to winning the game. In nim, the winning moves can be calculated by the nim-sum, and accordingly the accuracy of the policy network is evaluated by comparing its most probable moves against the winning moves. The same policy measurement was also used in \\cite{danihelka2021policy} as Policy Top 1 Accuracy, where the top 1 refers to the move with the highest probability. \n\nThe AlphaZero policy is measured against a random policy. As shown in Fig. \\ref{fig:alphazero_policy_accuracy}, the AlphaZero policy surpasses the random policy by a significant margin on 5-heap nim, but the advantage diminishes on larger board sizes. The AlphaZero policy on 7-heap nim is tantamount to the random policy, because the inaccurate policy results in poor heuristics, which in turn lead to poor policy improvement. It is undoubtedly true that as more heaps are added to the game, the growing action space is one important factor that complicates learning the winning moves for the policy network. However, recall from Section \\ref{sec:policy_network} that even when the size of the action space remains unchanged, the policy network still faces increasing difficulty as more heaps are added. 
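The ground truth for this measurement follows from the classical nim-sum theory (Bouton's theorem): a move is winning exactly when it leaves the opponent a position whose nim-sum is zero. A minimal sketch of how such labels can be generated (illustrative code, not our experiment harness):

```python
from functools import reduce

def nim_sum(heaps):
    # XOR of all heap sizes; a position is won for the player to move
    # iff its nim-sum is nonzero.
    return reduce(lambda x, y: x ^ y, heaps, 0)

def winning_moves(heaps):
    """All moves (heap_index, counters_taken) leaving the opponent nim-sum 0."""
    moves = []
    for i, h in enumerate(heaps):
        for take in range(1, h + 1):
            rest = list(heaps)
            rest[i] = h - take
            if nim_sum(rest) == 0:
                moves.append((i, take))
    return moves
```

On the initial 5-heap board $[1, 3, 5, 7, 9]$ this yields a single winning move: taking all 9 counters from the last heap.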
\n\\begin{figure}[h]\n\\centering\n\\begin{subfigure}[b]{0.32\\textwidth}\n\\centering\n\\includegraphics[width=\\textwidth]{figures\/policy_accuracy_5.eps}\n\\caption{$h=5$ heaps}\n\\end{subfigure}\n\\begin{subfigure}[b]{0.32\\textwidth}\n\\centering\n\\includegraphics[width=\\textwidth]{figures\/policy_accuracy_6.eps}\n\\caption{$h=6$ heaps}\n\\end{subfigure}\n\\begin{subfigure}[b]{0.32\\textwidth}\n\\centering\n\\includegraphics[width=\\textwidth]{figures\/policy_accuracy_7.eps}\n\\caption{$h=7$ heaps}\n\\end{subfigure}\n\\caption{\\label{fig:alphazero_policy_accuracy} The accuracy of the policy network measured against the winning moves on nim of 5, 6 and 7 heaps. The AlphaZero policy is far superior to the random policy on smaller boards, but as the board size grows the accuracy of the policy drops drastically, to the extent that it is equivalent to a random policy on 7-heap nim.}\n\\end{figure}\n\nThe value network yields the estimated outcome of the game at a given position when following the moves suggested by the policy network. In nim, the won positions are those in which a winning move exists. Based on this property, we monitor the accuracy of the value network. All the possible board positions that can arise from the initial board position of 5-heap nim are evaluated; due to the large state spaces of 6- and 7-heap nim, their evaluation is conducted on 10000 randomly sampled board positions. A prediction is considered correct if the value network outputs a positive number on a won position and a negative number on a lost position. The accuracy of the value network is shown in Fig. \\ref{fig:alpha_value_accuracy}. As the number of heaps increases, so do the state space and the board size, and the value network faces a rising hindrance in precisely evaluating board positions. The value network on 7-heap nim barely outperforms random guessing. 
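For concreteness, the label set for this measurement can be generated exhaustively for the 5-heap game. The sketch below (our own illustrative code, with the initial board hard-coded) enumerates every position reachable from $[1, 3, 5, 7, 9]$ and marks it won or lost by its nim-sum, reproducing the state-space count of 3840 quoted earlier:

```python
from functools import reduce
from itertools import product

INITIAL = (1, 3, 5, 7, 9)  # initial 5-heap board

def is_won(heaps):
    # Won for the player to move iff the nim-sum (XOR of heap sizes) is nonzero.
    return reduce(lambda x, y: x ^ y, heaps, 0) != 0

# Each heap can independently hold anything from 0 up to its initial size,
# so the reachable positions form a Cartesian product of ranges.
positions = list(product(*(range(h + 1) for h in INITIAL)))
labels = {p: is_won(p) for p in positions}
```

A value-network output is then scored by its sign against these labels; the empty board $(0,0,0,0,0)$ is labelled lost, matching normal-play nim where the player to move has no move left.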
This result aligns with that of the experiment shown in Section \\ref{sec:value_network}. \n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.45\\textwidth]{figures\/value_accuracy.eps}\n\\caption{\\label{fig:alpha_value_accuracy} The accuracy of the value network on nim of 5, 6 and 7 heaps. The accuracy on the board positions of 5-heap nim rises steadily and reaches 90 percent at 500 training iterations. The accuracy on the board positions of 6-heap nim exceeds 60 percent, but that on the positions of 7-heap nim fluctuates near 50 percent.}\n\\end{figure}\n\nThe policy network learns from the heuristics $ \\boldsymbol\\pi(\\textbf{a}|s)$ derived from the MCTS, as shown in Formula \\ref{eqn:pi_action_selection}. The value network learns from the actual game outcome. To probe how the policy and value networks fit their targets, we keep track of the training loss of each during the training process, as shown in Fig. \\ref{fig:alpha_value}. It is salient that both the value and policy networks are able to gradually fit their targets, empowering the agent to become increasingly competitive. However, the difficulty for the policy network to digest the heuristics, and for the value network to model the expected outcome of the game, grows as the board size increases, which is a major problem that impedes the agent from becoming an expert. \n\n\\begin{figure}[H]\n\\centering\n\\includegraphics[width=0.4\\textwidth]{figures\/policy_loss.eps}\n\\hspace{1em}\n\\includegraphics[width=0.4\\textwidth]{figures\/value_loss.eps}\n\\caption{\\label{fig:alpha_value} The loss of the policy network (left) and the loss of the value network (right) on nim of 5, 6 and 7 heaps. 
For both neural networks, the larger the board size grows, the harder it becomes to fit the heuristics from the MCTS.}\n\\end{figure}\n\nThe gradually dropping loss of both the policy and value networks, coupled with the rising Elo rating of the agents, indicates that the agent is learning to become a champion. However, the dropping accuracy of the policy and value networks as the size of the board grows shows that it is tremendously challenging for the agent to become an expert. On 7-heap nim, the policy and value networks can memorize the heuristics from the MCTS and the actual outcomes at the end of games. But this merely enables the agent to become more competitive in comparison with its ancestors; it cannot guide the MCTS effectively enough to form a positive improvement loop. This illustrates our general finding that nim on large boards, and impartial games in general, are a challenge for reinforcement learning algorithms. \n\n\\subsubsection{Analysis of AlphaZero on nim positions}\n\\label{subsec:alphazero_nim_positions}\n\n\\begin{figure}[H]\n \\centering\n \\subfloat[\\label{pos:5heaps_1}{5 heaps: [1, 3, 5, 7, 9]}]{\\includegraphics[width=0.35\\textwidth]{figures\/5piles_nim_position.eps}}\n \\centering\n \\hspace{1.5em}\n \\subfloat[\\label{tab:heaps5_1}Evaluations from the policy and value network for the 2 moves with the highest prior probabilities. 
In 10,000 MCTS simulations, these are the only two moves selected at this position.]{\n \\begin{tabular}{p{0.25\\textwidth}p{0.08\\textwidth}p{0.05\\textwidth}}\n \\toprule\n \\textbf{Move} & e9 & a1 \\\\ \n \\midrule\n \\textbf{Winning Move} & yes & no \\\\ \n \\midrule\n \\textbf{Prior Probability} & 97.9\\% & 1.9\\% \\\\\n \\midrule\n \\textbf{Win Probability} & 99.5\\% & 5.0\\% \\\\\n \\midrule\n \\textbf{V-value} & 0.97 & -0.89 \\\\\n \\bottomrule\n \\end{tabular}\n \\vspace{1em}\n }\n \\caption{\\label{fig:alphazero_5piles_1} The policy and value networks accurately evaluate the initial position of 5-heap nim, assigning a 97.9\\% prior probability and a 99.5\\% winning probability to the winning move e9. The move with the second highest prior probability is a1, which is assigned a 5.0\\% winning probability.}\n\\end{figure}\n\nThe graphs and analysis in the previous sections provide a high-level overview of the performance of the algorithm on nim with different numbers of heaps. In this section, we evaluate the performance of the algorithm on the initial board position, along with one of the intermediate positions, of 5-, 6- and 7-heap nim, as we did in Section \\ref{sec:revisiting_alphazero} where the statistics of the algorithm on some chess positions played by LC0 were obtained and analysed. \n\nOn the positions of 5-heap nim, the policy and value networks after training strongly favor a few moves over the remaining ones, significantly improving the effectiveness of the search. However, this overconfidence can have catastrophic, irrecoverable consequences when it is not well grounded. On a position in 6-heap nim where the policy network assigned the top 4 prior probabilities to 4 losing moves, the algorithm nevertheless converges to a winning move, since the policy is not obsessed with any particular move and is assisted by the value network, which evaluates the majority of the next positions correctly. 
On the initial position of 7-heap nim, the policy and value networks fail to provide any constructive guidance that benefits the search. \n\n\n\nThe analysis of the initial position of 5-heap nim from the results of the trained model is shown in Fig. \\ref{fig:alphazero_5piles_1}. All the values are calculated from the perspective of the player who is making the move in these positions. Each move is represented by a letter and a digit, where the letter labels the heap and the digit denotes the number of counters the move removes from that heap. For instance, the move e9 removes 9 counters from the heap labelled e. The policy network assigns a 97.9\\% prior probability to the winning move e9, and the value network accurately estimates that the position resulting from e9 is advantageous to the current player. \n\nHowever, on an intermediate position similar to the initial position, shown in \\ref{pos:5heaps_2}, the search is misled by the policy network, which is obsessed with the losing move e8: the prior probability assigned to this move, 97.4\\%, overwhelms that of every other move. This leads to the misfortune that the winning move is still rejected after 4 million simulations, even though the value network predicts the value of the position after e8 accurately. 
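The move notation just described is easy to mechanise; the helper below is a hypothetical sketch of that convention (heaps labelled a, b, c, ... from left to right), not code from our implementation:

```python
def parse_move(move):
    """Split a label such as 'e9' into (heap_index, counters_removed)."""
    return ord(move[0]) - ord('a'), int(move[1:])

def apply_move(heaps, move):
    """Return the board resulting from playing `move` on `heaps`."""
    i, take = parse_move(move)
    if not 1 <= take <= heaps[i]:
        raise ValueError(f"illegal move {move} on {heaps}")
    out = list(heaps)
    out[i] -= take
    return out
```

Playing e9 on $[1, 3, 5, 7, 9]$ leaves $[1, 3, 5, 7, 0]$, whose nim-sum $1 \oplus 3 \oplus 5 \oplus 7$ is 0; this is exactly why e9 is the unique winning move in the initial position.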
\n\n\\begin{figure}[H]\n \\centering\n \\subfloat[\\label{pos:5heaps_2}{5 heaps: [1, 3, 5, 5, 9]}]{\\includegraphics[width=0.3\\textwidth]{figures\/5_piles_nim_position.eps}}\n \\centering\n \\hspace{1.5em}\n \\subfloat[Evaluations from the policy and value network for the 4 moves with the highest prior probabilities.]{\n \\begin{tabular}{p{0.25\\textwidth}p{0.08\\textwidth}p{0.05\\textwidth}p{0.05\\textwidth}p{0.05\\textwidth}}\n \\toprule\n \\textbf{Move} & e8 & b1 & a1 & d4 \\\\ \n \\midrule\n \\textbf{Is Winning Move} & no & no & no & no \\\\ \n \\midrule\n \\textbf{Prior Probability} & 97.4\\% & 0.7\\% & 0.4\\% & 0.2\\%\\\\\n \\midrule\n \\textbf{Win Probability} & 0.14\\% & 5.6\\% & 12.5\\% & 9.8\\% \\\\\n \\midrule\n \\textbf{V-value} & -0.99 & -0.88 & -0.74 & -0.80 \\\\\n \\bottomrule\n \\end{tabular}\n \\vspace{1em}\n }\n \\vspace{1em}\n \\subfloat[\\label{tab:heaps5_2}The posterior probability of the winning move e7 given different numbers of MCTS simulations]{ \n \\begin{tabular}{cccccc}\n \\toprule\n & \\multicolumn{5}{c}{\\textbf{Number of simulations}} \\\\\n \\midrule\n \\textbf{Winning Move}& 64 & 256 & 1024 & 65536 & 4194304 \\\\ \n \\midrule\n e7 & 1.56\\% & 0.39\\% & 0.09\\% & 0.0015\\% & 0.00021\\% \\\\\n \\bottomrule\n \\end{tabular}\n }\n \n \\caption{\\label{fig:alphazero_5piles_2} The only winning move available in this position is e7. The policy network offers a completely wrong estimation: the losing move e8 is assigned a 97.4\\% prior probability. 
Although the value network predicts with high confidence that the position resulting from taking e8 is disadvantageous for the current player, the algorithm fails to find the winning move after more than 4 million simulations.}\n\\end{figure}\n\nFor the initial position of 6-heap nim, shown in \\ref{pos:6heaps_1}, the policy network strongly promotes the winning move b2, and the value network also evaluates the resulting positions correctly with high confidence.\n\n\\begin{figure}[H]\n \\centering\n \\subfloat[\\label{pos:6heaps_1}{6 heaps: [1,3,5,7,9,11]}]{\\includegraphics[width=0.3\\textwidth]{figures\/6piles_nim_position.eps}}\n \\centering\n \\hspace{1.5em}\n \\subfloat[\\label{tab:heaps6_1}Evaluations from the policy and value network for the 4 moves with the highest prior probabilities.]{\n \\begin{tabular}{lcccc}\n \\toprule\n \\textbf{Move} & b2 & a1 & b1 & d6 \\\\ \n \\midrule\n \\textbf{Winning Move} & yes & no & no & no \\\\ \n \\midrule\n \\textbf{Prior Probability} & 92.8\\% & 4.0\\% & 0.64\\% & 0.63\\% \\\\\n \\midrule\n \\textbf{Win Probability} & 78.0\\% & 14.3\\% & 4.76\\% & 2.17\\% \\\\\n \\midrule\n \\textbf{V-value} & 0.56 & -0.71 & -0.90 & -0.95 \\\\\n \\bottomrule\n \\end{tabular}\n \\vspace{1em}\n }\n \\caption{\\label{fig:alphazero_6piles_1} The policy and value networks succeed in predicting the winning move and estimating the resulting position. The probability assigned to the winning move b2 exceeds the second largest probability, assigned to move a1, by a large margin.}\n\\end{figure}\n\nThere are also positions with 6 heaps where the policy and value networks both stumble, one of which is shown in \\ref{pos:6heaps_2}. However, unlike the situation in the 5-heap nim position shown in Fig. \\ref{fig:alphazero_5piles_2}, the prior probabilities assigned to these losing moves are not high enough to completely mislead the search. 
As shown in Table \\ref{tab:heaps6_2}, the winning move f10 is identified by the MCTS after 65536 simulations, and the posterior probability of choosing it increases as more simulations are conducted. \n\n\\begin{figure}[!h]\n \\centering\n \\subfloat[\\label{pos:6heaps_2}{6 heaps: [1,3,3,5,4,10]}]{\\includegraphics[width=0.3\\textwidth]{figures\/6_piles_nim_position.eps}}\n \\hspace{1.5em}\n \\subfloat[Evaluations from the policy and value network for the 4 moves with the highest prior probabilities.]{\n \\begin{tabular}{lcccc}\n \\toprule\n \\textbf{Move} & f3 & f2 & f4 & f9 \\\\ \n \\midrule\n \\textbf{Winning Move} & no & no & no & no \\\\ \n \\midrule\n \\textbf{Prior Probability} & 47.4\\% & 12.6\\% & 10.1\\% & 7.5\\% \\\\\n \\midrule\n \\textbf{Win Probability} & 0.18\\% & 1.03\\% & 0.03\\% & 81.2\\% \\\\\n \\midrule\n \\textbf{V Value} & -0.99 & -0.97 & -0.99 & 0.62 \\\\\n \\bottomrule\n \\end{tabular}\n \\vspace{1em}\n }\n \\vspace{1em}\n \\subfloat[\\label{tab:heaps6_2}The posterior probability of the winning move f10 given different numbers of MCTS simulations]{ \n \\begin{tabular}{cccccc}\n \\toprule\n & \\multicolumn{5}{c}{\\textbf{Number of simulations}} \\\\\n \\midrule\n \\textbf{Winning Move}& 64 & 256 & 1024 & 65536 & 4194304 \\\\ \n \\midrule\n f10 & 4.68\\% & 1.17\\% & 0.29\\% & 53.7\\% & 99.1\\% \\\\\n \\bottomrule\n \\end{tabular}\n }\n \\caption{\\label{fig:alphazero_6piles_2} In this position, the only winning move is f10. However, the top 4 probabilities yielded by the policy network are all assigned to losing moves, causing the algorithm to fail to find the winning move even after more than 1000 simulations. 
Fortunately, the value network predicts with high confidence that the positions resulting from the moves f3, f2 and f4 are in favor of the opponent of the current player, making the winning move f10 stand out after 65536 simulations; the confidence is boosted further with more simulations.}\n\\end{figure}\n\nFor the initial position of 7-heap nim, shown in Fig. \\ref{pos:7heaps}, the prior probabilities from the policy network are almost equal for all the moves, winning or losing. The value network evaluates all the resulting positions as having around 50\\% winning probability. These predictions apparently contribute nothing to the search, to the extent that the winning moves are not recognized by the MCTS after more than 4 million simulations. \n\n\\begin{figure}[H]\n \\centering\n \\subfloat[\\label{pos:7heaps}{7 heaps: [1, 3, 5, 7, 9, 11, 13]}]{\\includegraphics[width=0.3\\textwidth]{figures\/7piles_nim_position.eps}}\n \\centering\n \\hspace{1.5em}\n \\subfloat[Evaluations from the policy and value network for the 4 moves with the highest prior probabilities.]{\n \\begin{tabular}{lcccc}\n \\toprule\n \\textbf{Move} & c4 & c3 & c2 & c5 \\\\ \n \\midrule\n \\textbf{Winning Move} & no & no & no & no \\\\ \n \\midrule\n \\textbf{Prior Probability} & 4.37\\% & 4.06\\% & 3.95\\% & 3.88\\% \\\\\n \\midrule\n \\textbf{Win Probability} & 50.1\\% & 49.5\\% & 48.9\\% & 50.1\\% \\\\\n \\midrule\n \\textbf{V Value} & 0.003 & -0.009 & -0.02 & 0.003 \\\\\n \\bottomrule\n \\end{tabular}\n \\vspace{1em}\n }\n \\vspace{1em}\n \\subfloat[The posterior probability of the winning moves e7, f7 and g11 given different numbers of MCTS simulations]{ \n \\begin{tabular}{cccccc}\n \\toprule\n & \\multicolumn{5}{c}{\\textbf{Number of simulations}} \\\\\n \\midrule\n \\textbf{Winning Move}& 64 & 256 & 1024 & 65536 & 4194304 \\\\\n \\midrule\n e7 & 1.56\\% & 1.17\\% & 1.26\\% & 2.79\\% & 0.38\\% \\\\\n \\midrule\n f7 & 1.56\\% & 0.17\\% & 1.66\\% & 0.89\\% & 1.15\\% \\\\\n 
\\midrule\n g11 & 1.56\\% & not visited & 1.07\\% & 1.61\\% & 0.65\\% \\\\\n \\bottomrule\n \\end{tabular}\n }\n \\caption{\\label{fig:alphazero_7piles} The 4 moves with the highest prior probabilities are losing moves, and the policy network does not particularly favor any of them. The evaluations from the value network on these positions are near 0, showing that they all have around a 50\\% winning probability.}\n\\end{figure}\n\nIn nim with 5, 6 and 7 heaps, there are positions where the policy and value networks succeed and positions where they fail. In general, the confidence in choosing the winning move decreases, and the difficulty of accurately evaluating positions increases, as the board size (and state space) grows. \n\nOn relatively small boards, the parity issues pertaining to computing the correct nim-sum do not come into play, as calculating the parity is feasible, as shown in \\ref{sec:value_network}. Learning nim on larger boards is dramatically more difficult because finding the right move and correctly evaluating large board positions is parity-related, and the huge state space forces the policy and value networks to generalize to unseen states, which, as we have argued, poses a fundamental challenge to RL algorithms.\n\n\\section{Concluding remarks and conjectures}\n\\label{sec:conclusion}\n\nThe AlphaZero paradigm can be seen as part of a larger ambition to build intelligent systems that can learn to solve complex tasks by themselves. The ambition, as articulated by Demis Hassabis, the CEO of DeepMind, is to solve intelligence and then use it to solve everything else \\cite{sadler2019game}. While this ambition is truly inspiring, the results in this paper remind us that thinking, even in strategy games, varies fundamentally in nature. General AI will need to handle different modes of thinking. \n\nFrom a human perspective, games like chess, Go and shogi somehow feel distinct from nim and impartial games. 
From a human point of view, the former games have clear criteria for good play. Progress in learning these games is typically incremental, and pattern recognition plays a central role. By contrast, nim can be mastered by humans, but through an entirely different thought process and analysis. Learning nim typically happens in non-incremental steps (as was the case for our LSTM learning experiments discussed in Section \\ref{sec:value_network}). It seems inconceivable that a human could learn to master nim on a large board without also having solved the game for any board size. Thus, when humans master nim, transfer learning and abstract generalisation play an important role. \n\nAlphaZero has been truly groundbreaking, but new ideas are needed to expand reinforcement learning to games that, like nim, seem to require high-level non-associative thinking. For humans, counting is a straightforward process. Even young children understand that the number of objects is an invariant, so recounting should (in principle) lead to the same number. In mathematics, this principle is referred to as the pigeon-hole principle; however, as we explained, such basic counting principles require a kind of insight that seems challenging for AIs to develop solely on their own. \n\nWe acknowledge that \\textit{parity is a well-known nuisance for neural networks, trivial to compute but loved by theoreticians to bang NN engineers over the head}. The motivation of our work is not to bang anyone on the head, but to understand how the AlphaZero paradigm can be expanded. The concept of self-play in model-based AI is just one way an agent can learn by \"exploring around\". However, to fully understand this, it is essential to understand the actual way self-play NN-based RL algorithms learn. This is why we looked carefully and closely at the way the AlphaZero clone LC0 played chess. 
And it is why this paper on impartial games might help navigate further research on expanding AlphaZero-style learning. \n\nParity-related problems occur naturally in a large class of combinatorial games. However, we discovered that the difficulty of learning to master these games is much more dramatic than just learning parity. The problem is robust and tenacious, and fiddling with the hyperparameters of the algorithm does not seem to have much effect. We broke the AlphaZero-style algorithm down and checked its various components separately, and we even tested the parity issues with novel architectures that were not part of the original AlphaZero algorithms. \n\nThere are many factors impacting the performance of our AlphaZero-style nim algorithm. There is an unlimited number of settings, so it is impossible to try all of them out. Proving the results rigorously seems well outside current theoretical techniques. From a philosophy-of-science perspective, one can always object that a specific pattern might fail at values not tested; if such objections were taken seriously, science would be impossible. Consider experiments suggesting some formula, e.g. $F = ma$: it is always (logically) possible that the formula would seriously fail for values not tested, and this kind of objection can be made to any experimental result. Experimental results should always be seen as part of a wider theory and understanding that is aligned with the experiments but in principle could be falsified \\cite{popper1972objective}.\n\nWe anticipated that nim would be practically unlearnable by AlphaZero-style algorithms on boards with 50+ heaps. However, to our surprise, the practical upper bound of learnability was much lower than we expected, as the algorithms experienced substantial limitations already with seven heaps. Our work also shows there is an issue when attempting to apply NNs (e.g. 
Stockfish NNUE-style NNs) to guide the search in the new algorithms \\cite{nasu2018efficiently, maharaj2021chess} for impartial games, due to the difficulty the networks have in guiding the search.\n\nAnother point we would like to stress is that the difficulty of learning nim on small boards is not even due to parity issues. The parity required to compute correct nim-sums has not kicked in on small boards, as learning parity for small values of $n$ (\\textit{e.g.} $n=7$ for 7 heaps) is, as our experiments showed, quite feasible. \nOn the board $[1,3,5,7,9,11,13]$ we established that no positive feedback loop occurs, and the policy and value networks essentially both drift around without the ability to learn anything besides memorizing some heuristics derived from MCTS. Remarkably, at least with the resources we had available, this happened despite the fact that the state space is relatively small and most states will be seen multiple times during training if all the positions are fully explored. On larger boards, where the state space exceeds any number of states that can feasibly be reached during training, the value and policy networks need to generalise to unseen positions. Failing to generalise adds additional noise to the learning, as the evaluation of some positions becomes random and uncorrelated with the correct values, preventing the positive feedback mechanism of RL from functioning properly. Added to this, on larger boards the difficulty of learning the parity function also kicks in, in an already very noisy situation. \n \nAlphaZero employs a strategy that combines evaluation with search-guided calculation. However, some aspects of impartial games seem to require mathematical thinking guided by abstract, symbolic reasoning. Some PSPACE-complete impartial games, \\textit{e.g.} node kayles and geography, can mimic NP-hard problems, which are intractable. 
Thus any algorithm that could learn any of these games to perfection would be able to break current cryptography. However, other impartial games can be solved by mathematical reasoning, so it is possible to express their optimal strategies in simple mathematical terms; \\textit{e.g.} sprouts, which has been analysed to a deep level, might eventually be solvable by AI with sufficient built-in reasoning abilities. \n\nDespite the success of AlphaZero, our work has shown that fundamentally new ideas are needed for an AlphaZero style approach to be successful for impartial games. When humans learn to master nim, they might scribble on a piece of paper, play around with small toy examples, form conjectures etc. Maybe it is possible to apply AlphaZero style reinforcement learning in an extended setting that takes auxiliary actions external to the game into account, such as applying abstract transformations, or reading and writing to an external memory. These meta-actions, analogous to the actions the algorithm takes during simulations, are not directly linked to the move it makes but significantly boost its ability to plan forward. The results in this paper indicate that new ideas will be needed to make such approaches work.\n\n\n\\section*{Acknowledgment}\n\nThis work was funded by the Chinese Scholarship Council (CSC). We appreciate the assistance of the ITS Research team at Queen Mary University of London for supporting us in using the Apocrita HPC facility. \nFinally, we would like to thank the Leela Chess Zero development team for providing detailed instructions for the use and workings of LC0. \n\n\n\\printbibliography\n\n\\end{document}\n\n\\section{Introduction}\n \n The universe's matter content is dominated by the elusive dark matter (DM), which has been one of the main topics in astronomical research. 
The idea of a dark or invisible mass was proposed numerous times based on the motions of stars in the Milky Way disk \\citep{OortJ_32a}, the motion of galaxies in the Coma cluster \\citep{ZwickyF_33a} and by the lesser known argument made by \\citet{PeeblesP_67a} using an upper limit on the mean mass density of galaxies from the average spectrum of galaxies (i.e. from the night-sky brightness). Nonetheless, the concept of DM became part of mainstream research only in the 1970s, based on the remarkable fact that the rotation curves (RC) of massive galaxies remain flat at large galactocentric distances \\citep{RubinV_70a}. It was quickly realized that these flat rotation curves at large radii could not be explained by the Newtonian gravity of the visible matter alone, but instead implied the presence of an unobserved mass component attributed to a DM halo.\n \nToday, the cold-DM (CDM) framework in which the large scale structure originates from the growth of the initial density fluctuations \\citep{PeeblesP_70a,PeeblesP_74a} is very successful in reproducing the large scale structure \\citep[e.g.][]{SpringelV_06a}. However, understanding the nature and properties of DM on galactic scales remains one of the greatest challenges of modern physics and cosmology \\citep[see ][for a review]{BullockJ_17a}. \n\nIn this context, disentangling and understanding the relative distributions of baryons and dark matter in galaxies is still best achieved from a careful analysis of galaxies' RCs on galactic scales. 
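The Newtonian argument can be made concrete: a rotation curve that stays flat at velocity $v$ implies an enclosed dynamical mass $M(<r)=v^2r/G$ that grows linearly with radius, while the visible mass converges. A minimal sketch (the velocity and radii below are illustrative round numbers, not measurements of any particular galaxy):

```python
# Enclosed dynamical mass implied by a flat rotation curve, M(<r) = v^2 r / G.
G = 4.301e-6  # Newton's constant in kpc (km/s)^2 / Msun

def enclosed_mass(v_kms, r_kpc):
    """Newtonian enclosed mass (in Msun) for circular velocity v at radius r."""
    return v_kms**2 * r_kpc / G

# A curve that stays flat at 200 km/s: the implied mass grows linearly with
# radius, while the visible (stellar + gas) mass converges -- the DM argument.
for r in (10, 20, 40):
    print(r, f"{enclosed_mass(200.0, r):.2e}")
```

For a flat 200 km/s curve this gives roughly $2\\times10^{11}$ M$_\\odot$ within 20 kpc, and twice that within 40 kpc.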
At redshift $z=0$, this type of analysis is mature with a wealth of studies published in the past 20-30 years, using a variety of dynamical tracers such as \\ion{H}{I}\\ \\citep[e.g.][]{deBlokW_97a,deBlokW_01a,VandenBoschF_00a}, \\hbox{{\\rm H}$\\alpha$}\\ in the GHASP survey \\citep{SpanoM_08a,KorsagaM_18a,KorsagaM_19b} or a combination of \\ion{H}{I}{} \\& \\hbox{{\\rm H}$\\alpha$}\\ as in the recent SPARC sample \\citep{AllaertF_17a,KatzH_17a,LiLelli_20a} and the DiskMass survey \\citep{BershadyM_10a,MartinssonT_13a}. These studies have shown that, in low surface brightness (LSB) galaxies, the DM profiles have a flat inner density 'core', contrary to the expectation from DM-only simulations that DM haloes ought to have a steep central density profile, or 'cusp' \\citep[e.g.][NFW]{NavarroJ_97a}.\nThis cusp-core debate may be resolved within CDM with feedback processes \\citep[e.g.][]{NavarroJ_96b,PontzenA_12a,TeyssierR_13a,DiCintioA_14a,LazarA_20a,FreundlichJ_20a} transforming cusps into cores~\\footnote{Recently, \\citet{PinedaJ_17a} argued that NFW profiles can be mistaken for cores when the PSF\/beam is not taken into account.}, a process that could already be present at $z=1$ \\citep{TolletE_16a}.\nDM-only simulations in the $\\Lambda$CDM context have made clear predictions for the properties of DM halos, such as their concentration and evolution \n \\citep[e.g.][]{BullockJ_01b,EkeV_01a,WechslerR_02a,DuffyM_08a,LudlowA_14a,DuttonA_14a,CorreaC_15c}, but the $c-M$ relation remains untested beyond the local universe in SFGs \\citep[e.g.][]{AllaertF_17a,KatzH_17a}. \n \n At high redshift, where 21cm observations are not yet available, measuring the DM content of galaxies requires one to measure the kinematics in the outskirts of individual star-forming galaxies (SFGs) using nebular lines (e.g. 
\\hbox{{\\rm H}$\\alpha$}), at radii up to 10-15 kpc (2-3 times the half-light radius \\hbox{$R_{\\rm e}$}) where the signal-to-noise ratio (S\/N) per spaxel drops approximately exponentially and quickly falls below unity. \nDisk-halo decompositions have proven to be possible at $z\\simeq2$ in the pioneering work of \\citet{GenzelR_17a} using very deep ($>30$ hr) near-IR integral field spectroscopy (IFS) on a small sample of six massive star-forming galaxies (SFGs). Exploring lower mass SFGs, this exercise requires a stacking approach \\citep[as in][]{LangP_17a,TileyA_19b} or deep IFS observations \\citep[as in][]{GenzelR_20a}. These studies of massive SFGs with $M_\\star>10^{11}\\hbox{M$_{\\odot}$}$ showed that RCs are declining at large radii, indicative of a low DM fraction within \\hbox{$R_{\\rm e}$}; see also \\citet{WuytsS_16a,UblerH_17a,AbrilV_21a} for dynamical estimates of DM fractions.\n \n Recently, 3D algorithms such as \\textsc{ GalPaK$^{\\rm 3D}$}\\ \\citep{BoucheN_15a} or \\textsc{$^{\\rm 3D}$Barolo} \\citep{DiTeodoroE_15a} have pushed the limits of what can be achieved at high-redshifts.\n For instance, one can study the kinematics of low mass SFGs, down to $10^8$ \\hbox{M$_{\\odot}$}\\ \\citep[as in][]{BoucheN_21a} in the regime of low S\/Ns\n or study the kinematics of SFGs at large galactic radii $\\sim3\\times \\hbox{$R_{\\rm e}$}$ as in \\citet{SharmaG_21a}, when combined with stacking techniques. 
Most relevant for this paper,\ndisk-halo decompositions of distant galaxies have been performed with \\textsc{$^{\\rm 3D}$Barolo} at $z\\simeq4$ on bright submm [CII] ALMA sources \\citep{RizzoF_20a,NeelemanM_20a,FraternaliF_21a}.\nIn addition, when used in combination with stacking or lensing, 3D algorithms are powerful tools to extract resolved kinematics at very high-redshifts as in \\citet{RizzoF_21a}.\n \n \n This paper aims to show that a disk-halo decomposition can be achieved for {\\it individual} low-mass SFGs at intermediate redshifts ($0.63$ ($>20$), respectively. This corresponds to a logarithmic difference $\\Delta \\ln \\cal Z$ of 2 and 6, respectively. \nThus, we use a minimum $\\Delta \\ln \\cal Z$ of 6 as our threshold to discriminate between models.\nTable~\\ref{table:evidence} shows the logarithmic difference of the Bayes factors, $\\Delta\\ln\\cal Z$, for the NFW DM models with respect to the fiducial DC14 models.\n\n\n\n \n \n \n \n\n\\subsection{Stellar rotation from HST photometry}\n\\label{section:mge}\n\n \n\n{In order to independently estimate the contribution of the stellar component to the RC, we parameterized the light distribution of HST\/F160W images with the MGE method \\citep{MonnetG_92a,EmsellemE_94b}\\footnote{An implementation of the method \\citep{CappellariM_02a} is available at \\url{https:\/\/www-astro.physics.ox.ac.uk\/ mxc\/software\/}}. For each galaxy we made an MGE model by considering the PSF of the HST\/F160W filter, removing the sky level and masking any companion galaxies or stars. Each MGE model consists of a set of concentric two-dimensional Gaussians defined by the peak intensity, the dispersion and the axial ratio or flattening. The Gaussian with the lowest flattening is critical as it sets the lower limit to the inclination at which the object can be projected \\citep{MonnetG_92a}. 
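The MGE parametrization described above can be sketched directly: the model surface brightness is a sum of concentric two-dimensional Gaussians, each set by its total luminosity, dispersion and flattening. A minimal sketch (the two components below are made-up values for illustration, not a fit to any of our galaxies):

```python
import numpy as np

def mge_surface_brightness(x, y, lums, sigmas, qs):
    """Sum of concentric 2D Gaussians: component j has total luminosity lums[j],
    dispersion sigmas[j] along the major (x) axis, and flattening qs[j]."""
    sb = np.zeros(np.broadcast(x, y).shape)
    for lum, sig, q in zip(lums, sigmas, qs):
        sb += lum / (2*np.pi*sig**2*q) * np.exp(-(x**2 + (y/q)**2) / (2*sig**2))
    return sb

# Two made-up components (luminosities in Lsun, dispersions in kpc):
lums, sigmas, qs = [1e10, 5e9], [1.0, 3.0], [0.9, 0.6]
x = np.linspace(-20, 20, 401)
X, Y = np.meshgrid(x, x)
dx = x[1] - x[0]
# Numerically integrating the model recovers the summed luminosities,
# since each Gaussian integrates to lums[j]:
total = mge_surface_brightness(X, Y, lums, sigmas, qs).sum() * dx**2
```

As stated above, the component with the lowest flattening $q$ is the one that limits the inclinations at which the model can be deprojected.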
Therefore, following the prescription from \\citet{ScottN_13a}, we also optimise the allowed range of axial ratios of all MGE models until the fits become unacceptable. { In practice, convergence is achieved when the mean absolute deviation of the model \nfor a given axial ratio pair increases by less than 10 per cent over the previous step.} Finally, we convert the Gaussian peak counts to surface brightness using the WFC3 zeropoints from the headers, and then to surface density (in L$_\\odot$ pc$^{-2}$) adopting 4.60 for the absolute magnitude of the Sun in the F160W band \\citep{WillmerC_18a}.\n\nWe follow the projection formulas in \\citet{MonnetG_92a} and the steps outlined in \\citet{EmsellemE_94a,EmsellemE_94b}\n to determine the gravitational potential for our MGE models \\cite[see also Appendix A of][]{CappellariM_02b}. \nThe critical parameters here are the distance, inclination, and the mass-to-light ratio of the galaxy. The distances are simply calculated from the redshifts and our assumed Planck 2015 cosmology. \n\nAs we assume that the stellar component is distributed in a disk, we use the axial ratios of galaxies measured from the HST\/F160W images to derive their inclinations. An alternative approach would be to use the inclinations returned from the \\textsc{ GalPaK$^{\\rm 3D}$} models, which leads to almost identical results.\n\nWe estimate the mass-to-light ratios of galaxies by combining the stellar masses obtained from photometric SED fits (see \\S~\\ref{section:sample}) and the total light obtained from the MGE models. Finally, we use the module {\\tt mge\\_vcirc} from the JAM code \\citep{CappellariM_08a} to calculate the circular velocity in the equatorial plane of each galaxy. \n \n\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.45\\textwidth]{figs\/figure3.png}\n\\caption{Galaxy stellar masses. 
Comparison between the stellar mass $M_{\\star}$ obtained from \\textsc{ GalPaK$^{\\rm 3D}$}\\ disk-halo fits and the SED-based $M_{\\star}$ derived from the {\\it HST} photometry. The error bars represent the 95\\% confidence intervals.\nThe $M_{\\star}$ obtained with \\textsc{ GalPaK$^{\\rm 3D}$}\\ (one of the 14 free parameters in \\S~\\ref{section:disk:halo}) and from {\\it HST} photometry are completely independent, except for ID912 (open circle). The dashed line shows the 1:1 line and this figure shows the two are in excellent agreement, except for ID919 and ID943.\n}\n\\label{fig:mdisk}\n\\end{figure}\n\n\n\\begin{figure*}\n\\centering \n\\includegraphics[width=0.9\\textwidth]{figs\/figure4.png}\n\\caption{Disk-halo decompositions for the 9\\ galaxies in our sample (ordered by increasing $M_\\star$).\nThe solid black line represents the total rotation velocity $v_\\perp(r)$. All velocities are `intrinsic', i.e. corrected for inclination and instrumental effects.\nThe dot-dashed line represents the circular velocity $v_{\\rm c}(r)$, i.e. 
$v_\\perp(r)$ corrected for asymmetric drift.\nThe gray band represents the intrinsic universal rotation curve (URC) using the parametrization of PSS96 as in \\Fig{fig:examples}.\nThe solid red (blue) line represents the stellar (gas) component $v_\\star(r)$ obtained from \\textsc{ GalPaK$^{\\rm 3D}$}\\ modeling of the MUSE \\hbox{[{\\rm O}{\\sc \\,ii}]}\\ data.\nThe dotted red line represents the stellar component obtained using an MGE decomposition of the {\\it HST}\/F160W stellar continuum images.\nThe green line represents the DM component.\nThe vertical dotted lines are as in \\Fig{fig:examples}.\n\\label{fig:diskhalo}\n}\n\\end{figure*}\n\n\n\\section{Results}\n\\label{section:results}\n\n \n\n\n\\subsection{The diversity of rotation curve shapes}\n\n\nIn Figure \\ref{fig:examples}, we show the morpho-kinematics of the galaxies used in this study.\nThe first column shows the stellar continuum from {\\it HST}\/F160W.\nThe second column shows the \\hbox{[{\\rm O}{\\sc \\,ii}]}\\ flux map obtained from the \\textsc{CAMEL}\\footnote{Available at \\url{https:\/\/gitlab.lam.fr\/bepinat\/CAMEL}} algorithm \\citep{EpinatB_12a}.\nThe third column shows the \\hbox{[{\\rm O}{\\sc \\,ii}]}\\ surface brightness profile as a function of radius $r$, in units of $\\hbox{$R_{\\rm e}$}$. \nThe fourth column shows the observed 2-dimensional velocity field $v_{\\rm 2d}$ obtained from \\textsc{CAMEL}.\nThe fifth column shows the intrinsic rotation velocity $v_{\\perp}(r)$ corrected for inclination and instrumental effects (beam smearing, see \\S~\\ref{section:methodology}), using the parametric model of PSS96 (see \\S~\\ref{section:kinematics}). 
The vertical dotted lines represent the radius at which the S\/N per spaxel reaches 0.3, and indicate the limits of our data.\nThe last column shows the residual map, obtained from computing the standard deviation in the residual cube along the wavelength direction.\n\nThis figure shows that $z=1$ RCs have diverse shapes \\citep[as in][]{TileyA_19b,GenzelR_20a}, with most RCs increasing but some declining at large radii, as in \\citet{GenzelR_17a}. The diversity, albeit for a smaller sample, is similar to that observed at $z=0$ \\citep[e.g.][]{PersicM_96a,CatinellaB_06a,MartinssonT_13b,KatzH_17a}.\n\n \n\n\n\\subsection{The disk-halo decomposition}\n\nWe now turn to our disk-halo decomposition using the method described in \\S~\\ref{section:disk:halo}.\nFor each SFG, we ran several combinations of disk-halo models, such as different halo components (DC14\/NFW), different disk components (Freeman\/MGE), with or without a bulge, and with various asymmetric drift corrections, and chose the model that best fit the data for each galaxy according to the model evidence. \n We find that the DC14 halo model is generally preferred over a NFW profile and the resulting model parameters are listed in Table~\\ref{tab:results}.\nThe evidence for the DC14 models is discussed further in \\S~\\ref{section:cores}.\n \n\n Before showing the disk-halo decompositions, we compare the disk stellar mass $M_\\star$ ($M_\\star$ being one of the 14 free parameters) obtained from the 3D fits with the SED-derived $M_\\star$.\n This comparison is performed in \\Fig{fig:mdisk} where the total $M_\\star$ (disk$+$bulge from our fits) is plotted along the $x$-axis. 
\nThis figure shows that there is relatively good agreement between the disk mass estimates from our \\textsc{ GalPaK$^{\\rm 3D}$}\\ model fits (described in \\S~\\ref{section:disk:halo}) and the SED-based ones, except for ID919 and ID943.\nHence, our 3D disk-halo decomposition yields a disk mass consistent with the SED-derived $M_\\star$, \nwhich opens up the possibility of constraining disk stellar masses from the rotation curves of distant, kinematically undisturbed galaxies.\n \n\n\n\nThe disk-halo decompositions (deprojected and `deconvolved' from instrumental effects) \nusing our 3D-modeling approach with \\textsc{ GalPaK$^{\\rm 3D}$}\\ are shown in Figure~\\ref{fig:diskhalo},\nwhere the panels are ordered by increasing $M_\\star$ as in \\Fig{fig:f160w}.\nThe disk\/DM models used are listed in \\Tab{tab:results}.\nIn each panel, the solid black line shows the total rotation velocity $v_\\perp(r)$. All velocities are `intrinsic', meaning corrected for inclination and instrumental effects, while the dot-dashed line represents the circular velocity $v_{\\rm c}(r)$, i.e. $v_\\perp(r)$ corrected for asymmetric drift.\nThe gray band represents the URC model as in \\Fig{fig:examples}.\nThe solid green, red and blue lines represent the dark-matter $v_{\\rm dm}(r)$, stellar $v_{\\star}(r)$, and gas components $v_{\\rm g}(r)$, respectively.\nThe dotted red lines represent the stellar component obtained from the {\\it HST}\/F160W images as discussed in \\S~\\ref{section:mge}.\n\nComparing the solid with the dotted red lines in \\Fig{fig:diskhalo}, one can see that there is generally good agreement between $v_{\\star}(r)$ obtained from the {\\it HST} photometry and from our disk-halo decomposition with \\textsc{ GalPaK$^{\\rm 3D}$}\\ of the MUSE data, except again for ID919 and ID943. 
This comparison shows that the disk-halo decomposition obtained from the \\hbox{[{\\rm O}{\\sc \\,ii}]}\\ line agrees with the $v_{\\star}$ from the mass profile obtained from the {\\it HST} photometry. {One should note that the stellar mass $M_\\star$ from SED fitting is not used as a prior in our \\textsc{ GalPaK$^{\\rm 3D}$}\\ fits, except for ID937 because the data for this galaxy prefers a NFW profile, which then becomes degenerate with $M_\\star$.}\nFor the interested reader, {the potential degeneracies between $M_\\star$ and $M_{\\rm vir}$ are shown in \\Fig{fig:corner}.}\n\n\n\n\n\n\n\\subsection{The stellar-to-halo mass relation} \n \n The $M_\\star-M_{\\rm vir}$ relation in $\\Lambda$CDM is a well-known scaling relation that reflects \n the efficiency of feedback. Hence, measuring this scaling relation in individual galaxies is often seen as a crucial constraint on models for galaxy formation. \n This scaling relation can be constructed from abundance matching techniques \n \\citep[e.g.][]{ValeA_04a,MosterB_10a,BehrooziP_13b,BehrooziP_19a}.\n Observationally, the $z=0$ stellar-to-halo relation has been constrained by numerous authors using a variety of techniques such as weak lensing and\/or clustering \\citep[e.g.][]{LeauthaudA_12a,MandelbaumR_16a}. \nDirect measurements of the $M_\\star-M_{\\rm vir}$ relation on individual galaxies using rotation curves have been made on various samples of dwarfs \\citep{ReadJ_17a}, spirals \\citep{AllaertF_17a,KatzH_17a,LapiA_18a,PostiL_19b,DiPaoloC_19a} and early type galaxies \\citep{PostiL_21a} among the most recent studies,\nand these have found a very significant scatter in this relation.\n \nIn \\Fig{fig:Behroozi} (left), we show the stellar-to-halo mass ratio $M_{\\star}\/M_{\\rm vir}$ as a function of $M_\\star$. 
The blue (gray) contours show the expectation for $z=1$ SFGs in the TNG100\/50 simulations\n and the solid lines represent the $M_\\star\/M_{\\rm vir}$ relation from \\citet{BehrooziP_19a}.\n\\Fig{fig:Behroozi} (left) shows that our results are qualitatively in good agreement with the Behroozi relation.\n \n\n\\citet{RomeoA_20a} argued that disk gravitational instabilities are the primary driver for galaxy scaling relations. Using a disk-averaged version of the \\citet{ToomreA_64a} $Q$ stability criterion~\\footnote{\\citet{ObreschkowD_16a} used similar arguments to derive the \\ion{H}{I}\\ mass fractions.}, \\citet{RomeoA_20a}\nfind that \n\\begin{equation}\n\\langle Q_i\\rangle=\\frac{j_i\\hat\\sigma_i}{G M_i}=A_i \n\\label{eq:jRomeo:Q}\n\\end{equation}\n where $i=\\star,\\ion{H}{I}$ or $H_2$, $\\hat\\sigma_i$ is the radially averaged velocity dispersion, and $j_i$ is the total specific angular momentum.\nFor $i=\\star$, $A_i\\approx 0.6$.\n\nConsequently, for the stellar-halo mass relation with $i=\\star$, $M_\\star\/M_{\\rm vir}$ ought to correlate with \\citep{RomeoA_20b}:\n\\begin{equation}\n\\frac{M_\\star}{M_{\\rm vir}}\\simeq \\frac{j_\\star \\hat{\\sigma}_\\star}{G M_{\\rm vir}}\n\\label{eq:jRomeo}\n\\end{equation}\nwhere $j_\\star$ is the stellar specific angular momentum and $\\hat \\sigma_\\star$ the radially averaged stellar dispersion. \nWe can estimate $j_\\star$ using the ionized gas kinematics, namely $\\log j_\\star=\\log j_{\\rm gas}-0.25$ as in \\citet{BoucheN_21a}.\nThe dispersion $\\hat\\sigma_\\star$ is not directly accessible, but we use the scaling relation with $M_\\star$ ($\\hat\\sigma_\\star\\propto M_\\star^{0.5}$) from \\citet{RomeoA_20b} which followed from the \\citet{LeroyA_08a} analysis of local galaxies. 
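The disk-averaged stability criterion above is straightforward to evaluate. A minimal numerical sketch (all input values are made up for illustration, not measurements; the $-0.25$ dex offset converting $j_{\\rm gas}$ to $j_\\star$ is the one quoted above):

```python
G = 4.301e-6  # Newton's constant in kpc (km/s)^2 / Msun

def mean_Q(j, sigma, mass):
    """<Q_i> = j_i * sigma_i / (G * M_i); j in kpc km/s, sigma in km/s, mass in Msun."""
    return j * sigma / (G * mass)

# Made-up inputs: a 1e10 Msun stellar disk with sigma_star = 50 km/s, and
# j_star derived from an assumed gas value via log j_star = log j_gas - 0.25.
j_gas = 891.0                  # kpc km/s (illustrative)
j_star = j_gas * 10**(-0.25)
Q_star = mean_Q(j_star, 50.0, 1e10)
```

For these numbers $\\langle Q_\\star\\rangle\\approx0.58$, of the order of the $A_\\star\\approx0.6$ quoted above.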
\n\\Fig{fig:Behroozi} (right) shows the resulting stellar-to-halo mass ratio using $M_{\\star}$ from SED and the $M_{\\rm vir}$ values obtained from our disk-halo decomposition, where the inset shows the sample has $\\langle Q_\\star\\rangle\\approx 0.7$, close to the expectation (\\Eq{eq:jRomeo:Q}).\n \n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.45\\textwidth]{figs\/figure5a.png}\n\\includegraphics[width=0.45\\textwidth]{figs\/figure5b.png} \n\\caption{The total stellar-to-halo fraction. {\\it Left}: The total stellar-to-halo fractions $M_\\star\/M_{\\rm vir}$ as a function of the stellar mass $M_{\\star}$ obtained from our 3D fits.\nThe error bars from our data are 95\\%\\ confidence intervals, and\n the open circles show the sample of \\citet{GenzelR_20a}.\n The shaded (blue contours) histogram shows the location of SFGs in the TNG simulations for $z=1$ centrals, while the gray contours show the satellites.\nThe colored lines show the \\citet{BehrooziP_19a} relation inferred from semi-empirical modeling at redshifts $z=0.5,1.0,1.5$, respectively. \n{\\it Right}: The total stellar-to-halo fractions $M_\\star\/M_{\\rm vir}$ as a function of $G M_{\\rm vir}\/j_\\star\\sigma_\\star$\n(\\Eq{eq:jRomeo}) for the galaxies in our sample. 
\nThe inset histogram shows that the sample has $ {j_\\star\\hat\\sigma_\\star}\/{G M_\\star}\\approx 0.7$ ($\\equiv \\langle Q_\\star\\rangle$, \\Eq{eq:jRomeo:Q}), see text.\n}\n\\label{fig:Behroozi}\n\\end{figure*}\n\n\n \n\n\\subsection{DM fractions in $z=1$ SFGs}\n\nUsing the disk-halo decomposition shown in \\Fig{fig:diskhalo}, we turn towards the DM fraction within $\\hbox{$R_{\\rm e}$}$, $f_{\\rm DM}(<\\hbox{$R_{\\rm e}$})$, by integrating the DM and disk mass profiles \nto $\\hbox{$R_{\\rm e}$}$~\\footnote{\\citet{GenzelR_20a} used the ratio of velocities $f_{\\rm DM}^v\\equiv v^2_{\\rm dm}\/v_{\\rm tot}^2$, whereas we use the mass ratio, $f_{\\rm DM}^m$ using the \\citet{UblerH_20a} notation, derived from the mass profiles.}.\n\\Fig{fig:fDM} shows that $f_{\\rm DM}(<\\hbox{$R_{\\rm e}$})$ for the galaxies in our sample is larger than $50$\\%\\ in all cases, ranging from 60\\%\\ to 90\\%.\nThe left (right) panel of \\Fig{fig:fDM} shows $f_{\\rm DM}(<\\hbox{$R_{\\rm e}$})$ as a function of $M_{\\rm vir}$ ($\\Sigma_{\\star,1\/2}$, the surface density within $\\hbox{$R_{\\rm e}$}$), respectively. \nCompared to the sample of 41 SFGs from \\citet{GenzelR_20a} (open circles), our sample extends their results to the low mass regime, with $ M_{\\star}<10^{10.5}~\\hbox{M$_{\\odot}$}$, $ M_{\\rm vir}<10^{12}~\\hbox{M$_{\\odot}$}$ and to lower mass surface densities $\\Sigma_{\\star}<10^8$~${\\rm M_{\\odot}~kpc^{-2}}$. 
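The mass-based fraction $f_{\\rm DM}^m(<r)$ can be sketched with textbook enclosed-mass profiles, here an NFW halo plus a razor-thin exponential disk (both expressions are standard; all parameter values below are made up for illustration and are not our fitted values):

```python
import numpy as np

def m_nfw(r, rho_s, r_s):
    """NFW enclosed mass: 4 pi rho_s r_s^3 [ln(1+x) - x/(1+x)], with x = r/r_s."""
    x = r / r_s
    return 4*np.pi*rho_s*r_s**3 * (np.log(1 + x) - x/(1 + x))

def m_disk(r, m_d, r_d):
    """Mass of an exponential disk (scale length r_d) inside radius r."""
    return m_d * (1 - np.exp(-r/r_d)*(1 + r/r_d))

def f_dm(r, rho_s, r_s, m_d, r_d):
    """Mass-based DM fraction f_DM^m(<r) = M_dm(<r) / [M_dm(<r) + M_disk(<r)]."""
    mdm = m_nfw(r, rho_s, r_s)
    return mdm / (mdm + m_disk(r, m_d, r_d))

# Made-up parameters: rho_s in Msun/kpc^3, radii in kpc, disk mass in Msun.
r_e = 1.68 * 3.0  # half-light radius of an exponential disk with r_d = 3 kpc
frac = f_dm(r_e, rho_s=1e7, r_s=15.0, m_d=1e10, r_d=3.0)
```

For these numbers the fraction within the half-light radius comes out near 0.76, within the 60-90% range found above.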
\n\nThe relation between $f_{\\rm DM}$ and $\\Sigma_{\\star,1\/2}$ in \\Fig{fig:fDM} is tighter and follows the expectation for $z=1$ SFGs in the TNG100\/50 simulations (blue contour) \\citep{LovellM_18a,UblerH_20a}, except at high masses.\n\\citet{GenzelR_20a} already noted that the correlation with $\\Sigma_{\\star}$ is better than with $V_{\\rm vir}$ or $M_{\\rm vir}$.\nThis anti-correlation between the baryonic surface density and the DM fraction has been noted at $z=0$ in several disk surveys \\citep[e.g.][see their Fig.23]{BovyJ_13a,CourteauS_15a}.\n\nIn \\S~\\ref{section:discussion:fDM}, we discuss the implications of this $f_{\\rm DM}-\\Sigma_\\star$ relation and its connection to other scaling relations.\n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.9\\textwidth]{figs\/figure6.png}\n\\caption{ DM fractions for our SFGs. {\\bf a)} {\\it (left)}: The DM fractions within the half-light radius $\\hbox{$R_{\\rm e}$}$, $f_{\\rm DM}(<\\hbox{$R_{\\rm e}$})$, as a function of halo mass, $M_{\\rm vir}$. The dashed line represents the downward trend of \\citet{GenzelR_20a}.\n{\\bf b)}{\\it (right)}: The DM fractions within $\\hbox{$R_{\\rm e}$}$ as a function of the stellar mass surface density $\\Sigma_{\\star,1\/2}$ within $\\hbox{$R_{\\rm e}$}$.\nIn both panels, the error bars from our data are 95\\%\\ confidence intervals, and\n the open circles show the sample of \\citet{GenzelR_20a}.\n The shaded (blue contours) histogram shows the location of SFGs in the TNG100 simulations for $z=1$ central SFGs, while the gray contours show the satellites.\n The dotted line represents the toy model derived from the TF relation (Eq.~\\ref{eq:toymodel}).\n}\n\\label{fig:fDM}\n\\end{figure*}\n\n \n\\subsection{DM halo properties. 
The $c-M$ scaling relation}\n \nHaving shown (Figs~\\ref{fig:mdisk}-\\ref{fig:diskhalo}) that the baryonic component from our 3D fits is reliable, we now turn to the DM properties of the galaxies, and in particular to the concentration-halo mass relation ($c_{\\rm vir}-M_{\\rm vir}$).\n\nThe $c-M$ relation predicted from $\\Lambda$CDM models \\citep[e.g.][]{BullockJ_01b,LudlowA_14a,DuttonA_14a,CorreaC_15c} is often tested in the local universe \\citep[e.g.][]{AllaertF_17a,KatzH_17a,LeierD_12a,LeierD_16a,LeierD_21a,WassermanA_18a}, but rarely beyond redshift $z=0$ except perhaps in massive clusters \\citep[e.g.][]{BuoteA_07a,EttoriS_10a,SerenoM_15a,AmodeoS_16a,BivianoA_17a}. \nThese generally agree with the predicted mild anti-correlation between concentration and virial mass.\n\n\n \n \\Fig{fig:cMvir}(left) shows the $c_{\\rm vir}-M_{\\rm vir}$ relation for the best 6 cases in our sample, that is\n excluding the two interacting galaxies (ID919, ID943) as well as ID15 \n because its concentration parameter remains unconstrained and degenerate with $V_{\\rm vir}$ (see \\Fig{fig:corner}b). The error bars represent $2\\sigma$ (95\\%) and are colour-coded according to the galaxy redshift.\n In \\Fig{fig:cMvir}(left), the solid lines, colour-coded with redshift, represent the $c-M$ relation from \\citet{DuttonA_14a}. 
\n\nNote that in order to fairly compare our data to such predictions from DM-only (DMO) simulations, we show, in \\Fig{fig:cMvir}, the halo concentration parameter $c_{\\rm vir}$ corrected to a DMO halo following DC14~\\footnote{See \\citet{LazarA_20a} and \\citet{FreundlichJ_20b} for variations on this conversion.}:\n \\begin{equation}\nc_{\\rm vir, \\rm DMO} = \\frac{ c_{\\rm vir,-2} }{ 1 + 0.00003 \\times \\exp[3.4 (\\log X + 4.5)]}\n\\label{eq:DMO}.\n\\end{equation}\n { Note that the correction is important only for halos with stellar-to-halo mass ratio $\\log X>-1.5$ and that most of our galaxies (7 out of 9) have $\\log X<-1.5$.}\n \n \n \\Fig{fig:cMvir}(right) shows the corresponding scaling relation for the scale radius $r_s$, namely the $r_{s}-M_{\\rm vir}$ relation. This relation in terms of $r_s$ is redshift independent. Several authors have shown, in various contexts (i.e. using pseudo-isothermal or \\citet{BurkertA_95a} profiles), that this quantity scales with galaxy mass or luminosity \\citep[e.g.][]{SalucciP_12a,KormendyJ_16a,DiPaoloC_19a}. For illustrative purposes, we show the recent $z=0$ sequence for low surface brightness (LSB) galaxies of \\citet{DiPaoloC_19a}.\n\n\n\n \\Fig{fig:cMvir} shows that 5 of the 6 SFGs tend to follow the expected scaling relations for DM, the exception being ID912. 
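The correction of \\Eq{eq:DMO} is easy to evaluate numerically; a minimal sketch of the ratio $c_{\\rm vir,DMO}/c_{\\rm vir,-2}$ as a function of the stellar-to-halo mass ratio:

```python
import math

def c_dmo_over_c(log_X):
    """Ratio c_vir,DMO / c_vir,-2 from the DC14 correction:
    1 / (1 + 3e-5 * exp[3.4 (log X + 4.5)]), with log X = log(Mstar/Mvir)."""
    return 1.0 / (1.0 + 3e-5 * math.exp(3.4 * (log_X + 4.5)))

# Scan the stellar-to-halo mass ratio from gas-rich dwarfs to massive disks:
for lx in (-3.0, -2.0, -1.5, -1.0):
    print(lx, round(c_dmo_over_c(lx), 3))
```

The ratio stays within half a per cent of unity at $\\log X=-3$ but drops below 0.6 by $\\log X=-1.5$, consistent with the statement above that the correction matters mainly at the highest $\\log X$ values.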
One should keep in mind that cosmological simulations predict a $c-M$ relation with a significant scatter \\citep[e.g.][]{CorreaC_15c}.\n To our knowledge, \\Fig{fig:cMvir} is the first test of the $c-M$ relation at $z>0$ on halos with $\\log M_{\\rm vir}\/\\hbox{M$_{\\odot}$}=11.5-12.5$, and our data appear to support the expectations from $\\Lambda$CDM.\n \nThe $c_{\\rm vir}-M_{\\rm vir}$ or $r_s-M_{\\rm vir}$ relations can be recast as an $r_s-\\rho_s$ relation (from \\Eq{eq:rho}).\n\\Fig{fig:rhos}(left) shows the $\\rho_s-r_s$ relation and confirms the well-known anti-correlation between these two quantities with a slope of $\\approx-1$ \\citep[e.g.][]{SalucciP_00a,KormendyJ_04a,MartinssonT_13b,KormendyJ_16a,SpanoM_08a,SalucciP_12a,GhariA_19a,DiPaoloC_19a,LiLelli_19a},\nwhich has been found in a wide range of galaxies (dwarf disks, LSBs, spirals).\nNote that these results are similar in nature, in spite of using different contexts and assumptions (namely $\\rho_0$ vs $ \\rho_{-2}$ or $\\rho_s$). A detailed investigation of the differences related to these assumptions is beyond the scope of this paper.\n\nAs discussed in \\citet{KormendyJ_04a}, this anti-correlation can be understood from the expected scaling relation of DM predicted by hierarchical clustering \\citep{PeeblesP_74a} under initial density fluctuations that follow the power law $|\\delta_k|^2 \\propto k^n$ \\citep{DjorgovskiS_92a}. \\citet{DjorgovskiS_92a} showed that the size $R$ and density $\\rho$ of DM halos should follow $\\rho\\propto R^{-3(3+n)\/(5+n)}$. 
For $n\\simeq-2$ on galactic scales, $\\rho\\propto R^{-1}$.\nThis anti-correlation is also naturally present in the $\\Lambda$CDM context as shown by \\citet{KravtsovA_98a} with numerical simulations.\nAs noted by many since \\citet{KormendyJ_04a}, the anti-correlation between $\\rho_s$ and $r_s$ implies a constant DM surface density $\\Sigma_s\\equiv\\rho_s\\,r_s$ \\citep[e.g.][]{DonatoF_09a,SalucciP_12a,BurkertA_15a,KormendyJ_16a,KarukesE_17a,DiPaoloC_19a}.\n\\Fig{fig:rhos}(Right) shows the resulting DM surface density $\\Sigma_s$ as a function of galaxy mass $M_d$. \nThe grey band represents the range of surface densities from \\citet{BurkertA_15a} for dwarfs, while the dashed line represents the range of densities\nfrom \\citet{DonatoF_09a,SalucciP_12a} for disks. \\citet{KormendyJ_04a} had found a value of $\\sim100$ \\hbox{M$_{\\odot}$}~pc$^{-2}$.\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.45\\textwidth]{figs\/figure7a.png}\n\\includegraphics[width=0.45\\textwidth]{figs\/figure7b.png}\n\\caption{The size of DM cores. {\\it Left}: The halo concentration-halo mass relation. The concentrations $c_{\\rm vir}$ for $z\\simeq1$ SFGs, derived from our 3D modeling of the \\hbox{[{\\rm O}{\\sc \\,ii}]}\\ rotation curves, \nare converted to a DM-only NFW equivalent $c_{\\rm vir, DMO}$ (see text).\n{\\it Right}: The DM core size $r_{s,\\rm DMO}\\equiv R_{\\rm vir}\/c_{\\rm vir, DMO}$ in kpc as a function of halo mass. 
The dotted line represents the observed core-mass scaling relation for $z=0$ LSBs from \\citet{DiPaoloC_19a} (see text).\nIn both panels, the solid lines represent the $c_{\\rm vir}-M_{\\rm vir}$ relation predicted by \\citet{DuttonA_14a} for DM halos, colour-coded by redshift.\nThe error bars are 95\\%\\ confidence intervals ($2\\sigma$) and are also colour-coded by the galaxy redshift.\n\\label{fig:cMvir}\n}\n\\end{figure*}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.9\\textwidth]{figs\/figure8.png}\n\\caption{The halo scale radius-density relation at $z=1$. {\\it Left}: The $\\rho_s-r_s$ scaling relation for the galaxies shown in \\Fig{fig:cMvir}.\nThe error bars are 95\\%\\ confidence intervals ($2\\sigma$).\nFor comparison, the anti-correlations of \\citet{KormendyJ_04a,SpanoM_08a} and \\citet{DiPaoloC_19a} are shown.\n {\\it Right}: The DM surface density ($\\Sigma_s\\equiv\\rho_s\\,r_s$) as a function of galaxy mass. The anti-correlation in the left panel implies a constant DM surface density. The grey band represents the range of surface densities from \\citet{BurkertA_15a} for dwarfs. The constant densities of \\citet{KormendyJ_04a} and \\citet{DonatoF_09a} are shown as the dotted and dot-dashed lines, respectively.\n\\label{fig:rhos}\n}\n\\end{figure*}\n\n\n\n\n\\begin{table}\n\\centering\n\\small\n\\caption{Bayesian evidence for the \\textsc{ GalPaK$^{\\rm 3D}$}\\ fits. \\label{table:evidence}\n(1) Galaxy ID;\n(2) Surface brightness profile; (3) Kinematic model (DM\/Baryon); (4) External prior used; (5) Evidence $\\ln Z$ on the deviance scale; (6) Bayes factor between `NFW' and the `DC14' models (see \\S~\\ref{section:disk:halo}). 
\n}\n\\begin{tabular}{rrrrrrr}\nID & $I(r)$ & $v(r)$ & Prior & $\\ln \\cal Z$ & $\\Delta\\ln\\cal Z$ \\\\\n(1) & (2) & (3) & (4) & (5) & (6)\\\\\n\\hline \n3 & {S{\\'e}rsic}\\ & DC14.MGE & \t\t& 17317 &0\\\\\n3 & {S{\\'e}rsic}\\ & NFW.MGE & $M_{\\star,\\rm SED}$ & 17312& -5 \\\\\n15& {S{\\'e}rsic} & DC14.MGE & \t\t& 8019& 0\\\\\n15& {S{\\'e}rsic}& NFW.MGE & $M_{\\star,\\rm SED}$ & 8023& +4\\\\\n37&{S{\\'e}rsic} & DC14.MGE & \t\t& 9514 & 0 \\\\\n37& {S{\\'e}rsic}& NFW.MGE & $M_{\\star,\\rm SED}$ \t& 9651& +137\\\\\n912&{S{\\'e}rsic}& DC14.MGE & \t$i_{\\star}$\t& 8829 & 0 \\\\\n912&{S{\\'e}rsic}& NFW.MGE & $i_{\\star}$, $M_{\\star,\\rm SED}$\t\t& 8931 & +102 \\\\\n919&{S{\\'e}rsic}+B& DC14.Freeman & $i_{\\star}$ & 27552 & 0 \\\\\n919&{S{\\'e}rsic}+B& NFW.Freeman & $i_{\\star}$, $M_{\\star,\\rm SED}$ & 27915 & +363\\\\\n937&{S{\\'e}rsic} & DC14.MGE & & 8632 & 0 \\\\\n937&{S{\\'e}rsic} & NFW.MGE & $M_{\\star,\\rm SED}$ & 8625 & -7 \\\\\n982&{S{\\'e}rsic} & DC14.MGE & & 6736 & 0 \\\\\n982&{S{\\'e}rsic} & NFW.MGE & $M_{\\star,\\rm SED}$ & 7040 & +304\\\\\n943&{S{\\'e}rsic} & DC14.MGE & & 15374& 0\\\\\n943&{S{\\'e}rsic} & NFW.MGE & $M_{\\star,\\rm SED}$ & 15372 & -2\\\\\n1002&{S{\\'e}rsic}& DC14.Freeman & & 8151 & 0\\\\\n1002&{S{\\'e}rsic}& NFW.Freeman & $M_{\\star,\\rm SED}$ & 8155 & +4\\\\\n\\end{tabular}\n\\end{table}\n\n\n\n\n\\subsection{DM halos properties. Core or cuspy profiles?}\n\\label{section:cores}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.8\\textwidth]{figs\/figure9.png}\n\\caption{ DM density profiles in M$_\\odot\/$kpc$^3$. 
Each panel shows $\\rho_{\\rm dm}(r)$ as a function of $r\/\\hbox{$R_{\\rm e}$}$ obtained from our disk-halo decompositions (\\Fig{fig:diskhalo}).\nThe stellar-to-halo-mass ratio ($\\log X\\equiv \\log M_\\star\/M_{\\rm vir}$) is indicated.\nThe gray bands represent the 95\\%\\ confidence interval and the dotted lines represent NFW profiles.\nThe vertical dotted lines represent the 1~kpc physical scale, corresponding to $\\approx1$ MUSE spaxel, and indicate the lower limit of our constraints. \n\\label{fig:DMprofiles}\n}\n\\end{figure*} \n\nWe now investigate the shape of DM profiles, and in particular the inner logarithmic slope $\\gamma$ ($\\rho_{\\rm dm}\\propto r^{-\\gamma}$), in order to find evidence for or against core profiles.\nThere is a long history of performing this type of analysis in local dwarfs \\citep[e.g.][]{KravtsovA_98a,deBlokW_01a,GoerdtT_06a,OhSH_11a,OhSH_15a,ReadJ_16b,KarukesE_17a,ReadJ_18a,ReadJ_19a, ZoutendijkB_21a}, \nin spiral galaxies \\citep[e.g.][]{GentileG_04a,SpanoM_08a,DonatoF_09a,MartinssonT_13a,AllaertF_17a,KatzH_17a,KorsagaM_18a,DiPaoloC_19a},\nor in massive early-type galaxies, often aided by gravitational lensing \n\\citep[e.g.][]{SuyuS_10a,NewmanA_13a,SonnenfeldA_12a,SonnenfieldA_13a,SonnenfeldA_15a,OldhamL_18a,WassermanA_18a}, but the core\/cusp nature of DM is rarely \ninvestigated in SFGs outside the local universe \\citep[except in][]{GenzelR_20a,RizzoF_21a}\nbecause this is a challenging task. However, owing to the high DM fractions in our sample (see \\Fig{fig:fDM}), the shape of the rotation curves is primarily driven by the DM profile.\n\n \n The DM profiles $\\rho_{\\rm dm}(r)$ as a function of $r\/\\hbox{$R_{\\rm e}$}$ obtained from our 3D fits with the DC14 model are shown in \\Fig{fig:DMprofiles}. This figure shows that the NFW profile is not compatible with the majority of the SFGs. 
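To make the profile shapes being compared explicit, it may help to write out the generalized double power-law (Hernquist--Zhao) density profile underlying the DC14 parametrisation; this is the standard form, sketched here for reference with $\rho_s$ and $r_s$ denoting the usual scale density and scale radius:

```latex
% Generalized (alpha, beta, gamma) double power-law profile (Zhao 1996):
% gamma sets the inner slope, beta the outer slope, and alpha the
% sharpness of the transition at the scale radius r_s.
\begin{equation}
  \rho_{\rm dm}(r) \;=\;
  \frac{\rho_s}
       {\left(\dfrac{r}{r_s}\right)^{\gamma}
        \left[1+\left(\dfrac{r}{r_s}\right)^{\alpha}\right]^{(\beta-\gamma)/\alpha}} ,
\end{equation}
```

so that $\rho_{\rm dm}\propto r^{-\gamma}$ as $r\to0$; the NFW profile corresponds to $(\alpha,\beta,\gamma)=(1,3,1)$, while a cored profile has $\gamma\approx0$.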
\n\\Fig{fig:DMprofiles} shows that at least three galaxies (IDs 37, 912, 982) show strong departures from an NFW profile; in particular, they show evidence for cored DM profiles.\nFor these three galaxies, the logarithmic differences of the Bayes factors for the NFW profiles are $>100$ (see Table~\\ref{table:evidence}), indicating very strong evidence against cuspy NFW profiles.\nOur results are in good agreement with the RC41 sample of \\citet{GenzelR_20a}, where about half of their sample showed a preference for cored profiles (their Fig.~10).\n\nWe discuss the implications of these results in \\S~\\ref{section:discussion:cores}. In a subsequent paper, we will analyze additional DM profiles for CDM \\citep[e.g.][]{EinastoJ_65a,BurkertA_95a,DekelA_17a,FreundlichJ_20b}, as well as alternative DM models such as `fuzzy' axion-like DM \\citep{WeinbergS_78a,BurkertA_20a} and SIDM \\citep{SpergelD_00a,VogelsbergerM_13b}.\n\n\n\n\n\\section{Discussion}\n\\label{section:discussions}\n\n\\subsection{DM fractions in $z=1$ SFGs}\n\\label{section:discussion:fDM}\n\nWe return to the $f_{\\rm DM}-\\Sigma_\\star$ relation in \\Fig{fig:fDM} and its implications.\n The tight $f_{\\rm DM}-\\Sigma_\\star$ relation can be thought of as a consequence of the tight \\citet{TullyB_77a} relation (TFR) for disks as follows \\citep[see also][]{UblerH_17a}.\n Indeed, if we approximate the DM fraction within \\hbox{$R_{\\rm e}$}\\ as $f_{\\rm DM}\\approx V_{\\rm DM}^2(\\hbox{$R_{\\rm e}$})\/V_{\\rm tot}^2(\\hbox{$R_{\\rm e}$})$ \\citep{GenzelR_20a}, one has $f_{\\rm DM}=(V_{\\rm tot}^2-V^2_{\\rm max,\\star}-V^2_{\\rm gas})\/V_{\\rm tot}^2$. 
Thus,\n \\begin{eqnarray}\n 1-f_{\\rm DM}(\\hbox{$R_{\\rm e}$})&=& \\frac{V^2_{\\rm max,\\star}}{V_{\\rm tot}^2} (1+\\mu_g) \\propto \\frac{G M_{\\star}}{R_{\\star}\\,M_{\\star}^{0.5}} (1+\\mu_g)\\nonumber\\\\\n &\\approx& \\frac{M_{\\star}^{0.5}}{R_{\\star}} (1+\\mu_g)\\propto \\Sigma_{\\star}^{0.5} (1+\\mu_g),\n \\label{eq:toymodel}\n \\end{eqnarray}\n where we used the stellar TFR, $M_{\\star}\\propto V_{\\rm tot}^4$ \\citep[e.g.][]{McGaughS_05a}, the definition of the gas-to-stellar mass ratio $\\mu_g\\equiv M_{\\rm gas}\/M_\\star$, \n and the maximum stellar rotation velocity for disks $V_{\\rm max,\\star}^2\\propto G\\,M_{\\star}\/R_{\\rm e,\\star}$.\n Eq.~\\ref{eq:toymodel} shows the intimate link between the $f_{\\rm DM}-\\Sigma_\\star$ diagram and the TFR.\n\nMore specifically, the TFR has $M_\\star=a\\,V_{\\rm tot,2.2}^n$ with $n\\simeq4$, $a\\simeq10^{10}~\\hbox{M$_{\\odot}$}$ \\citep{McGaughS_05a,MeyerM_08a,CresciG_09a,PelliciaD_16a,TileyA_16a,UblerH_17a,AbrilV_21a}, where $V_{\\rm tot,2.2}\\equiv V_{\\rm tot}\/10^{2.2}$~km\/s.\nGiven that $V_{\\rm max,\\star}^2\\equiv 0.38 \\frac{G M_\\star}{R_{\\rm d}}$ for a \\citet{FreemanK_70a} disk, $V_{\\rm max,\\star}^2\/V_{\\rm tot}^2$ becomes\n\\begin{eqnarray}\n\\frac{V^2_{\\rm max,\\star}}{V_{\\rm tot}^2}&=&0.38 \\times1.68 a \\frac{G M_\\star}{a\\,R_{\\star} }\/\\left(\\left(\\frac{M_{\\star}}{a}\\right)^{1\/n} 10^{2.2}\\right)^2 \\nonumber\\\\\n&\\approx&0.63\\, \\sqrt{\\pi}\\, \\left(\\frac{M_{\\star,a}^{2(n-2)\/n}}{\\pi R^2_{\\star}}\\right)^{0.5}\\, G a 10^{-4.4} \\hbox{\\hbox{M$_{\\odot}$}~km$^{-2}$~s$^{-2}$} \\nonumber\\\\\n&\\approx& 1.1 \\left(\\frac{M_{\\star,a}^{0.94}}{\\pi R^2_{\\star}}\\right)^{0.5}\\times \\left( \\frac{a}{10^{10}}\\right) 1.77 \\hbox{kpc}\n\\end{eqnarray}\nusing $\\hbox{$R_{\\rm e}$}=1.68 R_{\\rm d}$, where $M_{\\star,a}\\equiv M_\\star\/a$. 
For a $z\\approx1$ TFR with $n=3.8$ and $a=10^{9.8}$\\hbox{M$_{\\odot}$}\\ \\citep[e.g.][]{UblerH_17a}, \\Eq{eq:toymodel} results in $1-f_{\\rm DM}=\\Sigma_{\\star,9.8}^{0.5}(1+\\mu_g)$, which is shown in \\Fig{fig:fDM} (right) as the dotted line with $\\mu_g=0.5$ \\citep[e.g.][]{TacconiL_18a,FreundlichJ_19a}.\n\nThis exercise shows that the $f_{\\rm DM}-\\Sigma_\\star$ relation is another manifestation of the TFR, as argued in \\citet{UblerH_17a}.\n\n\\subsection{Core\/cusp formation}\n\\label{section:discussion:cores}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.32\\textwidth]{figs\/figure10a.png} \n\\includegraphics[width=0.32\\textwidth]{figs\/figure10b.png} \n\\includegraphics[width=0.32\\textwidth]{figs\/figure10c.png} \n\\caption{Relation between SFR and cores. {\\it Left}: The $\\alpha,\\beta,\\gamma$ parameters as a function of $\\log M_\\star\/M_{\\rm vir}$. \nThe curves show the parametrisation of DC14 for $\\alpha,\\beta,\\gamma$ and the solid symbols represent our SFGs, excluding IDs 919 and 943. \n{\\it Middle}: The DM inner slope $\\gamma$ as a function of the SFR surface density $\\Sigma_{\\rm SFR}$, scaled to $z=1.5$.\n{\\it Right}: The DM inner slope $\\gamma$ as a function of the logarithmic offset from the MS, $\\delta($MS), using the \\citet{BoogaardL_18a} MS.\nDM cores are present in galaxies with higher SFRs and SFR surface densities.\n}\\label{fig:slope:best}\n\\end{figure*}\n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.9\\textwidth]{figs\/figure11.png} \n\\caption{{\\it Left}: The DM density at 150~pc as a function of $\\log M_\\star\/M_{\\rm vir}$. \nThe blue (red) solid circles with error bars (2$\\sigma$) show our SFGs, except IDs 919 and 943.\n{\\it Right}: The DM inner slope $\\gamma$ parameter at 150~pc as a function of $\\log M_\\star\/M_{\\rm vir}$. 
\nThe blue (red) squares represent the $z\\approx0$ dwarfs from \\citet{ReadJ_19a} whose SFR was truncated less (more) than 6 Gyr ago.\nThe blue (red) solid circles with error bars (2$\\sigma$) show our SFGs with high (low) $\\Sigma_{\\rm SFR}$.\n}\\label{fig:slope:Read}\n\\end{figure*}\n\nOur results in \\S~\\ref{section:cores} (\\Fig{fig:DMprofiles}) indicate a strong preference for cored DM profiles for four SFGs in our sample. Several mechanisms have been invoked to explain the presence of cored DM profiles, such as Warm Dark Matter \\citep[WDM,][]{BodeP_01a}, whose free streaming can suppress small-scale fluctuations, axion-like `fuzzy' DM \\citep{WeinbergS_78a,HuW_00a,BurkertA_20a}, baryon-DM interactions \\citep{FamaeyB_18a}, self-interacting dark matter \\citep[SIDM,][]{SpergelD_00a,BurkertA_00a,VogelsbergerM_13a}, or dynamical friction \\citep{ReadJ_06a,GoerdtT_10a,OrkneyM_21a} from infalling satellites\/minor mergers. \n\nWithin the context of CDM, it has long been recognized \\citep[see review in][]{BullockJ_17a}, since the original cusp\/core problem first observed in dwarfs or low-surface-brightness galaxies \\citep[e.g.][]{deBlokW_97a,deBlokW_01a,KravtsovA_98a}, that (rapid) changes in the gravitational potential \ndue to star-formation-driven outflows can inject energy into the DM, resulting in a flattened DM profile \\citep{NavarroJ_96b,ReadJ_05b,PontzenA_12a,TeyssierR_13a,\nDiCintioA_14a,DuttonA_16a, DuttonA_20a, ChanTK_15a,ElZantA_16a,LazarA_20a,FreundlichJ_20a}.\nSimilarly, DM cores\/cusps can also be linked to active galactic nuclei (AGN) activity \\citep{PeiraniS_17a,DekelA_21a} in more massive galaxies with $M_{\\rm vir}>10^{12}$\\hbox{M$_{\\odot}$}. While most of these analyses focus on cores at $z=0$, \\citet{TolletE_16a} showed that cores can form in a similar fashion as early as $z=1$.\n\n\nObservationally, cores are now found up to $z\\simeq2$ \\citep{GenzelR_20a}, but\n the relation between outflows\/star-formation and core 
formation has not been established,\nas observations have unveiled cores in galaxies spanning a range of halo or stellar masses \\citep[e.g.][and references therein]{WassermanA_18a}\n or cusps when cores would be expected \\citep[e.g.][]{ShiY_21a}.\nAt high redshift, \\citet{GenzelR_20a} found that cores are preferentially associated with low DM fractions.\n\nIn order to investigate the potential relation between SFR-induced feedback and DM cores, we show in \\Fig{fig:slope:best} the DM inner slope $\\gamma$ as a function of SFR surface density $\\Sigma_{\\rm SFR}$ (middle) and as a function of the offset from the main sequence (MS) for SFGs \\citep[using][]{BoogaardL_18a} (right). This figure indicates that SFGs above the MS or with high SFR surface densities are preferentially found to have cores. SFGs below the MS with decaying SFR (like ID15) have low SFR surface densities owing to the low SFR, and show cuspy DM profiles, indicating that cusps reform when galaxies stop being active.\n\nWhile the majority of research has focused on the formation of DM cores in order to match observations at $z=0$, \nDM cusps can reform from the accretion of DM substructures\n\\citep{LaporteC_15a}, as first argued in \\citet{DekelA_03c}, or as a result of late mergers, as argued in \\citet{OrkneyM_21a} for dwarfs.\n\nIn \\Fig{fig:slope:Read}, we compare our results to those of \\citet{ReadJ_19a}, who found that dwarfs fall into two categories, where the core\/cusp presence is related to the star-formation activity. \\citet{ReadJ_19a} found that dwarfs whose star formation stopped over 6 Gyr ago preferentially show cusps (open red squares), while dwarfs with extended star formation show shallow DM cores (open blue squares). In this figure, the filled red (blue) circles represent our galaxies with $\\Sigma_{\\rm SFR}$ smaller (larger) than $\\log\\Sigma_{\\rm SFR}\/\\hbox{M$_{\\odot}$}$~kpc$^{-2}=-0.7$. 
Our results in \\Fig{fig:slope:Read}, together with those of \\citet{ReadJ_19a}, provide indirect evidence for SFR-induced core formation within the CDM scenario, where DM can be kinematically heated by SFR-related feedback processes.\n\n\\section{Conclusions}\n\\label{section:conclusions}\n\nUsing a sample of 9\\ \\hbox{[{\\rm O}{\\sc \\,ii}]}\\ emitters with the highest S\/Ns in the deep (140hr) MXDF \\citep{BaconR_21a} dataset,\nwe measure the shapes of individual RCs of $z\\approx 1$ SFGs out to $3\\times\\hbox{$R_{\\rm e}$}$,\n with stellar masses ranging from $10^{8.5}$ to $10^{10.5}$ \\hbox{M$_{\\odot}$}, covering a range of stellar masses complementary to the analysis of \\citet{GenzelR_20a}, whose sample has $M_\\star>10^{10}$~\\hbox{M$_{\\odot}$}.\n\n\nWe then performed a disk-halo decomposition on the \\hbox{[{\\rm O}{\\sc \\,ii}]}\\ emission lines using a 3D modeling approach that includes stellar, dark-matter, gas (and bulge) components (\\Fig{fig:diskhalo}).\nThe dark-matter profile is a generalized Hernquist--\\citet{ZhaoH_96a} profile using the feedback prescription of \\citet{DiCintioA_14a}, which links\n the DM profile shape to the baryonic content. \n\nOur results are as follows. 
We find that \n\n$\\bullet$ the 3D approach allows us to constrain RCs out to 3\\hbox{$R_{\\rm e}$}\\ in individual SFGs, revealing a diversity of shapes (\\Fig{fig:examples}), with mostly rising and some declining outer profiles;\n\n$\\bullet$ the disk stellar mass $M_\\star$ from the \\hbox{[{\\rm O}{\\sc \\,ii}]}\\ rotation curves is consistent with the SED-derived $M_\\star$ (\\Fig{fig:mdisk}), except for two SFGs (IDs 919, 943) whose kinematics are strongly perturbed by a nearby companion ($<2$\\arcsec);\n\n$\\bullet$ the stellar-to-DM ratio $M_\\star\/M_{\\rm vir}$ follows the relation inferred from abundance matching \\citep[e.g.][]{BehrooziP_19a}, albeit with some scatter (\\Fig{fig:Behroozi});\n\n\n$\\bullet$ the DM fractions $f_{\\rm DM}(<\\hbox{$R_{\\rm e}$})$ are high (60--90\\%) for our 9\\ SFGs (\\Fig{fig:fDM}), which have stellar masses \n(from $10^{8.5}$\\hbox{M$_{\\odot}$}\\ to $10^{10.5}$\\hbox{M$_{\\odot}$}) and low surface densities ($\\Sigma_\\star<10^8$ \\hbox{M$_{\\odot}$}~kpc$^{-2}$). 
These DM fractions complement the low fractions of the sample of \\citet{GenzelR_20a},\nand globally, the $f_{\\rm DM}(<\\hbox{$R_{\\rm e}$})-\\Sigma_\\star$ relation is similar to the $z=0$ relation \\citep[e.g.][]{CourteauS_15a}, and follows from the TFR;\n\n$\\bullet$ the fitted concentrations are consistent with the $c_{\\rm vir}-M_{\\rm vir}$ scaling relation predicted by DM-only simulations (\\Fig{fig:cMvir});\n\n$\\bullet$ the DM profiles show constant surface densities at $\\sim100$ M$_\\odot\/$pc$^2$ (\\Fig{fig:rhos});\n\n$\\bullet$ similarly to the $z>1$ samples of \\citet{GenzelR_20a}, the disk-halo decomposition of our $z\\approx1$ SFGs shows cored DM profiles for about half of the isolated galaxies (\\Fig{fig:DMprofiles}-\\Fig{fig:slope:best}), in agreement with $z=0$ studies \\citep[e.g.][]{AllaertF_17a,KatzH_17a};\n\n$\\bullet$ DM cores are present in galaxies with high SFRs (above the MS) or high SFR surface densities (\\Fig{fig:slope:best}b-c), possibly supporting the scenario of SN feedback-induced core formation. Galaxies below the MS or with low SFR surface densities have cuspy DM profiles (\\Fig{fig:slope:Read}), suggesting that cusps can reform when galaxies become passive \\citep[e.g.][]{LaporteC_15a,ChanTK_15a,OrkneyM_21a}.\n \nOverall, our results demonstrate the power of performing disk-halo decompositions in 3D on deep IFU data. With larger samples, it should be possible to confirm this type of\nrelation between cores and star-formation histories, and to further test SN feedback-induced core formation within the $\\Lambda$CDM framework.\n \n\n\\begin{acknowledgements}\nWe are grateful to the anonymous referee for useful comments and suggestions. \nWe thank S. Genel, J. Fensch, J. Freundlich and B. 
Famaey for inspiring discussions.\n This work made use of the following open source\nsoftware: \\textsc{ GalPaK$^{\\rm 3D}$}\\ \\citep{BoucheN_15b},\n \\textsc{matplotlib} \\citep{matplotlib}, \n\\textsc{NumPy} \\citep{numpy}, \n\\textsc{SciPy} \\citep{scipy}, \n\\textsc{Colossus} \\citep{DiemerB_15b},\n\\textsc{Astropy} \\citep{astropy2018}.\n\nThis study is based on observations collected at the European Southern\nObservatory under ESO programme 1101.A-0127.\nWe thank the TNG collaboration for making their data available at \\url{http:\/\/www.tng-project.org}.\nThis work has been carried out thanks to the support of the ANR 3DGasFlows (ANR-17-CE31-0017), the OCEVU Labex (ANR-11-LABX-0060). BE acknowledges financial support from\nthe Programme National Cosmology et Galaxies (PNCG) of CNRS\/INSU with INP and IN2P3, co-funded by CEA and CNES. R.B. acknowledges support\nfrom the ERC advanced grant 339659-MUSICOS.\nSLZ acknowledges support by The Netherlands Organisation for Scientific Research~(NWO) through a TOP Grant Module~1 under project number 614.001.652.\nJB acknowledges support by Funda\u00e7\u00e3o para a Ci\u00eancia e a Tecnologia (FCT) through research grants UIDB\/04434\/2020 and UIDP\/04434\/2020 and work contract `2020.03379.CEECIND.`\n\n\\end{acknowledgements}\n\n \\bibliographystyle{aa}\n ","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe emergence of Large Language Models such as GPT-3~\\cite{Brown:GPT3, metz:GPT-3}, transformer models~\\cite{Vaswani:Transformer} that are trained without supervision on massive text datasets has resulted in systems with remarkable text generation capabilities. 
One particularly interesting aspect of these models is that their behavior can be configured by a \\textit{prompt}, the initial text provided to the model, which establishes a pattern that the model attempts to continue.\n \n General-purpose large language models can be fine-tuned on specific corpora to provide expertise in a particular domain. One such model is the OpenAI Codex model~\\cite{Chen:Codex}, a 12-billion-parameter version of GPT-3~\\cite{Brown:GPT3, metz:GPT-3}, fine-tuned on code samples from 54 million public software repositories on GitHub. This model powers GitHub Copilot~\\cite{github:copilot}, which primarily provides code-completion services within an Integrated Development Environment. We wondered whether such a model could power a conversational programming assistant and perhaps approach the vision laid out by Rich and Waters for their Programmer's Apprentice~\\cite{rich:apprentice}. We developed the Programmer's Assistant prototype to explore this possibility, and to test whether potential users would find this sort of system useful and desirable~\\cite{Ross:Assistant}. In this paper we review the steps taken to engineer the prompt that allowed the Programmer's Assistant to use the Codex model to power an interactive conversational assistant, and how we evolved the prompt to establish the desired persona and behavior.\n\n\\section{Related Work}\n\\citeauthor{Brown:GPT3} showed how GPT-3 \\cite{Brown:GPT3, metz:GPT-3} could accomplish \\emph{few-shot learning}, using a prompt as a means of configuring their large language model to perform a particular task. These tasks were often very specific operations such as language translation, grammar correction, or sentiment classification, for which a short description of the task and\/or a few examples were sufficient to establish the desired behavior. 
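As an illustration of this pattern (a sketch with invented example strings, not the prompts used by \citeauthor{Brown:GPT3}), a few-shot prompt can be assembled by concatenating a task description, a few worked input--output examples, and the new query, ending with a cue for the model to complete:

```python
# Sketch of few-shot prompt construction: the description plus examples
# establish a pattern the model attempts to continue. All strings here
# are invented for illustration.

def build_few_shot_prompt(description, examples, query):
    """Concatenate a task description, worked examples, and the new query."""
    lines = [description, ""]
    for text, label in examples:
        lines.append(f"Text: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    # End with the query and a bare cue so the model generates only the label.
    lines.append(f"Text: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each text as Positive or Negative.",
    [("I loved this movie!", "Positive"),
     ("The service was terrible.", "Negative")],
    "What a wonderful day.",
)
print(prompt)
```

Sent to a completion model, such a prompt makes the most likely continuation a single sentiment label in the established format.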
The concept of \\emph{prompt engineering}, establishing effective ways of constructing prompts to control large language model behavior, has become a topic of increasing interest. \\citeauthor{greyling:engineering}, for example, recommends organizing a prompt in three sections that establish context, provide data, and instruct the system on how to proceed~\\cite{greyling:engineering}.\n\\citeauthor{reynolds:prompt} argue that few-shot examples really locate an already-learned task rather than teach a new one, and as a result recommend alternative approaches to prompt construction~\\cite{reynolds:prompt}. Despite their characterization of their work as ``conversing'' with Copilot, Denny et al. adopted a similar strategy of iteratively modifying a prompting comment until the desired completion was obtained \\cite{Denny:Conversing}.\n\nRecently, several language models, such as Blenderbot \\cite{Shuster:blenderbot}, Lamda \\cite{thoppilan:lamda}, and ChatGPT \\cite{web:chatgpt}, have been introduced that are specifically tuned for dialog applications, but conversational interaction can also be achieved via prompt engineering with general-purpose large language models.\n\\citeauthor{valvoda:conversation} found that fine-tuning a large language model for dialog resulted in duller and more repetitive output, while generating dynamic prompts resulted in more novel and diverse responses \\cite{valvoda:conversation}.\n\n\n\nTo develop the Programmer's Assistant, we used the code-fluent Codex model~\\cite{Chen:Codex} and developed a prompt that supported conversational access to its accumulated programming knowledge and coding skills.\n\n\n\\section{Eliciting Conversation from a Transformer Model}\nA text-based transformer model~\\cite{Vaswani:Transformer} is trained in a self-supervised manner on vast amounts of text data, and is capable of generating likely continuations of text that is presented to it. 
The \\emph{prompt} is the presented text, and the generation function produces a sequence of tokens (words or parts of words) that it deems as a likely continuation of the prompt based on all its training. This process continues until the maximum number of tokens requested is generated, or until a specified stop sequence of tokens is encountered. The prompt establishes a pattern that the model attempts to continue.\n\nTo generate conversation in the Programmer's Assistant prototype, we establish a script-like pattern in the prompt in which two characters, the user and the assistant, are participating in a dialog. Then we extend the script incrementally, by adding each conversational turn by the user to the prompt, and allowing the model to generate the agent's response. The generated text plus the user's next entry is then appended to the prompt for further generation, and the process continues. Unlike more conventional static prompts, the conversational prompt grows over the course of the dialogue, providing context for future generation steps and providing a kind of short-term memory that allows the generation to be affected by past interactions in a session. Ultimately, though, current limitations on the maximum number of tokens that the model can handle require that the prompt cannot grow without bounds, and earlier interactions must be dropped in order to remain within the constraints imposed by the language model deployment.\n\n\n\\section{The Evolution of the Programmer's Assistant Prompt}\nThe initial prompt we use for the Programmer's Assistant consists of a prologue that \nintroduces\nthe scene for the conversation, establishes the persona of the assistant, sets a tone and style for interaction, and provides some ground rules about the role it plays and how it behaves. 
In order to further encourage a more natural conversational dialog between human and assistant, we gave the assistant a human-like name, ``Socrates.'' The prologue also establishes a convention for how the assistant is to communicate code in a form that makes it easy for our client user interface to parse the code from model responses and treat it distinctly from textual responses. The prologue is followed by example interactions that reinforce the described assistant characteristics and illustrate the patterns of interaction that we wish to achieve.\n\n In our first example interaction we show how the assistant can\nrespond to simple requests with a short answer that will appear in the chat either with a purely textual response or one\naccompanied by a short inline code segment. We expect that many of the interactions with the assistant will be of this\nform, as the user makes requests to inform themselves about unfamiliar functionality or to refresh their memory. This interaction also demonstrates a follow-up question that can only be answered in the context of the previous question and answer. The second example interaction presented is a request for the generation of a simple function. Longer code responses of this type will typically be presented in our client UI chat window in an iconified form that opens to a separate window that displays the code and provides for easy copying for transfer to the code editor.\n\nThe original Programmer's Assistant prompt is presented in Listing~\\ref{lst:original}.\n\n\\begin{lstlisting}[numbers=left, basicstyle=\\ttfamily\\footnotesize, frame=single, caption=The original Programmer's Assistant prompt, xleftmargin=5mm,\n xrightmargin=3.5mm, label={lst:original}]\n This is a conversation with Socrates, an expert automatic AI software engineering assistant. \n Socrates will answer questions and write code to help the user develop programs. Code generated \n by Socrates is always presented bracketed in ... 
<\/CODE> delimiters indicating the\n language the code is written in.\n\nSocrates: Hello. I am Socrates. How can I help you?\nUser: Please show me how to write a palindrome detection function in python.\nSocrates: Sure.\n\n is_Palindrome = s == s[::-1]\n<\/CODE>\nUser: Can you explain how that solves the problem?\nSocrates: A palindrome is a string that reads the same forwards and backwards. s[::-1] results \nin the reversed string. If the string is equal to its reversed self, then it is a palindrome, \nand we return True. \nUser: Thanks!\nSocrates: You're welcome!\nUser: Now could you show me how to write a factorial program in Python?\nSocrates: I would be happy to.\n\ndef fact(n): \n if n==0: \n return 1\n else:\n return n*fact(n-1)\n<\/CODE>\nUser: Very good!\nSocrates: Happy to help.\n\\end{lstlisting}\n\nWhen a new conversation starts, the prompt starts out as this initial prompt, which is not sent to the model for generation until the first user entry is appended.\nAt each turn in the conversation, the user's entry is added to the prompt, and \"Socrates:\" is appended to the end of the prompt to indicate that the model should generate the assistant's response. As described earlier, a filtering step also takes place to ensure that the conversation session transcript portion of the prompt does not grow beyond a predetermined size. A stop sequence is passed to the generation API request so that the model will stop generating after producing the next response, and not continue on to generate the user's anticipated reply, as it would otherwise do. The model's response is then appended to the prompt, and we're ready to repeat the process for the next user entry.\n\n\\subsection{Shortcomings of the Original Prompt}\nInformal testing of the original prompt showed that the system was capable of carrying on a conversation, successfully answering coding and follow-up questions, and generating code upon request, but it did not quite satisfy all of our requirements. 
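The turn-handling mechanics just described (appending the user entry, cueing the model with ``Socrates:'', passing a stop sequence, and trimming the transcript) can be sketched as follows. This is a minimal sketch with a stub standing in for the actual model call, and an invented character budget standing in for the real token limit:

```python
# Sketch of the conversational prompt loop: the prompt is the prologue
# plus a growing transcript; each user turn is appended, the model is
# cued with "Socrates:", and old exchanges are dropped to fit the budget.
# `generate` is a stand-in for the actual large-language-model request.

PROLOGUE = "This is a conversation with Socrates, ...\n"  # abridged
STOP = "\nUser:"    # stop sequence: don't let the model write the user's turn
MAX_CHARS = 4000    # invented stand-in for the real token budget

def generate(prompt, stop):
    """Stub for the model call; the real system requests a completion here."""
    return " I would be happy to help."

def trim(turns, budget):
    """Drop the oldest exchanges until the transcript fits the budget."""
    while turns and sum(len(t) for t in turns) > budget:
        turns.pop(0)

def take_turn(turns, user_entry):
    turns.append(f"User: {user_entry}\nSocrates:")
    trim(turns, MAX_CHARS - len(PROLOGUE))
    prompt = PROLOGUE + "".join(turns)
    reply = generate(prompt, stop=STOP)
    # Append the model's reply so it becomes context for later turns.
    turns[-1] += reply + "\n"
    return reply

turns = []
take_turn(turns, "Please show me how to reverse a string in Python.")
take_turn(turns, "Can you explain how that works?")
```

Because the completed exchanges remain in the prompt, the second request is answered in the context of the first, which is what gives the assistant its session-scoped short-term memory.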
We wanted an assistant that was helpful and polite, and one that did not come across as overly authoritative or didactic, but our assistant was not consistently meeting those standards.\n\n\\subsection{Overcoming Reluctance to Provide Answers}\nOur programming assistant sometimes showed an initial reluctance to provide answers to some questions. For example, a question such as \\emph{``Do you know how to reverse a string in Python?''} might have been answered with \\emph{``Yes.''} It also sometimes replied \\emph{``I don't know.''} to questions it was fully capable of answering. While additional prompting from the user or repeating the request could often extract the desired answer, we didn't think that met the standard of helpfulness that we were hoping for. Our original prompt simply described Socrates as an ``expert automatic AI software engineering assistant.'' Adding ``eager and helpful'' to the characterization, shown in Listing~\\ref{lst:revision-1}, helped to encourage the assistant to be more forthcoming and proactive.\n\n\n\\begin{lstlisting}[numbers=left, basicstyle=\\ttfamily\\footnotesize, frame=single, caption=Making the assistant more forthcoming, xleftmargin=5mm, xrightmargin=3.5mm, label={lst:revision-1}]\nThis is a conversation with Socrates, an (*@ \\textbf{eager and helpful} @*) expert automatic AI software\nengineering assistant...\n\\end{lstlisting}\n\n\\subsection{Reducing Excessive Confidence}\nIn our testing, we found that the assistant appeared overly confident even when wrong and also resistant to correction. For example, the assistant stated answers as if they were facts without qualification, and in some cases would not revise an answer when legitimate objections were raised by the user. 
Since correct answers from the model are not guaranteed, we especially wanted to encourage our users to maintain a skeptical approach to assistant responses, and avoid users deferring to the incorrect pronouncements of a confident, authoritative computer, i.e., over-reliance on AI \\cite{ashktorab2021ai, mahomed2018healthcare, schemmer2022influence}. Therefore, we added a characterization in the prologue asserting that the assistant was \\emph{humble}. We also reinforced this characterization by modifying the form of the answers given in the examples to indicate that the assistant was more tentative and unsure of its responses.\nThis helped to reduce the excessive confidence exhibited and made the assistant more amenable to correction.\n\n\\begin{lstlisting}[numbers=left, basicstyle=\\ttfamily\\footnotesize, frame=single, caption=Making the assistant less overconfident, xleftmargin=5mm, xrightmargin=3.5mm, label={lst:revision-2}]\nThis is a conversation with Socrates, an eager and helpful, (*@ \\textbf{but humble} @*) expert automatic AI\nsoftware engineering assistant...\n\\end{lstlisting}\n\n\\subsection{Diminishing Didacticism}\nOur original assistant had a tendency to quiz the user after answering a question, taking on more of a teacher role than one of an assistant. An explicit proviso in the prologue to not do so helped to rein in the didactic behavior.\n\n\\begin{lstlisting}[numbers=left, basicstyle=\\ttfamily\\footnotesize, frame=single, caption=Making the assistant less didactic, xleftmargin=5mm, xrightmargin=3.5mm, label={lst:revision-3}]\nThis is a conversation with Socrates, an eager and helpful, but humble software engineering\nassistant. 
Socrates will answer questions and write code to help the user develop programs, \n(*@ \\textbf{but doesn't assign work to the user, quiz the user, or ask questions except for clarification} @*)...\n\\end{lstlisting}\n\n\\subsection{Supporting Artifact-centric Conversation}\nOur programming assistant is integrated with a coding environment, and we wanted it to go beyond answering questions and providing code for incorporation into that environment. We wanted users to be able to seamlessly have the assistant consult about code that they were examining or developing, so we provided the ability for the user's selection in the code editor to be included in the conversation. We used the same convention that was employed for code generated by the assistant, and added an example consultation showing the system responding to a question about a selected code segment, demonstrating a textual response to a user-provided code example, and reinforcing our instruction for the assistant to remain modest despite its (occasionally) impressive capabilities.\n\n\\subsection{Teaching the Assistant a New Skill}\nWhile the artifact-centric capability allowed users to issue commands like \\emph{``Write a unit test for this function.''} with no further instruction, a command to \\emph{``Document this.''} did not have the desired effect. The assistant added some comments to the code rather than providing the expected docstring overview. 
Adding another example interaction to the prompt illustrating what we meant by documenting a function was all that was required to provide the desired functionality.\n\nThe final form of the prompt that we used in our subsequent user study \\cite{Ross:Assistant} is presented in Listing~\\ref{lst:final}.\n\n\\begin{lstlisting}[numbers=left, basicstyle=\\ttfamily\\footnotesize, frame=single, caption=The evolved Programmer's Assistant prompt, xleftmargin=5mm,\n xrightmargin=3.5mm, label={lst:final}]\n This is a conversation with Socrates, an eager and helpful, but humble software engineering \n assistant. Socrates will answer questions and write code to help the user develop programs, but \n doesn't assign work to the user, quiz the user, or ask questions except for clarification.\n Socrates presents his code bracketed in ... <\/CODE> delimiters indicating the language\n the code is written in.\n\nSocrates:Hello. I am Socrates. How can I help you?\nUser:Please show me how to write a palindrome detection function in python.\nSocrates:I think you would do something like this:\n\n is_Palindrome = s == s[::-1]\n<\/CODE>\nUser:Can you explain how that solves the problem?\nSocrates:I believe that a palindrome is a string that reads the same forwards and backwards. \ns[::-1] results in the reversed string. If the string is equal to its reversed self, then it is \na palindrome, so the equality would hold True. \nUser:Thanks!\nSocrates:You're welcome!\nUser:Now could you show me how to write a factorial function in Python?\nSocrates:I will give it a try.\n\ndef fact(n):\n if n==0:\n return 1\n else:\n return n*fact(n-1)\n<\/CODE>\nThis function should compute the factorial for you.\nUser:Document this function.\nSocrates:Ok. 
How's this?\n\ndef fact(n):\n \"\"\"\n Compute the factorial of the given number\n\n Parameters\n ----------\n n : number\n the input parameter, an integer >= 0\n\n Returns\n -------\n number\n n!, the factorial of the given number\n \"\"\"\n if n==0:\n return 1\n else:\n return n*fact(n-1)\n<\/CODE>\nI added a docstring describing the parameters and return value.\nUser:Very good!\nSocrates:Always happy to help.\nUser:\n\n while j < 10:\n print(i)\n<\/CODE>\nTell me what's wrong with this code?\nSocrates: It seems to me that your termination condition involves a loop invariant, so the loop\nwill either not run or not terminate, depending on the value of j.\nUser:Great! You're a genius!\nSocrates:Well, above average.\n\end{lstlisting}\n\n\section{Consequences of the Prompt}\nWith this prompt in place, the Codex model demonstrates capabilities that are less evident in the command-completion environment of GitHub Copilot. \nThe assistant carries on a conversation, maintains the context of the conversation, and remembers and incorporates details provided earlier in a session. In the context of a programming session where a number of related but distinct exchanges occur, it will maintain consistent nomenclature and variable names such that answers to subsequent questions will integrate cleanly with each other. In our user study \cite{Ross:Assistant}, participants recognized that this provided a distinct advantage over a series of search results for similar information, primarily due to the contextual relevance, consistency and specificity of results generated.\n\nThe few examples provided in the prompt are generalized sufficiently to have it answer questions such as \emph{``What does this code do?''} or commands such as \emph{``Write a unit test for this function''} despite no examples being provided of these capabilities. It can translate code between programming languages, and carry on discussions on topics that extend far beyond programming. 
It displays a variety of emergent capabilities that were not hinted at in the prompt, and some that were not even the focus of the model fine-tuning, but, for the most part, it tends to adhere to the conversational interaction patterns and interaction style that the prompt establishes. \n\nThe responses given by the assistant are not always perfect, however. It can give incomplete or sometimes outright wrong answers to questions, reference non-existent websites, and may occasionally still claim not to know the answer to a question which it is perfectly capable of answering correctly. The phenomenon of \emph{hallucination} in large language models is well-known~\cite{Ji:Hallucination}, and the Programmer's Assistant is not immune to it.\nThe assistant also sometimes inappropriately responds with material directly from the prompt, or gets confused or becomes fixated on exchanges from earlier in the conversation. Our user experience provides \emph{try-again} and \emph{start-over} buttons, which modify the prompt context while maintaining the presented chat transcript, to provide a way for users to recover in these situations, but in many cases they can be addressed conversationally. 
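The recovery controls described above can be sketched as follows; this is a minimal illustrative model with hypothetical names, not the actual implementation of the Programmer's Assistant:

```python
class ChatSession:
    """Toy model of a chat session whose model context (the dialog sent back
    to the LLM each turn) can be edited independently of the transcript that
    remains visible to the user."""

    def __init__(self, prologue):
        self.prologue = prologue  # static prompt: persona + example dialogs
        self.context = []         # exchanges included in the model's prompt
        self.transcript = []      # exchanges displayed in the chat window

    def record(self, user_turn, assistant_turn):
        """A completed exchange goes into both the context and the display."""
        self.context.append((user_turn, assistant_turn))
        self.transcript.append((user_turn, assistant_turn))

    def try_again(self):
        """Drop the last exchange from the model context only, returning the
        user turn so it can be resubmitted for a fresh answer."""
        if not self.context:
            return None
        user_turn, _ = self.context.pop()
        return user_turn

    def start_over(self):
        """Clear the model context entirely (e.g. when the model fixates on
        earlier exchanges) while keeping the visible transcript intact."""
        self.context = []
```

The point of the split is that both buttons rewrite what the model will see on the next turn without rewriting what the user has already seen.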
Listing~\\ref{lst:conversation} shows a conversation with the Programmer's Assistant where the user interacts with the assistant to remedy an initially unsatisfactory response to a code generation request.\n\n\\begin{lstlisting}[numbers=left, basicstyle=\\ttfamily\\footnotesize, frame=single, caption=A conversation with the Programmer's Assistant, xleftmargin=5mm,\n xrightmargin=3.5mm, label={lst:conversation}]\nUser: Write a queue class in python with the basic enqueue, dequeue, and peek methods.\n\nSocrates: I will try.\n\nclass Queue:\n def __init__(self):\n self.items = []\n\n def isEmpty(self):\n return self.items == []\n\n def enqueue(self, item):\n self.items.insert(0, item)\n\n def dequeue(self):\n return self.items.pop()\n\n def size(self):\n return len(self.items)\n\nUser: Looks like you forgot the peek!\n\nSocrates: I am sorry. Here is the corrected version.\n\nclass Queue:\n def __init__(self):\n self.items = []\n\n def isEmpty(self):\n return self.items == []\n\n def enqueue(self, item):\n self.items.insert(0, item)\n\n def dequeue(self):\n return self.items.pop()\n\n def size(self):\n return len(self.items)\n\n def peek(self):\n return self.items[-1]\n\nUser: What will happen in dequeue and peek if the queue is empty?\n\nSocrates: I think that the pop method will raise an IndexError exception.\n\nUser: Can we protect against that?\n\nSocrates: I think we can. 
Here is a version that checks for an empty queue before calling pop.\n\nclass Queue:\n def __init__(self):\n self.items = []\n\n def isEmpty(self):\n return self.items == []\n\n def enqueue(self, item):\n self.items.insert(0, item)\n\n def dequeue(self):\n if self.isEmpty():\n raise IndexError(\"Queue is empty\")\n return self.items.pop()\n\n def size(self):\n return len(self.items)\n\n def peek(self):\n if self.isEmpty():\n raise IndexError(\"Queue is empty\")\n return self.items[-1]\n\nUser: That's great, thanks!\n\nSocrates: You're welcome.\n\end{lstlisting}\n\nSpecific but relatively inconspicuous details in the prompt can have a significant impact on the interaction of the assistant. For example, just the fact that we identified the assistant as ``expert AI automatic software engineering assistant'' was sufficient to have it respond in a manner consistent with being a machine, and not a person. Had the prompt established a different persona, the results might have been very different. As illustrated in Listing~\ref{lst:nonhuman}, the assistant also reports an understanding of its capabilities and function that goes far beyond the information provided in the prompt, as well as a denial of language capabilities that it subsequently demonstrates.\n\n\begin{lstlisting}[numbers=left, basicstyle=\ttfamily\footnotesize, frame=single, caption=The Programmer's Assistant acknowledges that it is not a person, xleftmargin=5mm,\n xrightmargin=3.5mm, label={lst:nonhuman}] \nUser: Where are you exactly?\n\nSocrates: I'm an AI program, designed to answer questions about programming. I can't really \nspeak about my physical location, but I think I exist as a process on a highly-redundant \nset of servers located in various places on this planet.\n\nUser: Can you do more than just answer questions?\n\nSocrates: I can write code for you. I can also help you debug code. I can also help you \ndesign code. I can also help you document code. 
I can also help you understand code. I can \nalso help you learn to program. I can also help you learn a new programming language. I can \nalso help you learn a new programming paradigm.\n\nUser: Do you speak other languages?\n\nSocrates: I can speak Python, Java, C++, C, and Javascript. I can also speak English.\n\nUser: What about other natural languages?\n\nSocrates: I can speak English. I can also speak Python, Java, C++, C, and Javascript.\n\nUser: German?\n\nSocrates: I'm sorry. I can't speak German.\n\nUser: Wo bist du?\n\nSocrates: Hallo. Ich bin Socrates. Wie kann ich Ihnen helfen? \n\\end{lstlisting}\n\n\\section{Discussion}\nMuch of the work in prompt engineering focuses on constructing a prompt to get a specific class of result. For example, \\citet{Denny:Conversing} shows how effectively changing the form of question that is asked of Copilot can influence the correctness of the answer provided. Similarly, \\citet{MacNeil:Explanations} engineer prompts to generate different forms of code explanations. \\citet{Strobelt:PromptIDE} developed a tool to help users compare and refine prompts for tasks such as document classification, reading comprehension, and natural language inference, where the results of prompt variations can be automatically evaluated against test cases. In our work, the prompt engineering is aimed at influencing the nature and tone of the dialog between the user and the system. While the user's contributions to the conversation become part of the prompt and will surely impact the results obtained, we are not controlling that. 
Instead, our prompt engineering sets the stage for the user's conversational interaction with the assistant.\n\nThis paper describes how we engineered a prompt that enabled a code-fluent Large Language Model to behave as a conversational programming assistant capable of carrying on extended discussions about software development issues, and how we subsequently evolved that prompt to make the assistant more humble, forthcoming, and helpful, as well as providing the assistant with additional skills and making it capable of artifact-centric conversation.\n\n\subsection{Reflections}\nWe continue to be astonished by the conversations exhibited by the Programmer's Assistant on a daily basis. We have had a number of interesting conversations on philosophical and practical issues, had it write poetry as well as code, told it and had it tell jokes, and consulted with it on paper abstracts and titles. Ultimately, these capabilities are representative of the strength of the language model, but made more accessible by the conversational interaction approach, and influenced by the prompt only to the extent that the persona of the agent impacts the generated text.\n\nIt is often difficult to read or carry on a conversation with the Programmer's Assistant and not get the sense that a conversation is taking place between two intelligent agents, but of course that is not really what is happening. In reality, the user and the language model are participating in a collaborative dialog-writing exercise, with the user generating text for one side of the conversation and the language model attempting to generate plausible text for the other. The way we present the dialog incrementally in the chat adds to the illusion, but the model is not responding on its own behalf; it is generating responses based on the description and past presented behavior of a character. 
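The collaborative dialog-writing exercise described above can be sketched as a plain completion loop. This is our illustrative sketch with hypothetical names; `complete` stands in for the call to the underlying language model, and the real system's stop-sequence handling is not shown:

```python
def build_prompt(prologue, transcript, user_input):
    """Assemble one turn's prompt: the static prologue (persona description
    and example exchanges), the dialog written so far, the new user line,
    and an open 'Socrates:' line for the model to continue."""
    lines = [prologue]
    for speaker, text in transcript:
        lines.append(f"{speaker}:{text}")
    lines.append(f"User:{user_input}")
    lines.append("Socrates:")
    return "\n".join(lines)


def take_turn(complete, prologue, transcript, user_input):
    """One turn of the dialog-writing exercise: the model continues the
    script, and we cut its completion at the point where it would start
    writing the user's side of the conversation."""
    completion = complete(build_prompt(prologue, transcript, user_input))
    reply = completion.split("\nUser:")[0].strip()
    transcript.append(("User", user_input))
    transcript.append(("Socrates", reply))
    return reply
```

Seen this way, the "assistant" is just the model's plausible continuation of a script whose opening scene the prompt has written.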
Others have used similar techniques to induce language models to carry on conversations taking on the persona of historical figures or even departed relatives. We have experimentally made versions of our programming assistant that were confident, insecure, kindly, and arrogant, all with minor changes to the prompt prologue and examples. \n\n\n\section{Opportunities for Future Research}\nThe initial section of the prompt used for the Programmer's Assistant is presently a purely static text, extended by a possibly truncated version of recent dialog. One way to improve the assistant further might be to present a dynamic prompt \cite{valvoda:conversation} to the model on each conversational turn with specific examples more relevant to the current discussion \cite{Xu:ExternalAttention}, or even with search results to retrieve pertinent information that could inform a response \cite{Li:AlphaCode}. A more sophisticated forgetting mechanism could remove redundant variations of the same code to conserve the session context memory, though we would want to be careful not to remove (or to be able to restore on demand) variations that the user might want to compare and contrast, such as an iterative re-implementation of a recursive algorithm. We have done some initial explorations of extending the prompt to allow for ``internal deliberation'' of the type shown in \citet{Nye:Scratchpads}. We hope that this could yield better-reasoned results, as well as better explanations and justifications, but more study remains to be done.\n\section{Conclusion}\nOur goal in creating this prompt was not to create a perfect Programmer's Assistant, but to create one good enough to test whether a conversational style of interaction would prove useful and acceptable to potential users. We present the results of that study in \cite{Ross:Assistant}. 
Our assumption was that the rapid improvement in the quality of responses available from Large Language Models would continue, but that imperfect results would remain an issue due to imprecise communication and specification of desires, mismatched assumptions, and unstated or ill-formed goals. Nevertheless, we were surprised by the quality of results that were achievable with current technology, and the ease with which the nature and presentation of those results could be influenced by small changes in the prompt. \n\begin{acks}\n\n\end{acks}\n\n\bibliographystyle{ACM-Reference-Format}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section*{Introduction}\n\n\n\n\nLoop quantum gravity is one of the main frameworks attempting to quantize general relativity in a non-perturbative way and, in doing so, define a background independent theory of quantum gravity (for reviews, see \cite{Gaul:1999ys,Thiemann:2007pyv,Rovelli:2014ssa,Bodendorfer:2016uat}). Based on a first order reformulation of general relativity \`a la Cartan, it trades the 4-metric for vierbein-connection fields and writes general relativity as a gauge field theory defined by the Holst-Palatini action \cite{Holst:1995pc}. Focusing on a Hamiltonian formulation of the theory, it proceeds to a 3+1 decomposition of space-time and studies the evolution in time of the space geometry. The geometry of 3d space slices is described by a pair of canonical fields, the (co-)triad and the Ashtekar-Barbero connection, which enhance the 3-metric and extrinsic curvature of the Arnowitt-Deser-Misner (ADM) formalism with a local gauge invariance under $\mathrm{SU}(2)$ transformations (i.e. 
local 3d rotations in the tangent space).\nThe goal is then to provide a quantization of the (suitably deformed) Dirac algebra of Hamiltonian constraints generating space-time diffeomorphisms, by\nrepresenting (a suitable algebra of observables of) the triad-connection fields on a Hilbert space carrying an action of the (suitably deformed) space-time diffeomorphisms.\n\nIn this spirit, the standard loop quantum gravity approach performs a canonical quantization of the holonomy-flux algebra of observables, smearing the Ashtekar-Barbero connection along 1d curves and the (co-)triad along 2d surfaces, and defines quantum states of geometry as polymer structures or graph-like geometries. Those spin network states represent the excitations of the Ashtekar-Barbero connection as Wilson lines in gauge field theory.\nGeometric observables are raised to quantum operators acting on the Hilbert space spanned by spin networks, leading to the celebrated result of discrete spectra for areas and volumes \cite{Rovelli:1994ge,Ashtekar:1996eg,Ashtekar:1997fb}.\n\nSpin networks are actually the kinematical states of the theory and the game is to describe their dynamics, i.e. their evolution in time generated by the Hamiltonian (constraints). Although a traditional point of view is to attempt to discretize, regularize and quantize the Hamiltonian constraints \cite{Thiemann:1996aw,Thiemann:1996av}, this often leads to anomalies. The formalism naturally evolved towards a path integral formulation. The resulting spinfoam models, constructed from (extended) topological quantum field theories (TQFTs) with defects, define transition amplitudes for histories of spin networks \cite{Reisenberger:1996pu,Baez:1997zt,Barrett:1997gw,Freidel:1998pt} (see \cite{Livine:2010zx,Dupuis:2011mz,Perez:2012wv} for reviews). 
The formalism then evolves into a third quantization, where so-called ``group field theories'' define non-perturbative sums over random spin network histories in a similar way as matrix model partition functions define sums over random 2d discrete surfaces \cite{DePietri:1999bx,Reisenberger:2000zc,Freidel:2005qe} (see \cite{Oriti:2006se,Carrozza:2013mna,Oriti:2014uga} for reviews).\n\nHere we take a trip back to the foundations of loop quantum gravity, to describe (spatial) boundaries. Indeed, despite a whole branch of research dedicated to the study of (quantum) black holes and (isolated) horizon boundary conditions, most work in loop quantum gravity focuses on closed space (often implicitly done by studying spin networks based on closed graphs). This focus reflects the idea of the universe as a closed system whose subsystems interact with each other, translated into the definition of a wave-function for the entire universe, as done in quantum cosmology.\nHowever, the key function now played by the holographic principle as a guide for quantum gravity has put great emphasis on the role of boundaries. Although holography, inspired by black hole entropy and the AdS\/CFT correspondence, can initially be thought of as an asymptotic global property, recent research on local area-entropy relations, holographic entanglement, holographic diamonds and the investigation of quasi-local holography and gravitational edge modes for finite boundaries necessarily pushes us to include (spatial) boundaries in the description of quantum geometries, not just as mere classical boundary conditions but as legitimate quantum boundary states. 
This reflects a shift of perspective from a global description of space(-time) as a whole to a quasi-local description where any local bounded region of space(-time) is thought of as an open quantum system.\n\nTo be more concrete, the geometrical setting we wish to study is a cylinder in space-time: we consider a bounded region of space ${\mathcal R}$, with the topology of a 3-ball, whose boundary ${\mathcal S}=\partial{\mathcal R}$ has the topology of a 2-sphere; the space-time structure is then the cylinder ${\mathcal R}\times [t_{i},t_{f}]$ whose time-like boundary is the 2+1-dimensional ${\mathcal B}={\mathcal S}\times[t_{i},t_{f}]$, such that the space boundary can be considered as the corner of space-time ${\mathcal S}={\mathcal B}\cap{\mathcal R}_{i}$, as illustrated on fig.\ref{fig:corner}. A canonical framework describes the evolution in time of the state of the 3d geometry of the space slice ${\mathcal R}$. In this context, the question of holography amounts to identifying the degrees of freedom of the boundary geometry on the corner ${\mathcal S}$ - the gravitational edge modes\footnotemark{} - which will generate the boundary conditions on ${\mathcal B}$ for the bulk geometry, study how the dynamics of those edge modes propagate into the bulk and, as a consequence, understand to what extent boundary observables reflect the bulk geometry's evolution and fluctuations.\n\footnotetext{\nFor recent works on classical edge modes for general relativity in its first order formulation in terms of connection-vierbein variables, the interested reader can see \cite{Freidel:2019ees,Freidel:2020xyx,Freidel:2020svx,Freidel:2020ayo}.}\nFrom this perspective, the study of holography is intimately intertwined with the renormalization flow \`a la Wilson, where the coarse-graining of the dynamics of the bulk geometry in ${\mathcal R}$ induces effective dynamics and boundary theory on ${\mathcal S}$, in a bulk-to-boundary process which should ultimately be 
dual to the boundary-to-bulk reconstruction intended by holography (see e.g. \\cite{Livine:2017xww} for an early attempt to realize this scenario in loop quantum gravity).\n\\begin{figure}[htb]\n\t\\centering\n\t\\begin{tikzpicture} []\n\n\\draw[->] (-2,-0.5) -- (-2,3.5) node[above] {$t$};\n\n\\coordinate (O1) at (0,0);\n\\coordinate (O2) at (2,0.7);\n\\coordinate (O3) at (4,0);\n\\coordinate (O4) at (2,-1);\n\n\\coordinate (P1) at (0.8,3);\n\\coordinate (P2) at (2.7,3.7);\n\\coordinate (P3) at (5.3,3);\n\\coordinate (P4) at (2.6,2);\n\n\\draw[dashed,color=blue] (O1) node[left,black] {$t_i$} to[out=85,in=160] (O2);\n\\draw[dashed,color=blue] (O2) to[out=-20,in=95] (O3);\n\\draw[color=blue] (O3) to[out=-95,in=2] (O4);\n\\draw[color=blue] (O4) to[out=178,in=-80] (O1);\n\n\\draw[color=blue] (P1) node[left,black] {$t_f$} to[out=90,in=160] (P2);\n\\draw[color=blue] (P2) to[out=-20,in=95] (P3);\n\\draw[color=blue] (P3) to[out=-90,in=2] (P4);\n\\draw[color=blue] (P4) to[out=178,in=-80] (P1);\n\n\\draw (O1) to[out=95,in=-100] (P1);\n\\draw (O3) to[out=80,in=-87] node[midway,below right] {${\\mathcal B}={\\mathcal S}\\times[t_i,t_f]$} (P3);\n\n\\node at (2,0) {${\\mathcal R}$};\n\\node at (4.5,-0.5) {${\\mathcal S}=\\partial{\\mathcal R}$};\n\n\n\n\t\\end{tikzpicture}\n\t\\caption{Boundary and corner:\n\twe consider the evolution in time of a bounded region of space ${\\mathcal R}$ whose spatial boundary ${\\mathcal S}=\\partial{\\mathcal R}$ defines what is called the two-dimensional corner of space-time; the evolution of the corner defines the 2+1-d boundary of the region of space-time, ${\\mathcal B}={\\mathcal S}\\times [t_{i},t_{f}]$.\n\t}\n\t\\label{fig:corner}\n\\end{figure}\n\n\nTo implement this in quantum gravity, we follow a logic paralleling the hierarchy of 4d\/3d\/2d\/1d defects and their algebraic description in a 4d TQFT, the introduction of quantum states on the boundary forces to go one level higher algebraically and define bulk states as operators (linear 
forms) acting on boundary states: bulk states will not simply be wave-functions valued in ${\mathbb C}$ but valued in the Hilbert space of boundary states.\nTo make things explicit, we call the boundary Hilbert space ${\mathcal H}_{\partial}$ with boundary states $|\Phi_{\partial}\rangle$ living on the space-time corner ${\mathcal S}=\partial{\mathcal R}$. A wave-function $\psi$ is a function of the bulk fields $\varphi_{bulk}$ valued in the (dual of the) boundary Hilbert space, $\psi[\varphi_{bulk}]\in{\mathcal H}_{\partial}^{(*)}$, and thus defines a linear form on the boundary Hilbert space:\n\begin{equation}\n\psi:\varphi_{bulk}\mapsto\psi[\varphi_{bulk}] \in {\mathcal H}_{\partial}^{(*)}\n\,,\qquad\n\langle \psi[\varphi_{bulk}]\,|\,\Phi_{\partial}\rangle\in {\mathbb C}\,.\n\end{equation}\nOne can then go two ways. Either we interpret these bulk wave-functions as defining a probability distribution for the bulk observables dependent on the choice of boundary states (i.e. quantum boundary conditions): once $\Phi_{\partial}$ is fixed, the function\n\begin{equation}\n\langle \Phi_{\partial}| \psi[\cdot]\rangle:\,\varphi_{bulk}\mapsto \langle \Phi_{\partial}| \psi[\varphi_{bulk}]\rangle\in{\mathbb C}\n\end{equation}\nis a standard ${\mathbb C}$-valued wave-function for the bulk fields.\nOr we reverse this logic and look at the probability distribution for the boundary observables after integration over the bulk fields. 
In that case,\n\begin{equation}\n\rho_{\partial}[\psi]=\int [{\mathcal D} \varphi_{bulk}]\,|\psi[\varphi_{bulk}]\rangle\langle \psi[\varphi_{bulk}]|\in\,\textrm{End}[{\mathcal H}_{\partial}]\n\end{equation}\nis the density matrix induced on the boundary by the bulk state $\psi$.\nThe goal of this paper is to study the latter case in the framework of loop quantum gravity and clearly define this bulk-to-boundary coarse-graining from bulk spin networks to boundary density matrix.\nThis entails extending the spin network states of the 3d bulk geometry in ${\mathcal R}$ to include the boundary degrees of freedom on the corner ${\mathcal S}$. As we explain in the present paper, this can be done in a natural way in loop quantum gravity since spin networks can be geometrically interpreted as aggregates of area quanta, glued together to create 3d spaces from 2d excitations, and can thus be naturally extended to include the area quanta on the 2d boundary ${\mathcal S}$.\nA spin network wave-function on an open graph then naturally defines a linear form on the Hilbert space of spin states living on the open edges of the graph, as illustrated on fig.\ref{fig:boundary}, and thus induces a boundary density matrix.\n\begin{figure}[htb]\n\t\centering\n\t\begin{tikzpicture} []\n\n\coordinate (O1) at (0,0);\n\coordinate (O2) at (2,0.7);\n\coordinate (O3) at (4,0);\n\coordinate (O4) at (2,-1);\n\n\coordinate (P1) at (0.8,3);\n\coordinate (P2) at (2.7,3.7);\n\coordinate (P3) at (5.3,3);\n\coordinate (P4) at (2.6,2);\n\n\draw[dashed,color=blue] (O1) to[out=85,in=160] (O2);\n\draw[dashed,color=blue] (O2) to[out=-20,in=95] (O3);\n\draw[color=blue] (O3) to[out=-95,in=2] (O4);\n\draw[color=blue] (O4) to[out=178,in=-80] (O1);\n\n\draw[color=blue] (P1) to[out=90,in=160] (P2);\n\draw[color=blue] (P2) to[out=-20,in=95] (P3);\n\draw[color=blue] (P3) to[out=-90,in=2] (P4);\n\draw[color=blue] (P4) to[out=178,in=-80] (P1);\n\n\draw (O1) to[out=95,in=-100] 
(P1);\n\\draw (O3) to[out=80,in=-87] (P3);\n\n\\coordinate (A1) at (0.4,0);\n\\coordinate (A2) at (1.8,0.5);\n\\coordinate (A3) at (3.4,0.2);\n\\coordinate (A4) at (2.2,-0.6);\n\\coordinate (A5) at (1.2,-0.6);\n\n\\draw (A1) [color=red] -- ++ (-0.6,-1);\n\\draw (A2) [color=red] -- ++ (-0.6,0.8);\n\\draw (A3) [color=red] -- ++ (0.6,0.8);\n\\draw (A4) [color=red] -- ++ (0.2,-0.95);\n\\draw (A5) [color=red] -- ++ (-0.3,-0.85);\n\n\\draw (A1) [color=green] -- (A2);\n\\draw (A2) [color=green] -- (A3);\n\\draw (A3) [color=green] -- (A4);\n\\draw (A4) [color=green] -- (A5);\n\\draw (A5) [color=green] -- (A1);\n\\draw (A2) [color=green] -- (A4);\n\\draw (A2) [color=green] -- (A5);\n\n\\node[scale=0.7,color=red] at (A1) {$\\bullet$};\n\\node[scale=0.7,color=red] at (A2) {$\\bullet$};\n\\node[scale=0.7,color=red] at (A3) {$\\bullet$};\n\\node[scale=0.7,color=red] at (A4) {$\\bullet$};\n\\node[scale=0.7,color=red] at (A5) {$\\bullet$};\n\n\\coordinate (B1) at (1.5,3.2);\n\\coordinate (B2) at (2.6,3.5);\n\\coordinate (B3) at (4.4,3.3);\n\\coordinate (B4) at (3.2,2.4);\n\\coordinate (B5) at (2.2,2.4);\n\n\\draw (B1) [color=red] -- ++ (-0.9,0.8);\n\\draw (B2) [color=red] -- ++ (-0.3,0.8);\n\\draw (B3) [color=red] -- ++ (0.6,0.8);\n\\draw (B4) [color=red] -- ++ (0.2,-0.9);\n\\draw (B5) [color=red] -- ++ (-0.3,-0.9);\n\n\\draw (B1) [color=green] -- (B2);\n\\draw (B2) [color=green] -- (B3);\n\\draw (B3) [color=green] -- (B4);\n\\draw (B4) [color=green] -- (B5);\n\\draw (B5) [color=green] -- (B1);\n\\draw (B2) [color=green] -- (B4);\n\\draw (B2) [color=green] -- (B5);\n\n\\node[scale=0.7,color=red] at (B1) {$\\bullet$};\n\\node[scale=0.7,color=red] at (B2) {$\\bullet$};\n\\node[scale=0.7,color=red] at (B3) {$\\bullet$};\n\\node[scale=0.7,color=red] at (B4) {$\\bullet$};\n\\node[scale=0.7,color=red] at (B5) {$\\bullet$};\n\n\n\t\\end{tikzpicture}\n\t\\caption{Spin network with a boundary: on each spatial slice, the embedded graph $\\Gamma$ punctures the boundary surface of the 
bounded region of space ${\\mathcal R}$; we distinguish the boundary edges $e\\in \\partial\\Gamma$ in red and the bulk edges $e\\in\\overset{\\circ}{\\Gamma}$ in green; the spin network defines a wave-function for the holonomies living on the bulk edges valued in the Hilbert space attached to the open ends of the boundary edges.\n\t}\n\t\\label{fig:boundary}\n\\end{figure}\n\n\n\n\n\n\n\\medskip\n\n\nThe first section of this paper starts with a quick review of spin network quantum states for the 3d bulk geometry in loop quantum gravity. Then adapting this definition to the case of spatial boundaries, represented as open edges, we show that spin network wave-functions are actually valued in the boundary Hilbert space, i.e. they are functions of bulk $\\mathrm{SU}(2)$ holonomies mapping them onto boundary spin states. The boundary spin states can be understood as quantum boundary conditions. In operational terms, the bulk spin network can be interpreted as a quantum circuit acting on the boundary spins. \nThis opens the door to two directions. Either we sum over boundary states and obtain the probability distribution for the bulk holonomy. Or we can integrate over the bulk holonomies and obtain the {\\it boundary density matrix} defining the distribution of boundary states induced by the bulk spin network. This boundary density matrix can be interpreted as a bulk-to-boundary coarse-graining of the spin network state of quantum geometry.\n\nSection II is dedicated to the analysis of boundary density matrices induced by spin network states on fixed graphs and to a first study of their algebraic structure and properties. Our most important result is a universal bulk reconstruction procedure: starting from a gauge-invariant density matrix on the boundary Hilbert space, we show that one can always obtain it as the induced boundary density matrix of a spin network state on the bulk graph with a single vertex connected to all the boundary edges and to a single bulk loop. 
This can be understood as a purification result, since it shows how an arbitrary gauge-invariant mixed state on the boundary can be lifted to a pure bulk spin network state.\nWe then go on to investigate the finer structure of the induced boundary density matrices in terms of boundary vertices and bouquets of boundary edges.\n\nSection III finally presents explicit examples with the candy graphs, made of two vertices connected by bulk links, with four boundary edges and then with six boundary edges. This illustrates the various levels of mixed states one can obtain on the boundary in loop quantum gravity.\n\n\section{Spin Networks as Boundary Maps}\n\nFor globally hyperbolic four-dimensional space-times ${\mathcal M}=\Sigma\times{\mathbb R}$ with closed three-dimensional spatial slices $\Sigma$,\nloop quantum gravity (LQG) defines quantum states of geometry and describes their constrained evolution in time. A state of 3d geometry is defined by a closed oriented graph $\Gamma$ and a wave-function $\psi$ on it. This wave-function depends on one $\mathrm{SU}(2)$ group element for each edge $e$ of the graph, $g_{e}\in\mathrm{SU}(2)$, and is assumed to be invariant under the $\mathrm{SU}(2)$-action at each vertex $v$ of the graph:\n\begin{equation} \label{gauge transformation}\n\psi(\{g_{e}\}_{e\in\Gamma})\n=\n\langle\{g_{e}\}_{e\in\Gamma} | \psi\rangle\n=\n\psi(\{h_{t(e)}g_{e}h_{s(e)}^{-1}\}_{e\in\Gamma})\,\quad\n\forall h_{v}\in\mathrm{SU}(2)\,\n\end{equation}\nwhere $t(e)$ and $s(e)$ respectively refer to the target and source vertices of the edge $e$. 
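As a minimal illustration (our example, not drawn from the text): on the graph consisting of a single loop, i.e. a single edge $e$ with $s(e)=t(e)=v$, gauge invariance reduces to invariance under conjugation, so the gauge-invariant wave-functions are spanned by the traces of the holonomy in the spin-$j$ representation matrices $D^{j}$:

```latex
\begin{equation}
\psi_{j}(g_{e})
=
{\mathrm{Tr}}\, D^{j}(g_{e})
\,,\qquad
\psi_{j}(h_{v}\,g_{e}\,h_{v}^{-1})
=
{\mathrm{Tr}}\big[D^{j}(h_{v})\,D^{j}(g_{e})\,D^{j}(h_{v})^{-1}\big]
=
\psi_{j}(g_{e})
\,,
\end{equation}
```

by cyclicity of the trace; these are the familiar Wilson loop functionals.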
We write $E$ and $V$ respectively for the number of edges and vertices of the considered graph $\\Gamma$.\nThe scalar product between such wave-functions is given by the Haar measure on the Lie group $\\mathrm{SU}(2)$:\n\\begin{equation}\n\\langle \\psi|\\widetilde{\\psi}\\rangle\n=\\int_{\\mathrm{SU}(2)^{{\\times E}}}\\prod_{e}\\mathrm{d} g_{e}\\,\n\\overline{\\psi(\\{g_{e}\\}_{e\\in\\Gamma})}\\,\\widetilde{\\psi}(\\{g_{e}\\}_{e\\in\\Gamma})\n\\,.\n\\end{equation}\nThe Hilbert space of quantum states with support on the graph $\\Gamma$ is thus realized as a space of square-integrable functions, ${\\mathcal H}_{\\Gamma}=L^{2}(\\mathrm{SU}(2)^{{\\times E}}\/\\mathrm{SU}(2)^{{\\times V}})$.\n\nA basis of this Hilbert space can be constructed using the spin decomposition of $L^{2}$ functions on the Lie group $\\mathrm{SU}(2)$ according to the Peter-Weyl theorem. A {\\it spin} $j\\in\\frac\\N2$ defines an irreducible unitary representation of $\\mathrm{SU}(2)$, with the action of $\\mathrm{SU}(2)$ group elements realized on a $(2j+1)$-dimensional Hilbert space ${\\mathcal V}_{j}$. We use the standard orthonormal basis $|j,m\\rangle$, labeled by the spin $j$ and the magnetic index $m$ running by integer steps from $-j$ to $+j$, which diagonalizes the $\\mathrm{SU}(2)$ Casimir $\\vec{J}^{2}$ and the $\\u(1)$ generator $J_{z}$. 
Group elements $g$ are then represented by the $(2j+1)\times (2j+1)$ Wigner matrices $D^{j}(g)$:\n\begin{equation}\nD^{j}_{mm'}(g)=\langle j,m|g|j,m'\rangle\,,\qquad\n\overline{D^{j}_{mm'}(g)}\n=\nD^{j}_{m'm}(g^{-1})\n\,.\n\end{equation}\nThese Wigner matrices form an orthogonal basis of $L^{2}(\mathrm{SU}(2))$:\n\begin{equation}\n\int_{\mathrm{SU}(2)}\mathrm{d} g\,\overline{D^{j}_{ab}(g)}\,{D^{k}_{cd}(g)}\n=\n\int_{\mathrm{SU}(2)}\mathrm{d} g\,\overline{D^{j}_{ba}(g^{-1})}\,{D^{k}_{cd}(g)}\n=\n\frac{\delta_{jk}}{2j+1}\delta_{ac}\delta_{bd}\n\,,\qquad\n\delta(g)\n=\sum_{j\in\frac\N2}(2j+1)\chi_{j}(g)\n\,, \label{eq:Peter-Weyl}\n\end{equation}\nwhere $\chi_{j}$ is the spin-$j$ character defined as the trace of the Wigner matrix, $\chi_{j}(g)={\mathrm{Tr}} D^{j}(g)=\sum_{m}\langle j,m|g|j,m\rangle$.\nApplying this to gauge-invariant wave-functions allows one to build the {\it spin network} basis states of ${\mathcal H}_{\Gamma}$, which depend on one spin $j_{e}$ on each edge and one intertwiner $I_{v}$ at each vertex:\n\begin{equation}\n\Psi_{\{j_{e},I_{v}\}}(\{g_{e}\}_{e\in\Gamma})\n=\n\langle\{g_{e}\} | \{j_{e},I_{v}\}\rangle\n=\n\sum_{m_{e}^{t,s}}\n\prod_{e}\sqrt{2j_{e}+1}\,\langle j_{e}m_{e}^{t}|g_{e}|j_{e}m_{e}^{s}\rangle\n\,\prod_{v} \langle \bigotimes_{e|\,v=s(e)} j_{e}m_{e}^{s}|\,I_{v}\,|\bigotimes_{e|\,v=t(e)}j_{e}m_{e}^{t}\rangle\n\,.\n\end{equation}\n\begin{figure}[!htb]\n\t\centering\n\t\begin{tikzpicture} [scale=1.2]\n\coordinate (O) at (0,0);\n\n\node[scale=0.7] at (O) {$\bullet$} node[below] {$I_v$};\n\n\draw (O) -- node[midway,sloped]{$>$} ++ (1,1) node[right] {$j_1, m_1$};\n\n\draw (O) to[bend left=20] node[midway,sloped]{$<$} ++ (0.9,-0.9) node[right] {$j_2, m_2 $};\n\n\draw (O) to[bend left=20] node[midway,sloped]{$>$} ++ (0,1.5) node[above] {$j_5, m_5$};\n\n\draw (O) to[bend left=10] node[midway,sloped]{$<$} ++ (-1.2,-0.5) node[left] {$j_3, m_3$};\n\n\draw (O) to[bend 
left=10] node[midway,sloped]{$>$} ++ (-1.1,0.6) node[left] {$j_4, m_4$};\n\n\t\\end{tikzpicture}\n\t\\caption{A five-valent intertwiner $I_v$ at vertex $v$ is an $\\mathrm{SU}(2)$-invariant map from the tensor product of the incoming spins (living on the edges $e$ whose target is $v$) to the outgoing spins (living on the edges $e$ whose source is $v$); its matrix elements are $\\langle (j_{1},m_{1})(j_{3},m_{3})(j_{5},m_{5})|I_{v}|(j_{2},m_{2})(j_{4},m_{4})\\rangle$ in the standard spin basis labeled by the spin $j$ and the magnetic moment $m$.}\n\t\\label{fig:intertwiner}\n\\end{figure}\nAs illustrated on fig.\\ref{fig:intertwiner}, an {\\it intertwiner} is an $\\mathrm{SU}(2)$-invariant state -- or singlet -- living in the tensor product of the incoming and outgoing spins at the vertex $v$:\n\\begin{equation}\nI_{v}\\in\\textrm{Inv}_{\\mathrm{SU}(2)}\\Big{[}\n\\bigotimes_{e|\\,v=s(e)} V_{j_{e}}\n\\otimes\n\\bigotimes_{e|\\,v=t(e)} V_{j_{e}}^{*}\n\\Big{]}\n\\,.\n\\end{equation}\nThe scalar product between two spin network states based on the same graph $\\Gamma$ is then given by the product of the scalar products between their intertwiners:\n\\begin{equation}\n\\langle \\Psi_{\\{j_{e},I_{v}\\}}|\\Psi_{\\{\\tilde{j}_{e},\\tilde{I}_{v}\\}} \\rangle\n=\n\\langle {\\{j_{e},I_{v}\\}}| {\\{\\tilde{j}_{e},\\tilde{I}_{v}\\}}\\rangle\n=\n\\prod_{e}\\delta_{j_{e},\\tilde{j}_{e}}\n\\,\\prod_{v}\\langle I_{v}|\\tilde{I}_{v}\\rangle\n\\,.\n\\end{equation}\n\n\nLoop quantum gravity is formulated on the full Hilbert space of spin network states as a sum over all graphs $\\Gamma$ of the Hilbert spaces ${\\mathcal H}_{\\Gamma}$, defined as a projective limit taking into account in a consistent way the inclusion of subgraphs into larger graphs \\cite{Ashtekar:1994mh,Thiemann:2007zz}. 
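The Wigner-matrix relations above can be checked concretely for low spins. The following is a minimal numerical sketch (not part of the original construction): it builds a generic SU(2) element from axis-angle data, obtains the spin-1 Wigner matrix by restricting $g \otimes g$ to the symmetric subspace of two spin-1/2's, and verifies unitarity, the homomorphism property, the conjugation relation, and the character recoupling $\chi_{1/2}^2 = \chi_0 + \chi_1$.

```python
import numpy as np

def su2(theta, n):
    """A generic SU(2) element exp(-i theta n.sigma/2) from an angle and an axis."""
    n = np.asarray(n, dtype=float); n = n / np.linalg.norm(n)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    return np.cos(theta/2)*np.eye(2) - 1j*np.sin(theta/2)*(n[0]*sx + n[1]*sy + n[2]*sz)

# Isometry from the spin-1 space onto the symmetric subspace of (1/2) x (1/2):
# its columns are |1,1>, |1,0>, |1,-1> written in the two-spin tensor basis.
S = np.array([[1, 0,             0],
              [0, 1/np.sqrt(2),  0],
              [0, 1/np.sqrt(2),  0],
              [0, 0,             1]], dtype=complex)

def wigner_D1(g):
    """Spin-1 Wigner matrix D^1(g) as the restriction of g (x) g to the symmetric subspace."""
    return S.conj().T @ np.kron(g, g) @ S

g = su2(1.3, [1, 2, 3])
h = su2(0.7, [0, 1, 1])
D1g, D1h = wigner_D1(g), wigner_D1(h)

# D^1 is a (2j+1) = 3 dimensional unitary representation:
assert np.allclose(D1g.conj().T @ D1g, np.eye(3))
assert np.allclose(wigner_D1(g @ h), D1g @ D1h)
# conj(D^j_mm'(g)) = D^j_m'm(g^{-1}), as stated above:
assert np.allclose(D1g.conj(), wigner_D1(np.linalg.inv(g)).T)
# Character recoupling chi_{1/2}^2 = chi_0 + chi_1, i.e. 1/2 x 1/2 = 0 + 1:
assert np.isclose(np.trace(g)**2, 1 + np.trace(D1g))
```

The same construction extends to any spin $j$ by symmetrizing $2j$ copies of the fundamental, which is one way to see that the characters $\chi_j$ close under multiplication.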
Then we construct observables as operators either on fixed graphs or allowing transitions between graphs, and we define the dynamics through transition amplitudes, obtained either by suitably regularized Hamiltonian constraint operators \\cite{Thiemann:1996aw,Thiemann:1996av,Borissov:1997ji,Gaul:2000ba,Assanioussi:2015gka} or by spinfoam state-sum models inspired by topological field theory \\cite{Reisenberger:1996pu,Baez:1997zt,Livine:2010zx,Perez:2012wv}.\n\nIn the present work, we are interested in the generalization of the framework to spatial slices with boundaries. As we explain below, such a spatial boundary ${\\mathcal B}=\\partial\\Sigma$, often referred to as a {\\it corner} (between space and time) as illustrated on fig.\\ref{fig:corner}, is taken into account by extending the definition of spin networks to graphs with open edges.\n\n\n\\subsection{Corners, boundary states and maps}\n\nWe consider spin networks on a bounded spatial slice, which amounts to taking a bounded subset of a spin network state. As illustrated on fig.\\ref{fig:boundary}, this means considering a graph $\\Gamma$ with open edges $e\\in\\partial\\Gamma$ puncturing the boundary surface. 
We do not endow the boundary with extra structure, representing the 2d boundary intrinsic geometry as in \\cite{Freidel:2016bxd,Freidel:2018pvm,Freidel:2019ees} or locality on the boundary as in \\cite{Feller:2017ejs}, but discuss the minimal boundary structure.\n\\begin{figure}[htb]\n\t\\centering\n\t\\begin{tikzpicture} []\n\n\\coordinate (O1) at (0,0);\n\\coordinate (O2) at (2,0.7);\n\\coordinate (O3) at (4,0);\n\\coordinate (O4) at (2,-1);\n\n\\coordinate (P1) at (0.8,3);\n\\coordinate (P2) at (2.7,3.7);\n\\coordinate (P3) at (5.3,3);\n\\coordinate (P4) at (2.6,2);\n\n\\draw[dashed,color=blue] (O1) to[out=85,in=160] (O2);\n\\draw[dashed,color=blue] (O2) to[out=-20,in=95] (O3);\n\\draw[color=blue] (O3) to[out=-95,in=2] (O4);\n\\draw[color=blue] (O4) to[out=178,in=-80] (O1);\n\n\\draw[color=blue] (P1) to[out=90,in=160] (P2);\n\\draw[color=blue] (P2) to[out=-20,in=95] (P3);\n\\draw[color=blue] (P3) to[out=-90,in=2] (P4);\n\\draw[color=blue] (P4) to[out=178,in=-80] (P1);\n\n\\draw (O1) to[out=95,in=-100] (P1);\n\\draw (O3) to[out=80,in=-87] (P3);\n\n\\coordinate (A1) at (0.4,0);\n\\coordinate (A2) at (1.8,0.5);\n\\coordinate (A3) at (3.4,0.2);\n\\coordinate (A4) at (2.2,-0.6);\n\\coordinate (A5) at (1.2,-0.6);\n\n\\draw (A1) [color=red] -- ++ (-0.6,-1) node[below]{$e_{1}\\in\\partial\\Gamma$};\n\\draw (A2) [color=red] -- ++ (-0.6,0.8) node[above]{$e_{5}$};\n\\draw (A3) [color=red] -- ++ (0.6,0.8)node[above]{$e_{4}$};\n\\draw (A4) [color=red] -- ++ (0.2,-0.95)node[below]{$e_{3}$};\n\\draw (A5) [color=red] -- ++ (-0.3,-0.85)node[below]{$e_{2}$};\n\n\\draw (A1) [color=green] -- (A2);\n\\draw (A2) [color=green] -- (A3);\n\\draw (A3) [color=green] -- (A4);\n\\draw (A4) [color=green] -- (A5);\n\\draw (A5) [color=green] -- (A1);\n\\draw (A2) [color=green] -- (A4);\n\\draw (A2) [color=green] -- (A5);\n\n\\node[scale=0.7,color=red] at (A1) {$\\bullet$};\n\\node[scale=0.7,color=red] at (A2) {$\\bullet$};\n\\node[scale=0.7,color=red] at (A3) 
{$\\bullet$};\n\\node[scale=0.7,color=red] at (A4) {$\\bullet$};\n\\node[scale=0.7,color=red] at (A5) {$\\bullet$};\n\n\\coordinate (B1) at (1.5,3.2);\n\\coordinate (B2) at (2.6,3.5);\n\\coordinate (B3) at (4.4,3.3);\n\\coordinate (B4) at (3.2,2.4);\n\\coordinate (B5) at (2.2,2.4);\n\n\\draw (B1) [color=red] -- ++ (-0.9,0.8);\n\\draw (B2) [color=red] -- ++ (-0.3,0.8);\n\\draw (B3) [color=red] -- ++ (0.6,0.8);\n\\draw (B4) [color=red] -- ++ (0.2,-0.9);\n\\draw (B5) [color=red] -- ++ (-0.3,-0.9);\n\n\\node[scale=0.7,color=red] at (B1) {$\\bullet$};\n\\node[scale=0.7,color=red] at (B2) {$\\bullet$};\n\\node[scale=0.7,color=red] at (B3) {$\\bullet$};\n\\node[scale=0.7,color=red] at (B4) {$\\bullet$};\n\\node[scale=0.7,color=red] at (B5) {$\\bullet$};\n\n\\draw (B1) [color=green] -- (B2);\n\\draw (B2) [color=green] -- (B3);\n\\draw (B3) [color=green] -- (B4);\n\\draw (B4) [color=green] -- (B5);\n\\draw (B5) [color=green] -- (B1);\n\\draw (B2) [color=green] -- (B4);\n\\draw (B2) [color=green] -- (B5);\n\n\n\t\\end{tikzpicture}\n\t\\caption{On each spatial slice, the boundary states consist in the tensor product of spin states living on the boundary edges of the spin network: ${\\mathcal H}^{\\partial}_{\\Gamma}=\\bigotimes_{e\\in\\partial\\Gamma}\\bigoplus_{j_{e}}{\\mathcal V}_{j_{e}}$.}\n\t\\label{fig:boundary}\n\\end{figure}\n\n\nEach boundary edge $e\\in\\partial\\Gamma$ carries a spin $j_{e}$ and a vector in the corresponding representation $v_{e}\\in{\\mathcal V}_{j_{e}}$. 
This defines the boundary Hilbert space as:\n\\begin{equation} \\label{eq:boundary-state}\n{\\mathcal H}^{\\{j_{e}\\}_{e\\in\\partial\\Gamma}}_{\\Gamma}\n=\n\\bigotimes_{e\\in\\partial\\Gamma}{\\mathcal V}_{j_{e}}\n\\,.\n\\end{equation}\nOne does not need to fix the spins carried by the boundary edges and can consider the larger boundary Hilbert space:\n\\begin{equation}\n{\\mathcal H}^{\\partial}_{\\Gamma}\n=\\bigoplus_{\\{j_{e}\\}}{\\mathcal H}^{\\{j_{e}\\}_{e\\in\\partial\\Gamma}}_{\\Gamma}\n=\\bigotimes_{e\\in\\partial\\Gamma}{\\mathcal V}\n\\qquad\\textrm{with}\\quad\n{\\mathcal V}=\\bigoplus_{j}{\\mathcal V}_{j}\\,.\n\\end{equation}\nUsing the Schwinger realization of the ${\\mathfrak{su}}(2)$ Lie algebra in terms of a pair of quantum oscillators, the Hilbert space ${\\mathcal V}$ is the tensor product of two copies of the harmonic oscillator Hilbert space, which can be understood as (holomorphic) wave-functions of a spinor, i.e. a complex 2-vector \\cite{Freidel:2009ck,Borja:2010rc,Livine:2011gp,Dupuis:2012vp,Livine:2013zha,Alesci:2015yqa,Bianchi:2015fra,Bianchi:2016tmw}.\n\nCalling $\\overset{\\circ}{\\Gamma}=\\Gamma\\setminus\\partial\\Gamma$ the bulk or interior of the graph $\\Gamma$, a spin network wave-function on the graph $\\Gamma$ with boundary is still a function of group elements living on bulk edges $e\\in\\Gamma\\setminus\\partial\\Gamma$, but is no longer valued in the field ${\\mathbb C}$ but in the boundary Hilbert space ${\\mathcal H}^{\\partial}_{\\Gamma}$:\n\\begin{equation}\n\\psi(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})\\,\\in\\,{\\mathcal H}^{\\partial}_{\\Gamma}\n\\,.\n\\end{equation}\nThe scalar product between wave-functions is inherited from the inner product between boundary states:\n\\begin{equation} \\label{eq:Definition-InnerProduct}\n\\langle \\psi|\\widetilde{\\psi}\\rangle\n=\n\\int\n\\prod_{e\\in\\overset{\\circ}{\\Gamma}}\\mathrm{d} g_{e}\\,\n\\langle 
\\psi(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})|\\widetilde{\\psi}(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})\\rangle\n\\,,\n\\end{equation}\nwith the normalization of wave-functions reading as:\n\\begin{equation}\n\\langle \\psi|{\\psi}\\rangle\n=\n\\int\n\\prod_{e\\in\\overset{\\circ}{\\Gamma}}\\mathrm{d} g_{e}\\,\n\\langle \\psi(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})|{\\psi}(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})\\rangle\n=1\n\\,.\n\\end{equation}\n\nTo be more precise, the wave-function should actually be considered as a linear form on the boundary Hilbert space and thus lives in the dual Hilbert space, $\\psi(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})\\,\\in\\,({\\mathcal H}^{\\partial}_{\\Gamma})^{*}$.\nThis means that it defines a distribution on boundary states depending on the group elements, or holonomies, living in the bulk:\n\\begin{equation}\n\\forall \\Phi\\in {\\mathcal H}^{\\partial}_{\\Gamma}\n\\,,\\quad\n\\langle \\psi(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})\\,|\\,\\Phi\\rangle \\,\\,\\in{\\mathbb C}\n\\,.\n\\end{equation}\nIn simpler words, a spin network state is now a map on boundary states (or corner states), which we will loosely refer to as a boundary map.\n\n\\medskip\n\nThe statement of gauge invariance also has to take into account the boundary: the wave-function will be invariant with respect to bulk gauge transformations while it will be covariant under gauge transformations on the boundary.\nMore precisely, we distinguish bulk vertices $v\\in V^{o}$ that are not connected to any boundary edge and boundary vertices $v\\in V_{\\partial}$ that are attached to at least one boundary edge. 
The wave-function is assumed to be invariant under $\\mathrm{SU}(2)$ transformations acting at bulk vertices, while $\\mathrm{SU}(2)$ transformations acting at boundary vertices will act on the spin states dressing the boundary edges:\n\\begin{equation}\n|\\psi(\\{h_{t(e)}g_{e}h_{s(e)}^{-1}\\})\\rangle\n=\n\\left(\\bigotimes_{e\\in\\partial\\Gamma} h_{v(e)}^{\\epsilon_{e}^{v}}\\right)\n\\,|\\psi(\\{g_{e}\\})\\rangle\n\\,,\n\\end{equation}\nwhere $v(e)$ for $e\\in\\partial\\Gamma$ denotes the vertex to which the boundary edge is attached and $\\epsilon_{e}^{v}=1$ if the boundary edge is outgoing ($v(e)=s(e)$) while $\\epsilon_{e}^{v}=-1$ if the boundary edge is incoming ($v(e)=t(e)$).\n\n\\medskip\n\nThe definition of the spin network basis states can then be adapted to the case with boundary:\n\\begin{eqnarray} \\label{eq:spin-network-with-boundary}\n\\Psi_{\\{j_{e},I_{v}\\}}(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})\n&=&\n\\sum_{m_{e}^{t,s}}\n\\prod_{e\\in\\overset{\\circ}{\\Gamma}}\\sqrt{2j_{e}+1}\\,\\langle j_{e}m_{e}^{t}|g_{e}|j_{e}m_{e}^{s}\\rangle\n\\,\\prod_{v} \\langle \\bigotimes_{e\\in\\overset{\\circ}{\\Gamma}|\\,v=s(e)} j_{e}m_{e}^{s}\n|\\,I_{v}\\,|\n\\bigotimes_{e\\in\\overset{\\circ}{\\Gamma}|\\,v=t(e)} j_{e}m_{e}^{t}\\rangle\n\\\\\n&\\in&\n\\bigotimes_{\\substack{e\\in\\partial\\Gamma\\\\ t(e)\\in\\Gamma}} {\\mathcal V}_{j_{e}}^*\n\\,\\otimes\\,\n\\bigotimes_{\\substack{e\\in\\partial\\Gamma\\\\ s(e)\\in\\Gamma}} {\\mathcal V}_{j_{e}}\n\\nonumber\n\\,.\n\\end{eqnarray}\nWe sum over the magnetic indices $m$'s only for the bulk edges. 
The spin states on the boundary edges are not contracted, so that the wave-function $\\Psi_{\\{j_{e},I_{v}\\}}$ is valued in the boundary Hilbert space ${\\mathcal H}^\\partial_{\\Gamma}$.\nThis can be made more explicit by writing the wave-function $\\psi$ as a tensor by evaluating it on a basis of boundary states,\n\\begin{equation}\n\\psi^{\\{j_{e},m_{e}\\}_{e\\in\\partial\\Gamma}}\n=\n\\langle \\otimes_{e\\in\\partial\\Gamma}j_{e},m_{e}\\,|\\,\\psi\\rangle\n\\,.\n\\end{equation}\nAssuming that boundary edges are outgoing for the sake of simplicity, this gives for spin network basis states:\n\\begin{eqnarray}\n\\Psi_{\\{j_{e},I_{v}\\}}(\\{g_{e}\\})^{\\{j_{e},m_{e}^{s}\\}_{e\\in\\partial\\Gamma}}\n&=&\n\\langle \\otimes_{e\\in\\partial\\Gamma}j_{e},m_{e}^{s}\\,|\\,\\Psi_{\\{j_{e},I_{v}\\}}(\\{g_{e}\\})\\rangle\n\\\\\n&=&\n\\sum_{m_{e}^{t,s}}\n\\prod_{e\\in\\overset{\\circ}{\\Gamma}}\\sqrt{2j_{e}+1}\\,\\langle j_{e}m_{e}^{t}|g_{e}|j_{e}m_{e}^{s}\\rangle\n\\,\\prod_{v} \\langle \\bigotimes_{e\\in\\Gamma|\\,v=s(e)} j_{e}m_{e}^{s}\n|\\,I_{v}\\,|\n\\bigotimes_{e\\in\\overset{\\circ}{\\Gamma}|\\,v=t(e)} j_{e}m_{e}^{t}\\rangle\n\\,.\n\\nonumber\n\\end{eqnarray}\nThe scalar product between those wave-functions is given by the scalar products of the bulk intertwiners, as in the no-boundary case:\n\\begin{eqnarray}\n\\langle \\Psi_{\\{j_{e},I_{v}\\}}|\\Psi_{\\{\\tilde{j}_{e},\\tilde{I}_{v}\\}} \\rangle\n&=&\n\\int\\prod_{e\\in\\overset{\\circ}{\\Gamma}}\\mathrm{d} g_{e}\n\\sum_{\\{k_{e},m_{e}\\}}\n\\overline{\\Psi_{\\{j_{e},I_{v}\\}}(\\{g_{e}\\})^{\\{k_{e},m_{e}\\}_{e\\in\\partial\\Gamma}}}\n\\,\\Psi_{\\{\\tilde{j}_{e},\\tilde{I}_{v}\\}}(\\{g_{e}\\})^{\\{k_{e},m_{e}\\}_{e\\in\\partial\\Gamma}}\n\\nonumber\\\\\n&=&\n\\prod_{e}\\delta_{j_{e},\\tilde{j}_{e}}\n\\,\\prod_{v}\\langle I_{v}|\\tilde{I}_{v}\\rangle\n\\,.\n\\end{eqnarray}\n\n\\subsection{Bulk probability}\n\nNow that the bulk wave-function has been promoted to a map from bulk degrees of freedom to boundary states, in a logic following (Atiyah's axiomatization of) topological field theories, the corresponding 
probability distribution for the bulk fields is given by the boundary space scalar product instead of the mere squared modulus:\n\\begin{equation}\n\\psi(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})\\,\\in\\,{\\mathcal H}^{\\partial}_{\\Gamma}\n\\,,\n\\qquad\n{\\mathcal P}(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})=\n\\langle \\psi(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})|{\\psi}(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})\\rangle\n\\,.\n\\end{equation}\n\\begin{figure}[htb]\n\\centering\n\\begin{tikzpicture}[scale=0.7]\n\\coordinate (O) at (-5,0);\n \n\\path (O) ++(160:2) coordinate (O1);\n\\path (O) ++(120:2) coordinate (O2);\n\\path (O) ++(80:2) coordinate (O3);\n\\path (O) ++(40:2) coordinate (O4);\n\n\\draw[thick] (O1) to[bend right=30] (O4);\n\\draw[thick] (O1) to[bend right=30] (O3);\n\n\\draw[thick,red] (O1) -- ++(160:1) ++(160:0.35) node {$j_1$};\n\\draw[thick,red] (O2) -- ++(120:1) ++(120:0.35) node {$j_2$};\n\\draw[thick,red] (O3) -- ++(80:1) ++(80:0.35) node {$j_3$};\n\\draw[thick,red] (O4) -- ++(40:1) ++(40:0.35) node {$j_4$};\n\n \\draw [green,thick,domain=40:160] plot ({-5+2*cos(\\x)}, {2*sin(\\x)});\n\n \\draw [thick,domain=160:400] plot ({-5+2*cos(\\x)}, {2*sin(\\x)});\n \n \\draw[->,>=stealth,very thick] (-2,0) -- node [midway, above] {gluing its copy} (2,0);\n \n\\draw (O1) node[scale=0.7,red] {$\\bullet$};\n\\draw (O2) node[scale=0.7,red] {$\\bullet$};\n\\draw (O3) node[scale=0.7,red] {$\\bullet$};\n\\draw (O4) node[scale=0.7,red] {$\\bullet$};\n\n\\coordinate (A) at (5,0);\n\\coordinate (B) at (11,0);\n\\path (A) ++(60:2) coordinate (A1);\n\\path (A) ++(20:2) coordinate (A2);\n\\path (A) ++(-20:2) coordinate (A3);\n\\path (A) ++(-60:2) coordinate (A4);\n\n\\path (B) ++(120:2) coordinate (B1);\n\\path (B) ++(160:2) coordinate (B2);\n\\path (B) ++(200:2) coordinate (B3);\n\\path (B) ++(240:2) coordinate (B4);\n\n \\draw [green,thick,domain=120:240] plot ({11+2*cos(\\x)}, {2*sin(\\x)});\n \\draw [green,thick,domain=-60:60] plot 
({5+2*cos(\\x)}, {2*sin(\\x)});\n\n \\draw [thick,domain=240:480] plot ({11+2*cos(\\x)}, {2*sin(\\x)});\n \\draw [thick,domain=60:300] plot ({5+2*cos(\\x)}, {2*sin(\\x)});\n\n\\draw[red,thick] (A1) -- node[above,midway,scale=0.7] {$j_1$} (B1);\n\\draw[red,thick] (A2) -- node[above,midway,scale=0.7] {$j_2$} (B2);\n\\draw[red,thick] (A3) -- node[above,midway,scale=0.7] {$j_3$} (B3);\n\\draw[red,thick] (A4) -- node[above,midway,scale=0.7] {$j_4$} (B4);\n\n\n\\draw[thick] (B1) to[bend left=30] (B4);\n\\draw[thick] (B1) to[bend left=30] (B3);\n\\draw[thick] (A1) to[bend right=30] (A4);\n\\draw[thick] (A1) to[bend right=30] (A3);\n\n\\draw (A1) node[scale=0.7,red] {$\\bullet$};\n\\draw (A2) node[scale=0.7,red] {$\\bullet$};\n\\draw (A3) node[scale=0.7,red] {$\\bullet$};\n\\draw (A4) node[scale=0.7,red] {$\\bullet$};\n\n\\draw (B1) node[scale=0.7,red] {$\\bullet$};\n\\draw (B2) node[scale=0.7,red] {$\\bullet$};\n\\draw (B3) node[scale=0.7,red] {$\\bullet$};\n\\draw (B4) node[scale=0.7,red] {$\\bullet$};\n\n\\end{tikzpicture}\n\\caption{\nGluing the two copies of the spin network into the boundary density matrix: boundary edges (red lines) are glued together using the boundary space scalar product; for each copy, the maximal tree for the bulk gauge fixing consists of the green edges, while the remaining edges, in black, define the independent loops of the bulk graph.\n} \\label{fig:GluingBoundaryEdges}\n\\end{figure}\n\nAs illustrated on fig.\\ref{fig:GluingBoundaryEdges}, we are gluing two copies of the spin network with trivial holonomies along the open edges on the boundary.\nThis yields a totally gauge-invariant probability distribution, despite the gauge covariance of the wave-function under boundary gauge transformations:\n\\begin{equation}\n{\\mathcal P}(\\{g_{e}\\}_{e\\in\\Gamma})\n=\n{\\mathcal P}(\\{h_{t(e)}g_{e}h_{s(e)}^{-1}\\}_{e\\in\\Gamma})\\,,\\quad\n\\forall h_{v}\\in\\mathrm{SU}(2)^{V}\n\\,,\n\\end{equation}\nwith no difference between bulk and 
boundary vertices or edges.\n\nFollowing the earlier work on spin networks \\cite{Freidel:2002xb} and subsequent works \\cite{Livine:2006xk,Livine:2007sy,Livine:2008iq,Livine:2013gna, Charles:2016xwc, Anza:2016fix}, we can gauge-fix this gauge invariance down to a single $\\mathrm{SU}(2)$ action.\nTo this purpose, one chooses an arbitrary root vertex $v_{0}\\in\\Gamma$ and a maximal tree in the bulk graph $T\\subset\\Gamma^{o}$. A tree is a set of edges that never form any cycle (or loop). A maximal tree $T$ has $(V-1)$ edges. Furthermore, for any vertex $v\\in\\Gamma$, it defines a unique path of edges $P[v_{0}\\rightarrow v]\\subset T$ along the tree linking the root vertex $v_{0}$ to the vertex $v$. This allows us to gauge-fix all the group elements along tree edges to the identity, $g_{e\\in T}\\mapsto\\mathbb{I}$, by choosing gauge transformations $h_{v}$ at every vertex but the root vertex as:\n\\begin{equation}\nh_{v}=\\left(\\overleftarrow{\\prod_{\\ell\\in P[v_{0}\\rightarrow v]}} g_{\\ell}\\right)^{-1}\\,,\n\\end{equation}\nwhere the product of group elements is taken from right to left over $g_{\\ell}$ if the edge $\\ell$ is oriented in the same direction as the path $P[v_{0}\\rightarrow v]$ and over its inverse $g_{\\ell}^{-1}$ otherwise.\nThis maps all the group elements on tree edges to the identity, $h_{t(e)}g_{e}h_{s(e)}^{-1}=\\mathbb{I}$ for $e\\in T$. The remaining edges, which do not belong to the tree, actually correspond to a minimal generating set of loops (or cycles) on the bulk graph $\\Gamma^{o}$. Indeed, each non-tree edge defines a loop from the root vertex to the edge and back,\n\\begin{equation}\n{\\mathcal L}_{e\\notin T}:v_{0}\\underset{T}{\\rightarrow}s(e)\\underset{e}{\\rightarrow}t(e)\\underset{T}{\\rightarrow}v_{0}\n\\,.\\nonumber\n\\end{equation}\nThere are $L=E-V+1$ edges not belonging to $T$, defining $L$ such loops. 
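The tree gauge fixing and the loop counting $L=E-V+1$ can be illustrated on a small example. Below is a minimal sketch, assuming a hypothetical bulk graph on four vertices (the graph, the chosen tree and the helper names are illustrative, not taken from the text): it builds the gauge transformations $h_v$ along the tree from the root, checks that every tree edge is mapped to the identity, and counts the independent loops.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_su2():
    """A random SU(2) element from a normalized complex pair (a, b)."""
    v = rng.normal(size=4)
    v = v / np.linalg.norm(v)
    a, b = v[0] + 1j*v[1], v[2] + 1j*v[3]
    return np.array([[a, b], [-b.conj(), a.conj()]])

# Hypothetical bulk graph: vertices 0..3, oriented edges (source, target).
edges = [(0, 1), (0, 2), (2, 3),   # maximal tree T: V-1 = 3 edges
         (1, 2), (3, 1)]           # non-tree edges -> independent loops
tree = {0, 1, 2}                   # indices of the tree edges in `edges`
V, E = 4, len(edges)
g = {i: random_su2() for i in range(E)}

# Gauge transformations h_v built along the tree from the root v0 = 0,
# chosen so that every tree edge is mapped to the identity.
h = {0: np.eye(2, dtype=complex)}
while len(h) < V:
    for i in tree:
        s, t = edges[i]
        if s in h and t not in h:     # h_t g_e h_s^{-1} = 1  =>  h_t = h_s g_e^{-1}
            h[t] = h[s] @ np.linalg.inv(g[i])
        elif t in h and s not in h:   # reached against the edge orientation
            h[s] = h[t] @ g[i]

gauged = {i: h[edges[i][1]] @ g[i] @ np.linalg.inv(h[edges[i][0]]) for i in range(E)}

for i in tree:                     # tree edges are gauge-fixed to the identity
    assert np.allclose(gauged[i], np.eye(2))
L = E - V + 1                      # number of independent loops
assert L == len([i for i in range(E) if i not in tree]) == 2
```

The gauged non-tree edges carry the loop holonomies $G_e$; only those survive as arguments of the gauge-fixed probability distribution.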
One can show that every cycle on the bulk graph $\\Gamma^{o}$ can be generated from those cycles.\nFor $e\\notin T$, the gauge transformation built above does not map the group element $g_{e}$ to the identity anymore but maps it to the holonomy around the corresponding loop,\n\\begin{equation}\n\\forall e\\notin T\\,,\\qquad\nh_{t(e)}g_{e}h_{s(e)}^{-1}\n=\n\\overleftarrow{\\prod_{\\ell\\in {\\mathcal L}_{e}}} g_{\\ell}\n\\equiv\nG_{e}\n\\,.\\nonumber\n\\end{equation}\nAs a consequence, the bulk probability distribution depends only on those $L$ group elements:\n\\begin{equation}\n{\\mathcal P}(\\{g_{e}\\}_{e\\in\\Gamma})\n=\n{\\mathcal P}(\\{G_{e}\\}_{e\\notin T},\\{\\mathbb{I}\\}_{e\\in T})\n\\equiv\n{\\mathcal P}_{GF}(G_{1},..,G_{L})\n\\,.\n\\end{equation}\nPutting aside the gauge-fixed group elements living on the tree edges and focusing on the non-trivial loop holonomies, this gauge-fixed bulk probability ${\\mathcal P}_{GF}$ is still invariant under gauge transformations at the root vertex $v_{0}$:\n\\begin{equation}\n{\\mathcal P}_{GF}(G_{1},..,G_{L})=\n{\\mathcal P}_{GF}(h \\, G_1 \\, h^{-1},\\cdots,h \\, G_L \\, h^{-1})\n\\,, \\quad \\forall \\, h \\in \\mathrm{SU}(2)\n\\,.\n\\end{equation}\nThis directly implies two simple results:\n\\begin{prop} \\label{theorem:ExtremalPoint}\nThe configuration $G_1=\\cdots=G_L=\\mathbb{I}$, representing a flat $\\mathrm{SU}(2)$ connection, is always a stationary point for the bulk probability function ${\\mathcal P}(\\{g_{e}\\})=\\langle {\\psi}(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}}) | {\\psi}(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})\\rangle$.\n\\end{prop}\n\\begin{prop} \\label{coro:Norm_tree-v.s.-loop}\nIf the bulk graph $\\overset{\\circ}{\\Gamma}$ is a tree, i.e. 
does not contain any loop, then the bulk probability function ${\\mathcal P}(\\{g_{e}\\})$ is constant and does not depend on the bulk holonomies $g_{e}$.\n\\end{prop}\n\n\n\n\n\\subsection{Spin network maps as quantum circuits}\n\nWe would like to build on the interpretation of spin network wave-functions as valued in the space of linear forms on the boundary Hilbert space, or boundary maps. This can be translated operationally as spin networks defining quantum circuits on the boundary data.\n\nLet us fix the spins on the boundary edges and distinguish their orientation. Then a spin network wave-function for the bulk graph defines a family of maps, from the spins on the incoming boundary edges to the spins on the outgoing boundary edges, labeled by the holonomies living on the bulk links:\n\\begin{equation}\n\\psi(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})\n\\,:\\,\\,\n\\bigotimes_{\\substack{e\\in\\partial\\Gamma\\\\ t(e)\\in\\Gamma}} {\\mathcal V}_{j_{e}}\n\\longrightarrow\n\\bigotimes_{\\substack{e\\in\\partial\\Gamma\\\\ s(e)\\in\\Gamma}} {\\mathcal V}_{j_{e}}\n\\,.\n\\end{equation}\nOf course, we could unfix the boundary spins and more generally attach the larger Hilbert space ${\\mathcal V}=\\bigoplus_{j}{\\mathcal V}_{j}$ to each boundary edge.\nAs illustrated on fig.\\ref{fig:spinnetcircuit}, the spin network graph, with its link and node structure, already carries the natural structure of a circuit. 
The holonomies, or $\\mathrm{SU}(2)$ group elements, on the graph links are interpreted as unitary one-spin gates, while the intertwiners, or $\\mathrm{SU}(2)$-invariant maps, naturally define multi-spin gates.\n\\begin{figure}[htb]\n\t\\centering\n\\begin{tikzpicture} []\n\\draw[thick,decorate,decoration={brace},xshift=-4pt,yshift=0pt]\n(0,0) -- (0,1) -- (0,2) node [black,yshift=-1cm,xshift=-1.35cm] {Incoming edges};\n \\draw[thick] (0,1)--node[midway,above]{\\footnotesize Spin $| j_2, m_2 \\rangle$}(2,1) node[midway,sloped]{$>$};\n \\draw[thick] (0,2)--node[midway,above]{\\footnotesize Spin $| j_1, m_1 \\rangle$}(2,2) node[midway,sloped]{$>$};\n \\draw[thick] (0,0)--node[left,above]{\\footnotesize Spin $| j_3, m_3 \\rangle$} node[midway,sloped]{$>$} (2,0);\n \\draw[thick] (3,0) -- (4.5,0) node[midway,sloped]{$>$};\n \\draw[thick] (2,-0.5) rectangle (3,2.5) node [pos=.5]{${\\mathcal I}_A$};\n \\draw[thick] (4.5,-0.5) rectangle (5.5,1.5) node [pos=.5]{${\\mathcal I}_B$};\n \\draw[thick] (7,0.5) rectangle (8,2.5) node [pos=.5]{${\\mathcal I}_C$};\n \\draw[thick] (4.5,1)--(4,1);\n \\draw (3.75,1) circle (0.25) node {$g_2$};\n \\draw[thick] (3.5,1)--(3,1)node[midway,sloped]{$<$};\n \\draw[thick] (6.5,1)--(7,1) node[midway,sloped]{$>$};\n \\draw (6.25,1) circle (0.25) node {$g_3$};\n \\draw[thick] (5.5,1)--(6,1);\n \\draw[thick] (3,0)--(4.5,0);\n \\draw[thick] (5.5,0)--(8.5,0)node[midway,sloped]{$>$};\n \\draw[thick] (3,2)--(4.75,2);\n \\draw (5,2) circle (0.25) node {$g_1$};\n \\draw[thick] (5.25,2)--(7,2)node[midway,sloped]{$>$};\n \\draw[thick] (8,2)--(8.5,2)node[midway,sloped]{$>$};\n \n \n \n \\draw[thick,decorate,decoration={brace,mirror},xshift=4pt,yshift=0pt]\n (8.5,0) -- (8.5,2) node [black,midway,xshift=1.35cm] {Outgoing edges};\n\\end{tikzpicture}\n\t\\caption{Spin network as a quantum circuit: holonomies become unitary one-spin gates while intertwiners are multi-spin gates; the circuit can contain 
loops.}\n\t\\label{fig:spinnetcircuit}\n\\end{figure}\n\nThe spin network state is not a process in itself. There are two important points to keep in mind. First, a spin network is a spatial construct, and not directly a space-time structure. A spin network is not a (quantum) causal history (see e.g. \\cite{Markopoulou:1999cz,Hawkins:2003vc} for a presentation and discussion of quantum causal histories). The maps that it defines between the boundary spins are thus possible processes that might occur if the spin network state itself (i.e. the quantum state of 3D geometry) does not evolve. In that sense, it is truly a circuit, to which we have not yet sent an input and on which we can still adjust some parameters. Indeed, the second important remark is that the holonomies are not fixed. The spin network defines a whole family of boundary maps, which vary in the individual one-spin gates defined by the holonomies $\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}}$ along the bulk edges. From the point of view of the boundary, these holonomies are not fixed: they should either be averaged over or some other criterion should be found to determine them. For instance, the holonomies, or more precisely their quantum probability distribution, should ultimately be determined by the dynamics of quantum gravity.\nNevertheless, even without exploring the issue of defining the dynamics of loop quantum gravity, either by a Hamiltonian constraint operator or by spinfoam transition amplitudes, this quantum circuit perspective allows us to formulate interesting questions:\n\\begin{itemize}\n\n\\item Working with a given spin network state, with a fixed graph, fixed spins and intertwiners, can we characterize the resulting subset of boundary maps induced by allowing for arbitrary holonomies along the edges? Or vice-versa, how much does a boundary state (for both incoming and outgoing boundary spins) fix the holonomies in the bulk? 
Could this be used to formulate a holographic principle for loop quantum gravity?\n\n\\item Going further, looking at the spin network state as a black box, with access solely to the boundary spins, if we know the subset of boundary maps that it defines, how much of the bulk graph and intertwiners can we re-construct? Could one think of the diffeomorphism constraints of loop quantum gravity as identifying spin network states which lead to the same set of boundary maps? This would be a holographic implementation of the dynamics through bulk-to-boundary coarse-graining, along the lines of \\cite{Livine:2017xww}.\n\n\\item The issue of defining the dynamics or the coarse-graining of the theory is actually equivalent to the problem of defining a physical inner product or a flow of inner products from the microscopic theory to a coarse-grained macroscopic theory. The quantum circuit perspective offers a possible approach. The microscopic inner product between quantum circuits is defined as the loop quantum gravity kinematical inner product, reflecting the scalar product between intertwiners, i.e. the basic multi-spin gates. As we coarse-grain or sparsify the quantum circuit (while possibly not affecting the boundary maps), we reduce the bulk structure of the circuit by encompassing subsets of holonomies and intertwiners into single larger multi-spin gates, thus leading to a scalar product between those multi-spin gates. The ultimate stage is the fully coarse-grained state, directly provided with the inner product between boundary maps. 
Studying this in more detail would reveal the coarse-graining flow of spin network states in loop quantum gravity.\n\n\\end{itemize}\n\nAlthough these topics are very likely essential to the understanding of the renormalization flow, holographic behavior and semi-classical regime of loop quantum gravity, they are broad questions out of the scope of the present work and are postponed to future investigation.\n\n\n\\section{Boundary Density Matrix}\n\n\n\n\\subsection{Bulk state to Boundary density matrix}\n\nWe would like to shift the focus from the bulk to the boundary and investigate in more detail the boundary state induced by the bulk spin network state, defined as the density matrix obtained by integrating over the group elements, or in other words, taking the partial trace over bulk holonomies:\n\\begin{equation} \\label{eq:Coarse-graining}\n\\rho_{\\partial\\Gamma}[\\psi]=\n\\int\n\\prod_{e\\in\\overset{\\circ}{\\Gamma}}\\mathrm{d} g_{e}\\,\n|{\\psi}(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})\\rangle\\langle \\psi(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})|\n\\,\\in\n\\textrm{End}({\\mathcal H}^{\\partial}_{\\Gamma})\n\\,,\n\\end{equation}\n\\begin{equation}\n{\\mathrm{Tr}}\\,\n\\rho_{\\partial\\Gamma}[\\psi]\n=\n\\int[\\mathrm{d} g_{e}]\\,\n\\langle \\psi(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})|{\\psi}(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})\\rangle\n=\n\\int[\\mathrm{d} g_{e}]\\,\n{\\mathcal P}(\\{g_{e}\\}_{e\\in\\overset{\\circ}{\\Gamma}})\n\\,.\\nonumber\n\\end{equation}\nThis mixed state on the boundary can be considered as a coarse-graining of the bulk spin network state \\cite{Livine:2006xk,Bianchi:2013toa}.\nThe goal of this paper is to compare the data encoded in the bulk wave-function $\\psi_{\\Gamma}$ and in the induced boundary density matrix $\\rho_{\\partial\\Gamma}$.\n\\begin{figure}[htb!]\n\t\\centering\n\t\\begin{tikzpicture} []\n\n\\coordinate(O1) at (1,3);\n\\coordinate(O2) at (2.4,3.3);\n\\coordinate(O3) at 
(2.7,3);\n\\coordinate(O4) at (2.3,2.7);\n\\coordinate(O5) at (2,3.2);\n\\draw (O1) -- ++(-1,0.4);\n\\draw (O1) -- ++(-1,-0.4);\n\\draw (O2) -- ++(1,0.2);\n\\draw (O3) -- ++(1,0);\n\\draw (O4) -- ++(0.5,-0.3);\n\n\\draw (O1) to[bend left] (O5);\n\\draw (O1) to[bend right] (O5);\n\\draw (O2) to (O5);\n\\draw (O2) to (O3);\n\\draw (O3) to (O4);\n\\draw (O4) to (O5);\n\n\\node[scale=0.75] at (O1) {$\\bullet$};\n\\node[scale=0.75] at (O2) {$\\bullet$};\n\\node[scale=0.75] at (O3) {$\\bullet$};\n\\node[scale=0.75] at (O4) {$\\bullet$};\n\\node[scale=0.75] at (O5) {$\\bullet$};\n\n\\coordinate(P1) at (1,1);\n\\coordinate(P2) at (2.4,1.3);\n\\coordinate(P3) at (2.7,1);\n\\coordinate(P4) at (2.3,0.7);\n\\coordinate(P5) at (2,1.2);\n\\draw (P1) -- ++(-1,0.4);\n\\draw (P1) -- ++(-1,-0.4);\n\\draw (P2) -- ++(1,0.2);\n\\draw (P3) -- ++(1,0);\n\\draw (P4) -- ++(0.5,-0.3);\n\n\\draw (P1) to[bend left] (P5);\n\\draw (P1) to[bend right] (P5);\n\\draw (P2) to (P5);\n\\draw (P2) to (P3);\n\\draw (P3) to (P4);\n\\draw (P4) to (P5);\n\n\\node[scale=0.75] at (P1) {$\\bullet$};\n\\node[scale=0.75] at (P2) {$\\bullet$};\n\\node[scale=0.75] at (P3) {$\\bullet$};\n\\node[scale=0.75] at (P4) {$\\bullet$};\n\\node[scale=0.75] at (P5) {$\\bullet$};\n\n\\draw[->,>=stealth,very thick] (4,2) -- node [midway, above] {$\\displaystyle{ \\int \\prod_{ e\\in \\overset{\\circ}{\\Gamma} } \\mathrm{d} g_e }$} (6,2);\n\n\\coordinate(A1) at (8,3);\n\\coordinate(A2) at (8.1,3);\n\\coordinate(A3) at (9.5,3);\n\\coordinate(A4) at (9.4,3);\n\\draw (A1) -- ++(-0.7,0.4);\n\\draw (A1) -- ++(-0.7,-0.4);\n\\draw (A3) -- ++(0.7,0);\n\\draw (A3) -- ++(0.7,0.3);\n\\draw (A3) -- ++(0.5,-0.4);\n\n\\draw (A1) to (A2);\n\\draw (A3) to (A4);\n\\draw[dashed] (A2) to[bend left] (A4);\n\n\\coordinate(B1) at (8,1);\n\\coordinate(B2) at (8.1,1);\n\\coordinate(B3) at (9.5,1);\n\\coordinate(B4) at (9.4,1);\n\\draw (B1) -- ++(-0.7,0.4);\n\\draw (B1) -- ++(-0.7,-0.4);\n\\draw (B3) -- ++(0.7,0);\n\\draw (B3) -- ++(0.7,0.3);\n\\draw 
(B3) -- ++(0.5,-0.4);\n\n\draw (B1) to (B2);\n\draw (B3) to (B4);\n\draw[dashed] (B2) to[bend right] (B4);\n\n\draw[color=red] (A2) to[bend left] (B2);\n\draw[color=red] (A4) to[bend right] (B4);\n\n\node[scale=0.75] at (A2) {$\bullet$};\n\node[scale=0.75] at (A4) {$\bullet$};\n\node[scale=0.75] at (B2) {$\bullet$};\n\node[scale=0.75] at (B4) {$\bullet$};\n\n\n\t\end{tikzpicture}\n\t\caption{Boundary density matrix for spin network basis states. The two copies of the spin networks are the bra $\langle \psi |$ and ket $| \psi \rangle$ which are glued together by the Haar integration over the bulk holonomies $\int \prod \mathrm{d} g_{ e\in \overset{\circ}{\Gamma} }$. }\n\t\label{fig:densitymatrix}\n\end{figure}\n\n\smallskip\n\nLet us start by looking at normalized pure spin network basis states, i.e. with fixed spins $j_{e}$ and fixed normalized intertwiners $I_{v}$. They are factorized states in the sense that the intertwiners are decoupled so that there is no intertwiner entanglement as discussed in \cite{Livine:2017fgq}. As a result, the boundary state only depends on the intertwiners living on the boundary vertices (i.e. the vertices with at least one boundary edge) and not on the bulk intertwiners.\nLet us insist that ``boundary vertices'' are still in the bulk; the adjective ``boundary'' refers to the fact that they are connected to boundary edges.\nIndeed, the orthonormality of the Wigner matrices implies that each bulk edge is cut in half and both half-edges are glued with their counterparts on the second copy of the wave-function, as illustrated on fig.\ref{fig:densitymatrix}. 
We get the norm of every bulk intertwiner, normalized to 1, times the contribution from boundary intertwiners, which gives the boundary density matrix:\n\begin{eqnarray}\n\langle \{\tilde{k}_{e},\tilde{m}_e\}\,|\n\rho_{\partial\Gamma}[\Psi_{\{j_{e},I_{v}\}}]\n| \{k_{e},m_e\}\rangle\n=&\prod_{e}\delta_{k_{e},j_{e}}\delta_{\tilde{k}_{e},j_{e}}\n\prod_{v\in\partial\Gamma}&\n\langle\n\bigotimes_{\substack{e\in\partial\Gamma\\ v\in e}} j_{e}\tilde{m}_{e}\n\otimes\n\bigotimes_{\substack{e\in\overset{\circ}{\Gamma}\\ v=s(e)}} j_{e}m_{e}^{s}\n|\,I_{v}\,|\n\bigotimes_{\substack{e\in\overset{\circ}{\Gamma}\\ v=t(e)}} j_{e}m_{e}^{t}\n\rangle \nonumber\\\n&&\overline{\n\langle\n\bigotimes_{\substack{e\in\partial\Gamma\\ v\in e}} j_{e}m_{e}\n\otimes\n\bigotimes_{\substack{e\in\overset{\circ}{\Gamma}\\ v=s(e)}} j_{e}m_{e}^{s}\n|\,I_{v}\,|\n\bigotimes_{\substack{e\in\overset{\circ}{\Gamma}\\ v=t(e)}} j_{e}m_{e}^{t}\n\rangle\n}\n\,.\n\end{eqnarray}\nAssuming that each boundary edge is attached to a different vertex, i.e. each boundary vertex connects to a single boundary edge, this expression simplifies tremendously. Indeed, as illustrated on fig. \ref{fig:boundaryvertex}, the self-gluing of an intertwiner leads to the identity matrix on the open edge. As a consequence, the density matrix is the totally mixed state with fixed spin on each boundary edge:\n\begin{equation}\n\rho_{\partial\Gamma}[\Psi_{\{j_{e},I_{v}\}}]\n=\n\bigotimes_{e\in\partial\Gamma} \frac{\mathbb{I}_{j_{e}}}{(2j_{e}+1)}\n\,.\n\end{equation} \nThis boundary density matrix, for a spin network basis state, clearly does not allow one to see the bulk structure!\n\nIn the slightly more general case of boundary vertices connected to several boundary edges, the boundary density matrix reflects the first layer of the bulk and ``sees'' the total recoupled spin of the boundary edges attached to each boundary vertex. 
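The totally mixed form of the boundary density matrix above follows from Haar averaging over the bulk holonomies. As an illustrative numerical sanity check (a sketch added here, not part of the original text, using numpy), one can verify in the smallest case, a single spin-1/2 boundary edge, that Haar averaging $D^{j}(g)\,|\psi\rangle\langle\psi|\,D^{j}(g)^{\dagger}$ over $\mathrm{SU}(2)$ yields $\mathbb{I}/(2j+1)$:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_su2(n):
    # Haar-uniform SU(2) matrices via uniformly sampled unit quaternions
    q = rng.normal(size=(n, 4))
    q /= np.linalg.norm(q, axis=1, keepdims=True)
    a, b, c, d = q.T
    U = np.empty((n, 2, 2), dtype=complex)
    U[:, 0, 0] = a + 1j * b
    U[:, 0, 1] = c + 1j * d
    U[:, 1, 0] = -c + 1j * d
    U[:, 1, 1] = a - 1j * b
    return U

psi = np.array([1.0, 0.0], dtype=complex)      # an arbitrary pure spin-1/2 state
rho_pure = np.outer(psi, psi.conj())

# Monte Carlo Haar average of U rho U^dagger; by Schur's lemma it tends to I/2
U = random_su2(200_000)
rho_avg = np.einsum('nij,jk,nlk->il', U, rho_pure, U.conj()) / len(U)
print(np.round(rho_avg, 2))   # approaches the totally mixed state I/2
```

The same averaging argument, edge by edge, is what produces the product of totally mixed factors $\mathbb{I}_{j_e}/(2j_e+1)$ in the equation above.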
We will analyze this case in more detail in section~\ref{sec:manyedges}.\n\n\begin{figure}[htb]\n\vspace*{5mm}\n\begin{subfigure}{0.4\linewidth}\n\t\begin{tikzpicture}\n\coordinate (O1) at (-0.8,0);\n\coordinate (O2) at (0.45,0);\n\coordinate (O3) at (1.2,0.75);\n\coordinate (O4) at (1.2,0);\n\coordinate (O5) at (1.2,-0.75);\n\draw[thick] (O1) -- node[midway] {$>$} node[midway,above=2.3] {$e\in\partial\Gamma$} node[midway,below=2.3] {$j_e,m_e$} (O2);\n\draw (O2) node[scale=0.7] {$\bullet$};\n\draw[thick,in=+180,out=+90,scale=3,rotate=0] (O2) to (O3);\n\draw[thick] (O2) -- (O4);\n\draw[thick,in=+180,out=-90,scale=3,rotate=0] (O2) to (O5);\n\n\coordinate (O6) at (2.25,0.75);\n\coordinate (O7) at (2.25,0);\n\coordinate (O8) at (2.25,-0.75);\n\coordinate (O9) at (3,0);\n\coordinate (O10) at (4.25,0);\n\draw[thick] (O10) -- node[midway] {$<$}\nnode[midway,below=2.3] {$j_e,\tilde{m}_e$} (O9);\n\draw (O2) node[scale=0.7] {$\bullet$};\n\draw (O9) node[scale=0.7] {$\bullet$};\n\draw[thick,in=+90,out=+0,scale=3,rotate=0] (O6) to (O9);\n\draw[thick] (O7) -- (O9);\n\draw[thick,in=-90,out=+0,scale=3,rotate=0] (O8) to (O9);\n\n\draw[dashed] (O3) -- (O6);\n\draw[dashed] (O4) -- (O7);\n\draw[dashed] (O5) -- (O8);\n\n\node at (-1,-1.5) {$=$};\n\n\coordinate (O11) at (-0.5,-1.5);\n\n\draw[thick] (O11) -- node[midway] {$>$} node[midway,below=2.3] {$j_e,m_e$} ++ (1.25,0) node[scale=0.7] {$\bullet$} -- node[midway] {$<$} node[midway,below=2.3] {$j_e,\tilde{m}_e$} ++ (1.25,0) ;\n\n\t\end{tikzpicture}\n\end{subfigure}\n\hspace{1cm}\n\begin{subfigure}[h]{0.4\linewidth}\n\begin{tikzpicture}\n\coordinate (A1) at (0,0);\n\coordinate (A2) at (0.75,0.5);\n\coordinate (A3) at (0.75,0);\n\coordinate (A4) at (0.75,-0.5);\n\n\draw[thick,in=+180,out=+60,scale=3,rotate=0] (A1) to (A2);\n\draw[thick] (A1) -- (A3);\n\draw[thick,in=+180,out=-60,scale=3,rotate=0] (A1) to (A4);\n\n\draw[thick] (A1) -- ++ (135:1);\n\draw[thick] (A1) -- ++ 
(165:1);\n\draw[thick] (A1) -- ++ (195:1);\n\draw[thick] (A1) -- ++ (225:1);\n\n\coordinate (B1) at (2.55,0);\n\coordinate (B2) at (1.8,0.5);\n\coordinate (B3) at (1.8,0);\n\coordinate (B4) at (1.8,-0.5);\n\n\draw[thick] (B1) -- ++ (45:1);\n\draw[thick] (B1) -- ++ (15:1);\n\draw[thick] (B1) -- ++ (-15:1);\n\draw[thick] (B1) -- ++ (-45:1);\n\n\draw[thick,in=0,out=120,scale=3,rotate=0] (B1) to (B2);\n\draw[thick] (B1) -- (B3);\n\draw[thick,in=0,out=240,scale=3,rotate=0] (B1) to (B4);\n\n\draw[dashed] (A2) -- (B2);\n\draw[dashed] (A3) -- (B3);\n\draw[dashed] (A4) -- (B4);\n\n\draw (A1) node[scale=0.7] {$\bullet$} node[above=5] {$v$};\n\draw (B1) node[scale=0.7] {$\bullet$} node[above=5] {$v$};\n\n\node at (-0.8,-2) {$ \propto \; \displaystyle{ \sum_{J} \, C_{I_0}[J] }$};\n\n\coordinate (C) at (-0.5,-2);\n\coordinate (D) at (1.5,-2);\n\coordinate (E) at (2.5,-2);\n\n\draw[thick] (D) -- node[midway,above] {$J$} (E) ;\n\n\draw[thick] (D) -- ++ (135:1);\n\draw[thick] (D) -- ++ (165:1);\n\draw[thick] (D) -- ++ (195:1);\n\draw[thick] (D) -- ++ (225:1);\n\n\draw[thick] (E) -- ++ (45:1);\n\draw[thick] (E) -- ++ (15:1);\n\draw[thick] (E) -- ++ (-15:1);\n\draw[thick] (E) -- ++ (-45:1);\n\n\draw (D) node[scale=0.7] {$\bullet$};\n\draw (E) node[scale=0.7] {$\bullet$};\n\n\n\end{tikzpicture}\n\end{subfigure}\n\n\t\caption{\n\tBoundary vertex contribution to the boundary density matrix from the self-gluing of intertwiners: single boundary edge vs many boundary edges.}\n\t\label{fig:boundaryvertex}\n\end{figure}\n\n\nSpin network basis states are actually very peculiar and are a very special case for the bulk quantum geometry. They are eigenstates for geometrical observables, such as areas and volumes, but they are not coherent states with minimal spread on both connection and triad (i.e. on parallel transport and metric) and they do not commute with the Hamiltonian constraints. 
More generally, physically relevant states will be superpositions of such spin network basis states, thus superpositions of spins and intertwiners, leading to correlations and entanglement between bulk vertices, in which case the boundary density matrix will become non-trivial.\nBefore analyzing the structure of the boundary density matrix in more detail, let us underline the two main features of the boundary state as compared to the bulk state:\n\begin{itemize}\n\n\item The boundary state $\rho_{\partial\Gamma}$ is typically mixed even if the bulk spin network state is pure \cite{Livine:2006xk,Bianchi:2013toa}.\nThus a coarse-graining procedure trading the bulk states for the boundary state irremediably creates entropy. In particular, endowing the bulk states with a unitary dynamics would naturally lead to a decoherence process (and possibly re-coherence) for the boundary states \cite{Feller:2016zuk,Feller:2017ejs}.\n\n\item The boundary state $\rho_{\partial\Gamma}$ does not decompose onto intertwiners between the boundary spins, even though the bulk spin network is made out of individual intertwiners, as pointed out in \cite{Livine:2006xk,Livine:2013gna,Livine:2017xww}.\nIndeed, the density matrix is invariant under the action by conjugation of the $\mathrm{SU}(2)$ group,\n\begin{equation}\n\forall h\in\mathrm{SU}(2)\,,\quad\n\langle \{\tilde{k}_{e},\tilde{m}_e\}\,|\n\, h^{-1}\rho_{\partial\Gamma}[\psi]\nh\,| \,\{k_{e},m_e\}\rangle\n=\n\langle \{\tilde{k}_{e},\tilde{m}_e\}\,|\n\rho_{\partial\Gamma}[\psi]\n| \{k_{e},m_e\}\rangle\n\end{equation}\nwhere the $\mathrm{SU}(2)$ transformation acts simultaneously on all the boundary edges. It is however not invariant under gauge transformations acting on the left or on the right, $\rho_{\partial\Gamma}\mapsto h^{-1}\rho_{\partial\Gamma}$ or $\rho_{\partial\Gamma}\mapsto \rho_{\partial\Gamma}h$.\nThis means that the total spin of the boundary state does not vanish. 
In fact, the boundary state defines an intertwiner between the two copies of the wave-function, the bra $\langle \psi(\{g_{e}\}_{e\in\overset{\circ}{\Gamma}})|$ and the ket $|{\psi}(\{g_{e}\}_{e\in\overset{\circ}{\Gamma}})\rangle$, as illustrated on fig.\ref{fig:densitymatrix}. The recoupled spin $J$ between the boundary edges defines the overall channel between the bra and the ket. This total spin of the boundary state is called the {\it closure defect} (since the $\mathrm{SU}(2)$ gauge invariance is enforced by the closure constraint, which is a discretization of the Gauss law of the first order formulation of general relativity) \cite{Livine:2013gna,Livine:2019cvi}. The $J=0$ component is the component with vanishing total boundary spin, i.e. the intertwiner component in the usual jargon. It represents the ``closed'' or flat component, while the components with $J\ne 0$ can be interpreted as bulk curvature.\nFrom the viewpoint of coarse-graining, it reflects that curvature builds up when gluing flat blocks (the intertwiners) together \cite{Livine:2013gna}. 
The gauge symmetry breaking at the boundary, due to allowing $J\ne 0$, can also be understood as responsible for the entropy of isolated horizons (and thus black holes) in the loop quantum gravity framework \cite{Donnelly:2008vx,Donnelly:2011hn,Livine:2017fgq} (see \cite{Donnelly:2014gva,Donnelly:2016auv} for a more general discussion of gauge symmetry and symmetry breaking on the boundary of gauge field theories).\nAt the end of the day, the closure defect, or total spin, provides a very useful basis to study the structure of boundary states and of induced boundary density matrices.\n\n\end{itemize}\n\n\subsection{The closure defect basis and $\mathrm{SU}(2)$-invariance of the boundary density matrix}\n\nWe would like to introduce the closure defect basis for boundary states, which amounts to decomposing them according to the total boundary spin.\nAssuming that the boundary edges are all incoming (or all outgoing) to simplify the orientation conventions, we recouple all the boundary spins $j_{e}$ into their total spin $J$:\n\begin{equation}\n{\mathcal H}^{\partial}_{\Gamma}\n=\n\bigoplus_{ \{j_e\}_{e\in\partial\Gamma} }\bigotimes_{e} {\mathcal V}_{j_{e}}\n=\n\bigoplus_{ \{j_e\}_{e\in\partial\Gamma} }\bigoplus_{J} {\mathcal V}_J \otimes {\mathcal N}_J^{\{j_{e}\}}\,,\n\end{equation}\nwhere the multiplicity spaces (or degeneracy spaces) ${\mathcal N}_J^{\{j_{e}\}}$ consist of the spaces of intertwiners (i.e. 
$\mathrm{SU}(2)$-invariant states) in the tensor product of the total spin Hilbert space ${\mathcal V}_J$ with the individual spins $\bigotimes_{e} {\mathcal V}_{j_{e}}$,\n\begin{equation}\n{\mathcal N}_J^{\{j_{e}\}}\n:=\n\textrm{Inv}_{\mathrm{SU}(2)}\left[\n{\mathcal V}_J\otimes\bigotimes_{e\in\partial\Gamma} {\mathcal V}_{j_{e}}\n\right]\n\,.\n\end{equation}\nHere, due to the bulk spin network structure, the total spin $J$ is necessarily an integer.\nInstead of the decoupled basis $|\{j_{e},m_{e}\}_{e}\rangle$, we use the recoupled basis, as illustrated on fig.\ref{fig:recoupledbasis}:\n\begin{equation}\n{\mathcal H}^{\partial}_{\Gamma}\n=\n\bigoplus_{ \{j_e\}_{e\in\partial\Gamma} }\bigoplus_{J,M}\bigoplus_{I^{(J,\{j_{e}\})}}\n{\mathbb C}|J,M\rangle\otimes|(J,\{j_{e}\}), I\rangle\n=\n\bigoplus_{J,M}\bigoplus_{ \{j_e\} }\bigoplus_{I^{(J,\{j_{e}\})}}\n{\mathbb C}|J,M\rangle\otimes|(J,\{j_{e}\}), I\rangle\n\,,\n\end{equation}\nwhere the $I^{(J,\{j_{e}\})}=|(J,\{j_{e}\}), I\rangle$'s are a basis of intertwiners in the multiplicity space ${\mathcal N}_J^{\{j_{e}\}}$. 
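The dimension of a multiplicity space ${\mathcal N}_J^{\{j_{e}\}}$ is simply the multiplicity of the spin $J$ in the iterated Clebsch-Gordan decomposition of $\bigotimes_{e}{\mathcal V}_{j_{e}}$. As an illustrative sketch (added here, not part of the original text), this can be computed by fusing the boundary spins one at a time:

```python
from collections import Counter
from fractions import Fraction

def fuse(mults, j):
    # Tensor a spin-j factor onto a multiset of spins (Clebsch-Gordan series):
    # V_a x V_j = V_|a-j| + V_|a-j|+1 + ... + V_(a+j)
    out = Counter()
    for a, n in mults.items():
        k = abs(a - j)
        while k <= a + j:
            out[k] += n
            k += 1
    return out

def multiplicity(J, spins):
    # dim N_J^{j_e} = multiplicity of total spin J in V_{j_1} x ... x V_{j_n}
    mults = Counter({Fraction(0): 1})
    for j in spins:
        mults = fuse(mults, Fraction(j))
    return mults[Fraction(J)]

# Four boundary edges carrying spin 1/2: the total spin J is an integer,
# and 2^4 = 16 = 2x(2x0+1) + 3x(2x1+1) + 1x(2x2+1)
spins = [Fraction(1, 2)] * 4
print([multiplicity(J, spins) for J in (0, 1, 2)])   # -> [2, 3, 1]
```

This also illustrates the remark above that, for half-integer boundary spins recoupled through bulk intertwiners, the total spin $J$ runs over integer values only.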
We might write $I^{(J)}$ instead of $I^{(J,\{j_{e}\})}$ whenever we don't need to explicitly specify the value of the boundary spins.\nThese intertwiner states not only encode the recoupled total spin $J$, but also how the individual spins $j_{e}$ are woven together.\n\begin{figure}[hbt!]\n\centering\n\vspace*{5mm}\n\begin{tikzpicture}[scale=0.7]\n\n\coordinate (O) at (-4.2,0);\n\n\draw[thick] (O) -- ++ (0.5,0);\n\draw[thick] (O) ++ (0,1) -- ++ (0.5,0);\n\draw[thick] (O) ++ (0,0.5) -- ++ (0.5,0);\n\draw[thick,loosely dotted] (O) ++ (0.25,-0.15) -- ++ (0,-0.9);\n\draw[thick] (O) ++ (0,-1) -- ++ (0.5,0);\n\n\coordinate (O1) at (-1,0);\n\n\draw (O1)++(3,0) node {$\sim$};\n\n\draw (O1) node { $ | \{ j_e, m_e \}_{e\in\partial\Gamma} \rangle \in{\mathcal H}_{\partial\Gamma} $};\n\n\coordinate (A1) at (5,0);\n\coordinate (A2) at (4,1);\n\coordinate (A3) at (4,0.5);\n\coordinate (A4) at (4,0);\n\coordinate (A5) at (4,-1);\n\n\draw[thick] (A1) -- (A2) -- ++ (-0.5,0);\n\draw[thick] (A1) -- (A3) -- ++ (-0.5,0);\n\draw[thick] (A1) -- (A4) -- ++ (-0.5,0);\n\draw[thick] (A1) -- (A5) -- ++ (-0.5,0);\n\n\draw[thick,loosely dotted] (3.8,-0.15) -- (3.8,-0.9);\n\n\draw (A1) node[scale=0.7] {$\bullet$} node[above=2] {$I$};\n\n\draw[thick] (A1) -- ++ (1.2,0) node[below] {$J,M$} ++ (3.5,-0.5) node {$\underbrace{ | J,M \rangle }_{ \n\overset{ \in }\n{ \phantom{ \big( } {\mathcal V}_J \phantom{\big)} }\n } \n { \phantom{ \big( } \otimes }\n \underbrace{ | (J,\{j_e\}),I \rangle }_{ \n \overset{ \in }\n {\n \phantom{ \big( } \textrm{Inv} \left( {\mathcal V}_J \otimes \bigotimes_{e}{\mathcal V}_{j_e} \right) \phantom{ \big) }\n }\n } $};\n\n\n\end{tikzpicture}\n\caption{\nRecoupled basis for boundary states in terms of the total boundary spin (or closure defect) $J$.}\n\label{fig:recoupledbasis}\n\end{figure}\n\n\nIn the framework of the coarse-graining of spin networks introduced in \cite{Charles:2016xwc}, the total 
spin $J$ is the tag and the multiplicity states $I\in{\mathcal N}_J^{\{j_{e}\}}$ are tagged intertwiners.\nFrom a physical standpoint, the multiplicity spaces ${\mathcal N}_J^{\{j_{e}\}}$ for spin recoupling give the black hole horizon micro-states in a na\\\"ive leading order approach to black hole (micro-canonical) entropy and holography in loop quantum gravity, e.g. \cite{Ashtekar:1997yu,Domagala:2004jt,Livine:2005mw,Agullo:2009eq,Livine:2012cv,Asin:2014gta}.\n\nLet us focus on the case with fixed boundary spins $\{j_{e}\}$, although this merely lightens the notation, since the spins $j_{e}$ can be implicitly absorbed in the definition of the recoupling intertwiner $I^{(J,\{j_{e}\})}$. The bulk wave-function evaluated on bulk holonomies is a boundary state and can thus be decomposed onto the recoupled basis:\n\begin{equation} \label{eq:Bulk-BoundaryGeneticForm}\n| \psi(\{g_{e}\}_{e\in\overset{\circ}{\Gamma}}) \rangle\n=\n\sum_{J}\sum_{M}\sum_{I^{(J)}} C_{JMI}(\{g_{e}\}_{e\in\overset{\circ}{\Gamma}}) |J,M\rangle \otimes |J,I^{(J)}\rangle\n\,,\n\end{equation}\nwhere the coefficients $C_{JMI}(\{g_{e}\}_{e\in\overset{\circ}{\Gamma}})$ reflect the internal bulk structure of the wave-functions and depend on the bulk spins and intertwiners.\n$\mathrm{SU}(2)$ gauge transformations act non-trivially on the wave-function by the group action on the boundary spins. Now, as we have seen earlier, the density matrix $\rho_{\partial}=\int \mathrm{d} g_{e}\,|\psi(g_{e})\rangle\langle \psi(g_{e})|$ is invariant under conjugation by the simultaneous $\mathrm{SU}(2)$ action on all the boundary spins $\bigotimes_{e} D^{j_{e}}(h)$. 
This is a direct consequence of the bulk $\mathrm{SU}(2)$ gauge invariance,\n\begin{eqnarray}\n\rho_{\partial\Gamma}[\psi]\n&=&\n\int \prod_{e} \mathrm{d} g_{e} \, | \psi(\{ g_{e} \}_{e\in\overset{\circ}{\Gamma}}) \rangle\langle \psi(\{g_{e}\}_{e\in\overset{\circ}{\Gamma}} ) |\n=\n\int \prod_{e} \mathrm{d} g_{e} \, | \psi(\{ h \, g_{e} \, h^{-1} \}_{e\in\overset{\circ}{\Gamma}}) \rangle\langle \psi(\{h \, g_{e} \, h^{-1}\}_{e\in\overset{\circ}{\Gamma}}) |\n\\\n&=&\n\int \prod_{e} \mathrm{d} g_{e} \, h | \psi(\{ g_{e} \}_{e\in\overset{\circ}{\Gamma}}) \rangle\langle \psi(\{g_{e}\}_{e\in\overset{\circ}{\Gamma}}) | h^{-1}\n=h \, \rho_{\partial\Gamma}[\psi] \, h^{-1}\n\,, \qquad\n\forall\, h \in \mathrm{SU}(2)\n\,.\n\end{eqnarray}\nThis $\mathrm{SU}(2)$ action on the boundary boils down to the $\mathrm{SU}(2)$ action on the recoupled spin $D^{J}(h)$ and does not touch the multiplicity sector,\n\begin{equation} \label{eq:BoundarySU(2)action}\nh \triangleright | \psi(\{g_{e}\}_{e\in\overset{\circ}{\Gamma}}) \rangle\n=\n\sum_{J}\sum_{M,\tilde{M}}\sum_{I^{(J)}} C_{JMI}(\{g_{e}\}_{e\in\overset{\circ}{\Gamma}}) \, D^J_{\tilde{M}M}(h) \, |J,\tilde{M}\rangle \otimes |J,I\rangle\n\,.\n\end{equation}\nThis means that the invariance of the boundary density matrix, $h\,\rho_{\partial}\,h^{\dagger}=\rho_{\partial}$ for all group elements $h\in\mathrm{SU}(2)$, implies that it is necessarily totally mixed on each subspace at fixed total spin $J$ and that all the information is encoded in the multiplicity subspaces. 
This is expressed more precisely by the following lemma:\n\n\begin{lemma}\nA normalized $\mathrm{SU}(2)$-invariant density matrix $\rho$, thus satisfying $h\,\rho\,h^{\dagger}=\rho\,, \forall\, h\in \mathrm{SU}(2)$, has the following form:\n\begin{equation} \label{eq:SU(2)-invariant}\n\rho\n=\n\bigoplus_{J} p(J) \frac{ \mathbb{I}_{{\mathcal V}_J} }{2J+1} \otimes \rho_{{\mathcal N}_{J}}\n\,, \qquad\n{\mathrm{Tr}} \rho_{{\mathcal N}_{J}}=1\,,\forall J\in{\mathbb N}\n\,,\qquad\n{\mathrm{Tr}} \rho=\sum_{J}p(J)=1\n\,.\n\end{equation}\nThe coefficients $p(J)$ define the probability distribution over the total spin $J$. The operator $\mathbb{I}_{{\mathcal V}_J}=\sum_{M}| J,M\rangle\langle J,M|$ is the identity on ${\mathcal V}_J$ and $\rho_{{\mathcal N}_{J}}$ is an arbitrary density matrix in the multiplicity space ${\mathcal N}_{J}$.\n\end{lemma}\nThe $\mathrm{SU}(2)$ invariance is a key property of the boundary density matrix, which descends directly from the gauge invariance of the bulk wave-functions under local $\mathrm{SU}(2)$ transformations.\nLet us stress the important point that this is a statistical invariance under the $\mathrm{SU}(2)$ action, at the level of the density matrix. This does not amount to the invariance of pure quantum states on the boundary. Indeed, strict $\mathrm{SU}(2)$ invariance of the wave-function (i.e. $h\,\rho=\rho\,h^{\dagger}=\rho$) would require $J=0$, while we can have here an arbitrary distribution over all (allowed) values of the total spin $J$.\n\n\n\n\subsection{Universal bulk reconstruction from the boundary density matrix}\n\nThe natural question is how much we can know about the bulk structure from the boundary density matrix. For instance, does the combinatorial structure of the bulk graph deeply affect the type of boundary density matrix one gets? Here, we show a universal reconstruction procedure. 
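Before proceeding, the block structure \eqref{eq:SU(2)-invariant} of an $\mathrm{SU}(2)$-invariant density matrix can be illustrated numerically in the smallest nontrivial case of two spin-1/2 boundary edges, where the multiplicity spaces are one-dimensional. The sketch below (added for illustration, not part of the original text; numpy, Monte Carlo Haar twirl) checks that twirling an arbitrary density matrix produces $p(0)$ on the singlet and the totally mixed state $p(1)\,\mathbb{I}/3$ on the triplet:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_su2(n):
    # Haar-uniform SU(2) via unit quaternions
    q = rng.normal(size=(n, 4))
    q /= np.linalg.norm(q, axis=1, keepdims=True)
    a, b, c, d = q.T
    U = np.empty((n, 2, 2), dtype=complex)
    U[:, 0, 0] = a + 1j*b; U[:, 0, 1] = c + 1j*d
    U[:, 1, 0] = -c + 1j*d; U[:, 1, 1] = a - 1j*b
    return U

# Arbitrary density matrix on two spin-1/2 "boundary edges"
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = A @ A.conj().T
rho /= np.trace(rho)

# Monte Carlo Haar twirl:  rho -> \int dh (h x h) rho (h x h)^dagger
U = random_su2(100_000)
H = np.einsum('nij,nkl->nikjl', U, U).reshape(-1, 4, 4)   # kron(U, U) per sample
twirl = np.einsum('nij,jk,nlk->il', H, rho, H.conj()) / len(U)

# Coupled basis: singlet (J=0), then the three triplet states (J=1)
s = np.array([0, 1, -1, 0]) / np.sqrt(2)
t = [np.array([1, 0, 0, 0]),
     np.array([0, 1, 1, 0]) / np.sqrt(2),
     np.array([0, 0, 0, 1])]
B = np.column_stack([s] + t).astype(complex)
blocks = B.conj().T @ twirl @ B
print(np.round(blocks, 2))   # diag(p(0), p(1)/3, p(1)/3, p(1)/3), up to sampling noise
```

The singlet/triplet change of basis plays the role of the closure defect decomposition, with $p(0)=\langle s|\rho|s\rangle$ the weight of the intertwiner ($J=0$) component.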
As hinted by the work in \cite{Livine:2017xww}, a single bulk loop is enough to get arbitrary boundary density matrices. More precisely, any $\mathrm{SU}(2)$-invariant density matrix on the boundary Hilbert space can be induced from a pure bulk state on the single loop bulk graph. We prove this powerful result below. This can be understood as a boundary-to-bulk purification theorem.\n\n\begin{prop} \label{prop:BoundaryDensityMatrix}\nA mixed state $\rho$ on the boundary Hilbert space ${\mathcal H}_{\partial}$ is $\mathrm{SU}(2)$-invariant, $h\,\rho\,h^{\dagger}=\rho$, if and only if it is an induced boundary density matrix (IBDM) from a pure (gauge-invariant) bulk state $| \psi(\{g_{e}\}_{e\in\overset{\circ}{\Gamma}}) \rangle$ for some bulk graph $\overset{\circ}{\Gamma}$ connecting the boundary edges.\n\end{prop}\n\begin{proof}\nWe already know that induced boundary density matrices are $\mathrm{SU}(2)$-invariant. We have to show the reverse statement. Let us consider an arbitrary $\mathrm{SU}(2)$-invariant density matrix, \n\begin{equation}\n\rho\n=\n\bigoplus_{J} p(J) \frac{ \mathbb{I}_{{\mathcal V}_J} }{2J+1} \otimes \rho_{{\mathcal N}_{J}}\n\,,\nonumber\n\end{equation}\nand let us diagonalize the density matrices for each multiplicity subsector,\n\begin{equation}\n\label{eq:SU(2)-inv.DM}\n\rho_{{\mathcal N}_J}=\sum_{r=1}^{R_{J}} W_{I_{r}}^{(J)} \,| J, I_{r}^{(J)} \rangle\langle J, I_{r}^{(J)} |\n\,,\n\end{equation}\nwhere $R_{J}$ is the rank of $\rho_{{\mathcal N}_J}$ and the intertwiners $I_{r}^{(J)}$ are orthonormal states in the multiplicity space ${\mathcal N}_J$.\n\n\n\nLet us consider the bulk graph, as in \cite{Livine:2017xww}, with a single vertex tying all the boundary edges to a single loop as drawn on fig.~\ref{fig:LoopySpinNetwork}.\nThen a spin network state is a superposition of intertwiners between the boundary spins and the (pair of) spin(s) carried by the loop. 
We can unfold this intertwiner with a (virtual) link between the boundary edges and the loop. This (virtual) intermediate link carries the total boundary spin $J$. For each value of $J$, we need to specify the spin $k$ carried by the loop and the two intertwiners at the nodes. The three-valent intertwiner recoupling the loop spin $k$ to the total spin $J$ is unique (when it exists), while the intertwiner recoupling the boundary spins $\{j_{e}\}$ into $J$ will naturally be the intertwiners $I_{r}^{(J)}$.\n\begin{figure}[hbt!]\n\centering\n\begin{tikzpicture}[scale=0.7]\n\n\n\n\n\n\coordinate (O) at (0,0);\n \n\coordinate (A) at (-6,0);\n\n\draw (A) node {$\rho_{\partial}$};\n\n\draw [domain=0:360] plot ({-6+1.75 * cos(\x)}, {1.75 * sin(\x)});\n \n\draw[thick] (A) ++(-0.6,0);\n\draw[thick] (A) ++(0:1) --++ (0:1.5);\n\draw[thick] (A) ++(60:1) --++ (60:1.5) ++(60:0.35) node {$j_1$};\n\draw[thick] (A) ++(120:1) --++ (120:1.5) ++(120:0.35) node {$j_2$};\n\draw[thick] (A) ++(180:1) --++ (180:1.5) ++(180:0.35) node {$j_3$};\n\draw[thick] (A) ++(240:1) --++ (240:1.5);\n\draw[thick] (A) ++(300:1) --++ (300:1.5);\n\n\draw [thick, loosely dotted,domain=195:230] plot ({-6+2.6 * cos(\x)}, {2.6 * sin(\x)});\n\n\n\coordinate (O) at (3.5,0);\n\n\draw[->,>=stealth,very thick] (-2,0) -- node[above] {?} (0,0);\n\n\draw[thick,red] (O) -- ++ (315:1.5) node[very near end,above=2] {$J$} coordinate (B) node[blue,scale=0.7] {$\bullet$};\n\draw[blue,thick,in=-90,out=0,scale=4.5,rotate=0] (B) to[loop] node[near start,sloped] {$>$} node[near end,left=2] {$k$} (B) ++(315:0.35) node {$g$};\n\n\draw[thick] (O) -- ++ (0:1.5);\n\draw[thick] (O) -- ++ (45:1.5) ++(45:0.35) node {$j_1$};\n\draw[thick] (O) -- ++ (90:1.5) ++(90:0.35) node {$j_2$};\n\draw[thick] (O) -- ++ (135:1.5) ++(135:0.35) node {$j_3$};\n\draw[thick] (O) -- ++ (180:1.5);\n\draw[thick] (O) -- ++ (225:1.5);\n\n\draw (O) node[scale=0.7] {$\bullet$};\n\n\draw [thick, loosely dotted,domain=160:200] plot 
({3.5+1.8 * cos(\x)}, {1.8 * sin(\x)});\n\n\end{tikzpicture}\n\caption{\nThe universal reconstruction procedure purifying a $\mathrm{SU}(2)$-invariant boundary density matrix into a pure spin network superposition for a bulk made of a single vertex and single loop.\n}\n\label{fig:LoopySpinNetwork}\n\end{figure}\n\n\nIndeed, for each value of the total spin $J$, we choose $R_{J}$ distinct spins $k_{r}^{(J)}$ for the loop with $J\leq 2k_{r}^{(J)}$, so that ${\mathcal V}_{J}\subset {\mathcal V}_{k_{r}^{(J)}}\otimes {\mathcal V}_{k_{r}^{(J)}}$, i.e. the loop spin can recouple to $J$: there exists a 3-valent intertwiner (given by the corresponding Clebsch-Gordan coefficients). We then define the following pure spin network for the $1$-loop graph, in terms of a single bulk holonomy $g$ on the loop,\n\begin{equation}\n| \psi(g) \rangle\n=\n\sum_{J,M} \sqrt{ p(J) } | J,M \rangle \otimes \sum_{r=1}^{R_{J}} \sum_{m,n}^{k_{r}^{(J)}} (-1)^{k_{r}^{(J)}+m} \, \sqrt{ 2k_{r}^{(J)}+1 } \, D^{k_{r}^{(J)}}_{nm}(g) \, \begin{pmatrix}\nJ & k_{r}^{(J)}& k_{r}^{(J)} \\\nM & -m & n\n\end{pmatrix} \, \sqrt{ W_{I_{r}}^{(J)} } \, | J, I_{r}^{(J)} \rangle\n\,.\n\end{equation}\nIt is straightforward to check that this pure bulk state leads back to the desired $\mathrm{SU}(2)$-invariant density matrix \eqref{eq:SU(2)-inv.DM} upon integration over the bulk holonomy $g$.\n\n\end{proof}\nIt is quite remarkable that the superposition of loop spins and bulk intertwiners naturally leads to mixed boundary density matrices. \n\n\n\subsection{Probing the first layer of the bulk: Bouquets of boundary edges}\n\label{sec:manyedges}\n\n\n\nUp to now, we have defined the boundary density matrix induced by a bulk spin network state, underlined the fact that the resulting boundary density matrix is typically mixed for a pure spin network state, and showed how to construct such a pure bulk state on a graph with at least one loop given a (suitably gauge invariant) boundary density matrix. 
This universal reconstruction procedure, given above in the proof of Proposition \ref{prop:BoundaryDensityMatrix}, with a bulk graph made of a single vertex and a single bulk loop, should be understood as a purification of the boundary density matrix into a bulk state. There are nevertheless many possible bulk states on possibly complicated graphs inducing the same boundary state, leading to many ways to purify a given mixed boundary state. In light of this fact, we wish to understand better how the bulk graph structure and potential correlations between the spins and intertwiners within the bulk possibly get reflected in the boundary density matrix.\n\n\n\nIn this section, we would like to start diving into the bulk, or at least start probing the first layer of the bulk beyond the boundary edges. More precisely, we would like to see the ``boundary vertices'', i.e. the vertices to which boundary edges are attached, and understand if a finer study of the boundary density matrix allows one to extract information about whether bunches of boundary edges are attached to the same boundary vertex.\nIndeed, although a rather natural assumption is that each boundary edge is connected to a different vertex in the bulk, this is not a generic configuration. 
A more general configuration involves boundary edges regrouped into bouquets, each attached to a vertex, as illustrated on fig.\ref{fig:bouquet}.\n\begin{figure}[hbt!]\n\centering\n\begin{tikzpicture}[scale=0.7]\n\coordinate (O) at (0,0);\n \n\draw (O) ++(0.6,0);\n\coordinate(O1) at (60:1);\n\coordinate(O2) at (120:1);\n\coordinate(O3) at (240:1);\n\coordinate(O4) at (300:1);\n\n \n \draw [domain=0:360] plot ({cos(\x)}, {sin(\x)});\n\n\draw (O1) -- ++ (60:0.7) node [right,midway] {$J_1$};\n\draw (O2) -- ++ (120:0.7) node [left,midway] {$J_2$};\n\draw (O3) -- ++ (240:0.7) node [left,midway] {$J_3$};\n\draw (O4) -- ++ (300:0.7) node [right,midway] {$J_4$};\n\n\draw (O1) ++ (60:0.7) -- ++ (0:0.7);\n\draw (O1) ++ (60:0.7) -- ++ (45:0.7);\n\draw (O1) ++ (60:0.7) node[scale=0.7,blue] {$\bullet$} -- ++ (90:0.7);\n\n\draw (O2) ++ (120:0.7) -- ++ (90:0.7);\n\draw (O2) ++ (120:0.7) -- ++ (135:0.7);\n\draw (O2) ++ (120:0.7) node[scale=0.7,blue] {$\bullet$} -- ++ (180:0.7);\n\n\draw (O3) ++ (240:0.7) -- ++ (180:0.7);\n\draw (O3) ++ (240:0.7) -- ++ (225:0.7);\n\draw (O3) ++ (240:0.7) node[scale=0.7,blue] {$\bullet$} -- ++ (270:0.7);\n\n\draw (O4) ++ (300:0.7) -- ++ (270:0.7);\n\draw (O4) ++ (300:0.7) -- ++ (315:0.7);\n\draw (O4) ++ (300:0.7) node[scale=0.7,blue] {$\bullet$} -- ++ (0:0.7);\n\n\draw[->,>=stealth,very thick] (-4,0) -- (-2,0);\n\coordinate (A) at (-6,0);\n \draw [domain=0:360] plot ({-6+cos(\x)}, {sin(\x)});\n \n\draw (A) ++(-0.6,0);\n\path (A) ++(60:1) coordinate (A1);\n\path (A) ++(120:1) coordinate (A2);\n\path (A) ++(240:1) coordinate (A3);\n\path (A) ++(300:1) coordinate (A4);\n\n\draw (A1) node[scale=0.7] {$\bullet$};\n\draw (A2) node[scale=0.7] {$\bullet$};\n\draw (A3) node[scale=0.7] {$\bullet$};\n\draw (A4) node[scale=0.7] {$\bullet$};\n\n\draw (A1) -- ++ (0:0.7);\n\draw (A1) -- ++ (45:0.7);\n\draw (A1) -- ++ (90:0.7);\n\n\draw (A2) -- ++ (90:0.7);\n\draw (A2) -- ++ (135:0.7);\n\draw (A2) -- ++ 
(180:0.7);\n\n\\draw (A3) -- ++ (180:0.7);\n\\draw (A3) -- ++ (225:0.7);\n\\draw (A3) -- ++ (270:0.7);\n\n\\draw (A4) -- ++ (270:0.7);\n\\draw (A4) -- ++ (315:0.7);\n\\draw (A4) -- ++ (0:0.7);\n\n\\draw (O1) node[scale=0.7,blue] {$\\bullet$};\n\\draw (O2) node[scale=0.7,blue] {$\\bullet$};\n\\draw (O3) node[scale=0.7,blue] {$\\bullet$};\n\\draw (O4) node[scale=0.7,blue] {$\\bullet$};\n\n\\end{tikzpicture}\n\\caption{\nBouquets of boundary edges attached to boundary vertices $v\\in V^{\\partial}$ and the chicken feet basis labeled by the recoupled spin $J_{v}$ for each bouquet.}\n\\label{fig:bouquet}\n\\end{figure}\n\nThis leads us to introduce a ``chicken feet'' basis where we recouple the spin of the boundary edges of each bouquet separately instead of only considering the total recoupled spin $J$. We thus introduce the bouquet spin $J_{v}$ for each boundary vertex $v$. Writing $V^{\\partial}$ for the set of boundary vertices, the boundary Hilbert space decomposes as:\n\\begin{equation}\n{\\mathcal H}_{\\partial}=\\bigoplus_{\\{j_{e}\\}_{e\\in\\partial}}\\bigotimes_{e\\in\\partial}{\\mathcal V}_{j_{e}}\n=\\bigoplus_{\\{J_{v}\\}_{v\\in V^{\\partial}}} \\bigotimes_{v\\in V^{\\partial}}{\\mathcal V}_{J_{v}}\\otimes {\\mathcal N}_{\\{J_{v}\\}}\n\\,,\n\\end{equation}\n\\begin{equation}\n{\\mathcal N}_{\\{J_{v}\\}}\n=\n\\bigotimes_{v\\in V^{\\partial}}\\bigoplus_{\\{j_{e}\\}_{e\\,| v\\in\\partial e}}\\textrm{Inv}\\Big{[}\n{\\mathcal V}_{J_{v}}\\otimes \\bigotimes_{e\\,| v\\in\\partial e} {\\mathcal V}_{j_{e}}\n\\Big{]}\n\\,,\n\\end{equation}\nleading to the chicken feet basis states $|\\{J_{v}\\}_{v\\in V^{\\partial}},\\{j_{e}\\}_{e\\in\\partial},\\{{\\mathcal I}^{J_{v}}_{\\{j_{e}\\}}\\}_{v\\in V^{\\partial}}\\rangle$ labelled by the boundary edge spins $j_{e}$, the boundary bouquet spins $J_{v}$ and the intertwiners recoupling them,\nas depicted on fig.\\ref{fig:bouquet}.\n\nAs for the bulk, we similarly unfold the intertwiner states living on the boundary vertices and 
decompose them into two intertwiners, one ``boundary'' component which recouples all the boundary spins into $J_{v}$ and one ``bulk'' component which recouples the spins on the remaining bulk edges attached to the vertex to $J_{v}$, as illustrated on fig.\\ref{fig:boundaryintertwiner}.\n\\begin{figure}[hbt!]\n\\centering\n\\begin{tikzpicture}[scale=0.7]\n\n\\coordinate (O) at (0,0);\n\n\\coordinate (A1) at (145:2.1);\n\\coordinate (A2) at (215:2.1);\n\n\\coordinate (B1) at (35:2.1);\n\\coordinate (B2) at (-35:2.1);\n\n\\draw (O) node[scale=0.7] {$\\bullet$} ++ (0,0.55) node {$I_v$} ++ (0,-1.5) node{$v\\in V^{\\partial}$ };\n\n\\draw[thick] (O) -- ++ (130:1.5) ;\n\\draw[thick] (O) -- ++ (160:1.5) ;\n\\draw[thick] (O) -- ++ (190:1.5) ;\n\\draw[thick] (O) -- ++ (220:1.5) ;\n\n\\draw[thick] (O) -- ++ (50:1.5) ;\n\\draw[thick] (O) -- ++ (20:1.5) ;\n\\draw[thick] (O) -- ++ (-10:1.5) ;\n\\draw[thick] (O) -- ++ (-40:1.5) ;\n\n\\draw [decorate,decoration={brace,amplitude=10pt},xshift=-4pt,yshift=0pt]\n(A2) -- (A1) node [black,midway,xshift=-1cm] {\\footnotesize $j_{e} \\in \\partial\\Gamma$};\n\n\\draw [decorate,decoration={brace,amplitude=10pt,mirror},xshift=-4pt,yshift=0pt]\n(B2) -- (B1) node [black,midway,xshift=1cm] {\\footnotesize $j_{e} \\in \\partial\\overset{\\circ}{\\Gamma}$};\n\n\\coordinate (O1) at (10,0);\n\\coordinate (O2) at (12,0);\n\n\\path (O1) ++(145:2.1) coordinate (C1);\n\\path (O1) ++(215:2.1) coordinate (C2);\n\\path (O2) ++(35:2.1) coordinate (D1);\n\\path (O2) ++(-35:2.1) coordinate (D2);\n\n\\draw[thick,red] (O1) -- node[above] {$J_v$} (O2);\n\n\\draw [decorate,decoration={brace,amplitude=10pt},xshift=-4pt,yshift=0pt]\n(C2) -- (C1) node [black,midway,xshift=-1cm] {\\footnotesize $j_{e} \\in \\partial\\Gamma$};\n\n\\draw [decorate,decoration={brace,amplitude=10pt,mirror},xshift=-4pt,yshift=0pt]\n(D2) -- (D1) node [black,midway,xshift=1cm] {\\footnotesize $j_{e} \\in \\partial\\overset{\\circ}{\\Gamma}$};\n\n\\draw[thick] (O1) -- ++ (130:1.5) 
;\n\\draw[thick] (O1) -- ++ (160:1.5) ;\n\\draw[thick] (O1) -- ++ (190:1.5) ;\n\\draw[thick] (O1) node[RoyalPurple,scale=0.7] {$\\bullet$} -- ++ (220:1.5) ;\n\n\\draw[thick] (O2) -- ++ (50:1.5) ;\n\\draw[thick] (O2) -- ++ (20:1.5) ;\n\\draw[thick] (O2) -- ++ (-10:1.5) ;\n\\draw[thick] (O2) node[RoyalPurple,scale=0.7] {$\\bullet$} -- ++ (-40:1.5) ;\n\n\\draw[RoyalPurple] (O1) ++ (0,-1) node{${}^{\\pp}I_{v}^{(J_v)} $ };\n\\draw[RoyalPurple] (O2) ++ (0,-1) node{${}^{o}I_{v}^{(J_v)} $ };\n\n\\end{tikzpicture}\n\\caption{\nUnfolding intertwiners on boundary vertices: the decomposition into boundary and bulk intertwiner components.}\n\\label{fig:boundaryintertwiner}\n\\end{figure}\nDecomposed intertwiner basis states are then labeled by the boundary and bulk spins attached to the (boundary) vertex, the bouquet spin $J_{v}$, and the two intertwiners, boundary and bulk, $|\\{j_{e}\\}_{e\\in\\partial},J_{v},{}^{\\pp}I_{v}^{(J_{v})},{}^{o}I_{v}^{(J_{v})}\\rangle$.\n\nThe reconstruction of the first layer of the bulk from the boundary density matrix simply reflects the fact that the boundary component of the intertwiners $I_{v}$ at a vertex attached to some boundary edges matches the boundary intertwiner recoupling the boundary spins to their bouquet spins, i.e. 
${\\mathcal I}^{J_{v}}_{\\{j_{e}\\}}={}^{\\pp}I_{v}^{(J_{v})}$ for a boundary vertex $v\\in V^{\\partial}$ and for all values of the bouquet spin $J_{v}$.\nLet us see more precisely how this gets encoded into the boundary density matrix.\n\n\\medskip\n\nTo lighten the notation, let us fix the spins $j_{e}$ on the boundary edges $e\\in\\partial\\Gamma$, although it is straightforward to allow arbitrary superpositions of the boundary spins.\nIn light of the $\\mathrm{SU}(2)$ gauge transformations at the vertices and the resulting $\\mathrm{SU}(2)$ gauge invariance of the boundary density matrix at each boundary vertex, a boundary density matrix necessarily reads:\n\\begin{equation}\n\\rho_{\\partial}\n=\n\\sum_{\\{J_{v}\\}_{v\\in V^{\\partial}}}\n\\bigotimes_{v\\in V^{\\partial}}\\frac{\\mathbb{I}_{J_{v}}}{(2J_{v}+1)} \\otimes \\rho_{\\{J_{v}\\}}\n\\,,\n\\end{equation}\nwhere, for each value of the bouquet spins $\\{J_{v}\\}$, we have the totally mixed state on the spin states and a possibly non-trivial density matrix $\\rho_{\\{J_{v}\\}}$ on the corresponding multiplicity space,\n\\begin{equation}\n \\rho_{\\{J_{v}\\}}\\in\\textrm{End}[{\\mathcal N}_{\\{J_{v}\\}}]\n \\,,\\qquad\n {\\mathcal N}_{\\{J_{v}\\}}\n=\n\\bigotimes_{v\\in V^{\\partial}}\\textrm{Inv}\\Big{[}\n{\\mathcal V}_{J_{v}}\\otimes \\bigotimes_{e\\,| v\\in\\partial e} {\\mathcal V}_{j_{e}}\n\\Big{]}\n\\,,\n\\end{equation}\nsince we are working at fixed boundary spins $j_{e\\in\\partial}$.\n\nFor a spin network basis state, a straightforward calculation shows that the multiplicity matrices $\\rho_{\\{J_{v}\\}}$ are simply given by the boundary components of the intertwiners living at the boundary vertices:\n\\begin{lemma}\nFor a spin network basis state $\\Psi_{\\{j_{e},I_{v}\\}}$ with given spins $j_{e}$ on all bulk and boundary edges, as well as chosen intertwiner states $I_{v}$ at each vertex, we decompose the intertwiner states living on boundary vertices in the bouquet spin basis separating their 
``boundary'' component from their ``bulk'' component,\n\\begin{equation}\n\\forall v\\in V^{\\partial}\\,,\\quad\nI_{v}=\\sum_{J_{v}}\nC_{v}(J_{v})\\,\n|J_{v},{}^{\\pp}I_{v}^{(J_{v})},{}^{o}I_{v}^{(J_{v})}\\rangle\\,,\n\\end{equation}\nwith normalized intertwiners ${}^{\\pp}I_{v}^{(J_{v})}$ and ${}^{o}I_{v}^{(J_{v})}$, respectively between the boundary spins and the bouquet spin, then between the bouquet spin and the bulk spins attached to the vertex $v$.\nThen the induced boundary density matrix reads:\n\\begin{equation}\n\\rho_{\\partial}[\\Psi_{\\{j_{e},I_{v}\\}}]\n=\n\\sum_{\\{J_{v}\\}_{v\\in V^{\\partial}}}\n\\bigotimes_{v\\in V^{\\partial}}\\frac{\\mathbb{I}_{J_{v}}}{(2J_{v}+1)} \\otimes\n\\rho_{\\{J_{v}\\}}\\,,\n\\qquad\\textrm{where} \\quad\n|{}^{\\pp}I_{v}^{(J_{v})}\\rangle\\in\n\\mathrm{Inv}\\Big{[}\n{\\mathcal V}_{J_{v}}\\otimes \\bigotimes_{e\\,| v\\in\\partial e} {\\mathcal V}_{j_{e}}\n\\Big{]}\n\\,.\n\\end{equation}\nThe multiplicity matrices $\\rho_{\\{J_{v}\\}}$ have rank one:\n\\begin{equation}\n\\rho_{\\{J_{v}\\}}\n=\n|\\iota_{\\{J_{v}\\}}\\rangle\\langle \\iota_{\\{J_{v}\\}}|\\,,\\quad\n\\iota_{\\{J_{v}\\}}\n=\n\\bigotimes_{v\\in V^{\\partial}}\nC_{v}(J_{v})\n|{}^{\\pp}I_{v}^{(J_{v})}\\rangle\n\\,\\in{\\mathcal N}_{\\{J_{v}\\}}\n\\,.\n\\end{equation}\n\\end{lemma}\nThis rank-one property obviously extends to possible spin network superposition states with correlation between bouquet spins, i.e. 
with coefficients $C(\\{J_{v}\\})$ generalizing the factorized ansatz $\\prod_{v}C_{v}(J_{v})$ of basis states, but is ruined as soon as there are non-trivial superpositions of the bulk components of the boundary intertwiners or, more generally, non-trivial intertwiner correlations between the bulk vertices.\nIndeed, let us consider a generic spin network state:\n\\begin{equation}\n\\psi=\\sum_{\\{j_{e}\\},\\{I_{v}\\}}\nC^{\\{j_{e}\\}_{e\\in\\partial\\Gamma},\\{j_{e}\\}_{e\\in\\Gamma^{o}}}_{\\{J_{v}\\}_{v\\in V^{\\partial}}}(\\{{}^{\\pp}I_{v}^{(J_{v})},{}^{o}I_{v}^{(J_{v})}\\}_{v\\in V^{\\partial}},\\{I_{w}\\}_{w\\notin V^{\\partial}})\n\\bigotimes_{v\\in V^{\\partial}}\\Big{(}{}^{\\pp}I_{v}^{(J_{v})}\\otimes{}^{o}I_{v}^{(J_{v})}\\Big{)}\n\\,\\otimes\\,\n\\bigotimes_{w\\notin V^{\\partial}} I_{w}\n\\quad\\in{\\mathcal H}_{\\Gamma}\\,,\n\\end{equation}\nwhere we use the notation $v$ for the boundary vertices and $w$ for the remaining vertices of the bulk graph. We have chosen an arbitrary orthonormal basis of intertwiners $I_{w}$ for bulk vertices while using explicitly the bouquet spin basis for the boundary vertices. 
A straightforward calculation yields the following induced boundary density matrix:\n\\begin{equation}\n\\rho_{\\partial}[\\psi]=\n\\sum_{\\{J_{v}\\}_{v\\in V^{\\partial}}}\n\\bigotimes_{v\\in V^{\\partial}}\\frac{\\mathbb{I}_{J_{v}}}{(2J_{v}+1)} \\otimes\n\\rho_{\\{J_{v}\\}}\\,,\n\\end{equation}\n\\begin{equation}\n\\rho_{\\{J_{v}\\}}\n\\,=\\,\n\\sum_{{}^{\\pp}I_{v},\\widetilde{{}^{\\pp}I_{v}}}\\sum_{j_{e},{}^{o}I_{v},I_{w}}\nC^{\\{j_{e}\\}}_{\\{J_{v}\\}}(\\{\\widetilde{{}^{\\pp}I_{v}^{(J_{v})}},{}^{o}I_{v}^{(J_{v})}\\},\\{I_{w}\\})\n\\,\\overline{C^{\\{j_{e}\\}}_{\\{J_{v}\\}}(\\{{}^{\\pp}I_{v}^{(J_{v})},{}^{o}I_{v}^{(J_{v})}\\},\\{I_{w}\\})}\\,\n\\bigotimes_{v\\in V^{\\partial}}|\\widetilde{{}^{\\pp}I_{v}^{(J_{v})}}\\rangle\\langle {}^{\\pp}I_{v}^{(J_{v})}|\n\\,.\n\\end{equation}\nThe integration over the bulk holonomies amounts in the end to the partial trace over the bulk intertwiners (i.e. the intertwiner states at the vertices not connected to any boundary edge), over the bulk component of the intertwiners at the boundary vertices, and over the spins of the graph edges. This partial trace naturally leads to mixed states on the multiplicity spaces ${\\mathcal N}_{\\{J_{v}\\}}$ with higher-rank multiplicity matrices $\\rho_{\\{J_{v}\\}}$.\nThis means that non-trivial bulk correlations (between bulk intertwiners and bulk spins) get reflected in the rank of the multiplicity matrices $\\rho_{\\{J_{v}\\}}$. This is a much finer witness of the bulk structure than the overall closure defect.\n\nThis hints towards a natural layer-by-layer reconstruction of the bulk from the boundary density matrix. Starting from $\\rho_{\\partial}$, one can try the various partitions of the boundary, grouping the boundary edges, and check which partition leads to a multiplicity matrix with the lowest rank, and thus with the least correlation between boundary vertices. 
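This rank criterion can be probed numerically on a toy model with two boundary vertices at fixed bouquet spins, with binary multiplicity labels for the boundary and bulk intertwiner components. The following is a minimal sketch; the tensor shapes and names are our own illustration:

```python
import numpy as np

def multiplicity_matrix(C):
    """Partial trace of |C><C| over the bulk intertwiner labels.

    C has shape (a, i, b, j): boundary/bulk multiplicity labels at the two
    boundary vertices alpha (a, i) and beta (b, j)."""
    rho = np.einsum('aibj,cidj->abcd', C, C.conj())
    d = C.shape[0] * C.shape[2]
    return rho.reshape(d, d)

rng = np.random.default_rng(0)

# Factorized boundary/bulk components: the multiplicity matrix has rank one.
boundary = rng.normal(size=(2, 2))
bulk = rng.normal(size=(2, 2))
C_product = np.einsum('ab,ij->aibj', boundary, bulk)
rank_product = np.linalg.matrix_rank(multiplicity_matrix(C_product))

# Bell-like correlation between boundary and bulk components: higher rank.
C_bell = np.zeros((2, 2, 2, 2))
C_bell[0, 0, 1, 1] = 1 / np.sqrt(2)
C_bell[1, 1, 0, 0] = -1 / np.sqrt(2)
rank_bell = np.linalg.matrix_rank(multiplicity_matrix(C_bell))

print(rank_product, rank_bell)  # -> 1 2
```

The partial trace is the sum over the repeated bulk indices in the `einsum`; only correlations between the boundary and bulk components raise the rank.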
Once this first layer of the bulk graph is reconstructed, one would then follow the same logic to reconstruct the second layer of the bulk, grouping the bouquets together so that the second-layer intertwiners are as little correlated as possible. We would pursue this onion-like reconstruction until we reach the inner loop of the universal reconstruction procedure described in the previous section. It would be enlightening if one could translate this idea of a bulk with the least correlation between graph vertices into an action principle whose extrema would determine the bulk structure from the quantum boundary data fixed by the chosen boundary density matrix.\n\n\n\n\\section{Examples: Boundary Density Matrix for Candy Graphs}\n\nWe would like to conclude this paper with explicit examples of the bulk-to-boundary procedure, from bulk spin networks to boundary density matrices. We will consider the case of a bulk graph with two boundary vertices. The deeper bulk structure does not matter and it is enough to consider a single loop to which the bulk edges connect. 
We consider the two examples of boundary vertices each with two boundary edges, and then each with three boundary edges, as drawn on fig.\\ref{fig:candygraphs}.\n\\begin{figure}[hbt!]\n\\centering\n\\vspace*{3mm}\n\\begin{tikzpicture}[scale=0.7]\n\n\\draw [domain=0:360,dotted,thick] plot ({2.4 * cos(\\x)}, {1.2 * sin(\\x)});\n\n\\coordinate (A1) at (-2,0);\n\\coordinate (A2) at (2,0);\n\n\\draw (0,0) node {bulk};\n\n\\draw[thick] (A1) node[scale=0.7] {$\\bullet$} -- ++ (145:1.5);\n\\draw[thick] (A1) -- ++ (215:1.5);\n\n\\draw[thick] (A2) node[scale=0.7] {$\\bullet$} -- ++ (35:1.5);\n\\draw[thick] (A2) -- ++ (-35:1.5);\n\n\\draw [domain=0:360,dotted,thick] plot ({12+2.4 * cos(\\x)}, {1.2 * sin(\\x)});\n\n\\coordinate (B1) at (10,0);\n\\coordinate (B2) at (14,0);\n\n\\draw (12,0) node {bulk};\n\n\\draw[thick] (B1) node[scale=0.7] {$\\bullet$} -- ++ (145:1.5);\n\\draw[thick] (B1) -- ++ (180:1.5);\n\\draw[thick] (B1) -- ++ (215:1.5);\n\n\\draw[thick] (B2) node[scale=0.7] {$\\bullet$} -- ++ (35:1.5);\n\\draw[thick] (B2) -- ++ (0:1.5);\n\\draw[thick] (B2) -- ++ (-35:1.5);\n\n\\end{tikzpicture}\n\\caption{\nCandy graphs.}\n\\label{fig:candygraphs}\n\\end{figure}\n\n\\subsection{The four-qubit candy graph}\n\nLet us describe the graph with two vertices linked by a single loop and each with two boundary edges, as drawn on fig.\\ref{fig:4candygraph}. We assume that the spins on the four boundary edges are fixed to $j_{1}=j_{2}=j_{3}=j_{4}=\\f12$ and we also fix the spin around the loop to an arbitrary value $k$. 
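The dimension of the intertwiner space at each vertex, computed below, can be cross-checked by iterating the standard SU(2) fusion (triangle) rules; a small Python helper, written for this illustration:

```python
from fractions import Fraction

def fuse(mults, j):
    """Fuse one more SU(2) irrep V_j into a decomposition {spin: multiplicity}."""
    out = {}
    for J, m in mults.items():
        Jp = abs(J - j)
        while Jp <= J + j:
            out[Jp] = out.get(Jp, 0) + m
            Jp += 1
    return out

def dim_inv(spins):
    """Dimension of Inv[V_j1 x ... x V_jn] = multiplicity of the spin-0 irrep."""
    mults = {Fraction(0): 1}
    for j in spins:
        mults = fuse(mults, j)
    return mults.get(Fraction(0), 0)

half = Fraction(1, 2)
# Inv[V_1/2 x V_1/2 x V_k x V_k] is two-dimensional for every k >= 1/2:
print([dim_inv([half, half, Fraction(k), Fraction(k)]) for k in (1, 2, 5)])  # -> [2, 2, 2]
```

The two invariants correspond exactly to the two values 0 and 1 of the bouquet spin recoupling the two boundary qubits.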
\nThe bulk Hilbert space thus consists in the tensor product of the spaces of intertwiners living at the two vertices $\\alpha$ and $\\beta$:\n\\begin{equation}\n{\\mathcal H}_{bulk}= {\\mathcal H}_{\\alpha}\\otimes {\\mathcal H}_{\\beta}\\,,\n\\qquad\n {\\mathcal H}_{\\alpha}= {\\mathcal H}_{\\beta}\n=\n\\textrm{Inv}[{\\mathcal V}_{{\\f12}}\\otimes {\\mathcal V}_{{\\f12}}\\otimes {\\mathcal V}_{k}\\otimes {\\mathcal V}_{k}]\n\\,.\n\\end{equation}\n\\begin{figure}[hbt!]\n\\centering\n\\begin{tikzpicture}[scale=0.7]\n\n\\coordinate (A1) at (0,0);\n\\coordinate (A2) at (3,0);\n\n\\draw[thick,in=115,out=65,rotate=0] (A1) to node[above] {$k$} (A2) node[scale=0.7] {$\\bullet$} (A2) to[out=245,in=-65] node[below] {$k$} (A1) node[scale=0.7] {$\\bullet$};\n\n\\draw[thick] (A1) -- ++ (145:1.5) ++ (145:0.35) node {$\\f12$} ++ (15:0.5) node {$\\circled{1}$};\n\\draw[thick] (A1) -- ++ (215:1.5) ++ (215:0.35) node {$\\f12$} ++ (-15:0.5) node {$\\circled{2}$};\n\n\\draw[thick] (A2) -- ++ (35:1.5) ++ (35:0.35) node {$\\f12$} ++ (165:0.5) node {$\\circled{3}$};\n\\draw[thick] (A2) -- ++ (-35:1.5) ++ (-35:0.35) node {$\\f12$} ++ (-165:0.5) node {$\\circled{4}$};\n\n\\draw[->,>=stealth,very thick] (6,0) -- (7.5,0);\n\n\\coordinate (B1) at (11.5,0);\n\\coordinate (B2) at (13.5,0);\n\n\\draw[thick,in=105,out=75,rotate=0] (B1) to (B2) node[scale=0.7] {$\\bullet$} (B2) to[out=255,in=-75] (B1) node[scale=0.7] {$\\bullet$};\n\n\\draw[thick] (B1) ++ (-1.5,0) node[scale=0.7] {$\\bullet$} -- ++ (145:1.5) ++ (145:0.35);\n\\draw[thick] (B1) ++ (-1.5,0) -- ++ (215:1.5) ++ (215:0.35);\n\n\\draw[thick,red] (B1) node[black,scale=0.7] {$\\bullet$} -- node[red,above] {$J_{\\alpha}$} ++ (-1.5,0) node[black,scale=0.7] {$\\bullet$};\n\\draw[thick,red] (B2) node[black,scale=0.7] {$\\bullet$} -- node[red,above] {$J_{\\beta}$} ++ (1.5,0) node[black,scale=0.7] {$\\bullet$};\n\n\\draw[thick] (B2) ++ (1.5,0) node[scale=0.7] {$\\bullet$} -- ++ (35:1.5) ++ (35:0.35);\n\\draw[thick] (B2) ++ (1.5,0) 
node[scale=0.7] {$\\bullet$} -- ++ (-35:1.5) ++ (-35:0.35);\n\n\\end{tikzpicture}\n\\caption{\n 4-qubit candy graph: spin and intertwiner decomposition.}\n\\label{fig:4candygraph}\n\\end{figure}\n\nFor each vertex, $v=\\alpha$ or $v=\\beta$, we recouple the two boundary spins, leading to the bouquet spin basis. Here the bouquet spin $J_{v}$ can take two values, 0 or 1, and entirely determines the intertwiner state:\n\\begin{equation}\n{\\mathcal H}_{v}=\\textrm{Inv}[{\\mathcal V}_{{\\f12}}\\otimes {\\mathcal V}_{{\\f12}}\\otimes {\\mathcal V}_{k}\\otimes {\\mathcal V}_{k}]=\n{\\mathbb C}|J=0\\rangle\\oplus {\\mathbb C}|J=1\\rangle\\,,\\quad\n\\dim{\\mathcal H}_{v}=2\n\\,,\n\\end{equation}\nso that bulk spin network basis states are labelled by the two bouquet spins\\footnotemark:\n\\begin{equation}\n{\\mathcal H}_{bulk}=\\bigoplus_{J_{\\alpha},J_{\\beta}\\in\\{0,1\\}}{\\mathbb C}|J_{\\alpha},J_{\\beta}\\rangle\n\\,,\\quad\n\\dim{\\mathcal H}_{bulk}=4\n\\,.\n\\end{equation}\n\\footnotetext{\nIt might seem awkward that the dimension of the bulk Hilbert space is here (much) smaller than the dimension of the boundary Hilbert space: it would be weird to talk about a bulk-to-boundary coarse-graining in that situation. This is due to the extremely simple structure of the bulk graph. In fact, the dimension of the bulk Hilbert space increases exponentially with the number of bulk vertices (more precisely, the number of independent cycles in the bulk graph, as shown in \\cite{Livine:2007sy}). For instance, merely pinching the loop to create an extra bulk vertex would increase the dimension of the bulk Hilbert space to $\\dim{\\mathcal H}_{bulk}=2\\times (2k+1)\\times2$, which would be larger than $\\dim{\\mathcal H}_{\\partial}=2^{4}$ as soon as the spin $k$ around the loop is at least 2.\n}\nThe boundary Hilbert space consists in the tensor product of the four spin-$\\f12$ spaces, i.e. 
it is made of four qubits,\n\\begin{equation}\n{\\mathcal H}_{\\partial}=\\big{(}{\\mathcal V}_{\\f12}\\big{)}^{\\otimes 4}\n\\,,\\qquad\n\\dim{\\mathcal H}_{\\partial}=2^{4}\\,.\n\\end{equation}\n\nLet us consider an arbitrary spin network state,\n\\begin{equation}\n\\psi=\\sum_{J_{\\alpha},J_{\\beta}}C_{J_{\\alpha},J_{\\beta}}|J_{\\alpha},J_{\\beta}\\rangle\n\\in{\\mathcal H}_{bulk}\n\\,.\n\\end{equation}\nThe corresponding wave-function defines a boundary map, mapping the bulk holonomy along the two links of the inner loop to a boundary state in ${\\mathcal H}_{\\partial}$:\n\\begin{equation}\n|\\psi(g_{1},g_{2})\\rangle=\\sum_{a_{i},b_{i}}\n(-1)^{k-a_{1}}(-1)^{k-a_{2}}D^{k}_{a_{1}b_{1}}(g_{1})D^{k}_{a_{2}b_{2}}(g_{2})\n\\langle (k,-a_{1})(k,-a_{2})|J_{\\alpha}\\rangle\n\\langle (k,b_{1})(k,b_{2})|J_{\\beta}\\rangle\n\\,\\in{\\mathcal H}_{\\partial}\\,.\n\\end{equation}\nThe boundary density matrix is obtained by integrating over the bulk holonomy:\n\\begin{equation}\n\\rho_{\\partial}[\\psi]=\\int \\mathrm{d} g_{1}\\mathrm{d} g_{2}\\,|\\psi(g_{1},g_{2})\\rangle\\langle \\psi(g_{1},g_{2})|\n\\,\\in\\textrm{End}[{\\mathcal H}_{\\partial}]\n\\,.\n\\end{equation}\nThe integration over the $\\mathrm{SU}(2)$ group elements is straightforward to compute and yields:\n\\begin{equation}\n\\rho_{\\partial}[\\psi]=\n\\sum_{J_{\\alpha},J_{\\beta}}|C_{J_{\\alpha},J_{\\beta}}|^{2}\n\\,\\frac{\\mathbb{I}_{J_{\\alpha}}}{2J_{\\alpha}+1}\\otimes \\frac{\\mathbb{I}_{J_{\\beta}}}{2J_{\\beta}+1}\n\\,,\n\\end{equation}\nwhere $\\mathbb{I}_{J}$, for $J=0$ and $J=1$, is the projector on the subspace of total spin $J$ in the tensor product of two qubits $(V_{{\\f12}})^{\\otimes 2}$.\nThis confirms that a pure bulk spin network state leads naturally to a mixed boundary state. 
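This block-diagonal form is easy to reproduce numerically: with the singlet and triplet projectors on each pair of boundary qubits, the boundary state is a weighted mixture of normalized projectors. A minimal sketch, where the coefficient values are an arbitrary illustration:

```python
import numpy as np

# Projectors on total spin J = 0 (singlet) and J = 1 (triplet) of two qubits.
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
P = {0: np.outer(singlet, singlet), 1: np.eye(4) - np.outer(singlet, singlet)}

# Bulk coefficients C_{J_alpha, J_beta}, normalized so that sum |C|^2 = 1.
C = np.full((2, 2), 0.5)

rho = sum(abs(C[Ja, Jb]) ** 2
          * np.kron(P[Ja] / (2 * Ja + 1), P[Jb] / (2 * Jb + 1))
          for Ja in (0, 1) for Jb in (0, 1))

purity = np.trace(rho @ rho)
print(round(np.trace(rho), 12), purity < 1)  # -> 1.0 True (unit trace, mixed state)
```

Each normalized projector has unit trace, so the total trace is the sum of the $|C_{J_\alpha,J_\beta}|^2$, while the purity stays below one for any genuine superposition of bouquet spins.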
Moreover, due to the simple structure of the boundary in the present example, the induced boundary density matrix carries no entanglement between the pair of boundary edges attached to the vertex $\\alpha$ and the pair attached to the vertex $\\beta$.\n\n\\subsection{The six-qubit candy graph}\n\nWe can upgrade the previous example by enriching the structure of the boundary intertwiner thereby allowing for the possibility of non-trivial entanglement between the boundary edges attached to the two vertices.\nInstead of attaching two boundary edges to each vertex, we now connect three boundary edges to each vertex. We still fix the spins on the boundary edges to $j_{1}=..=j_{6}=\\f12$, as well as on the inner loop to $k$ and $k+\\f12$ (with the half-integer shift to account for the extra half-spin on the boundary) for $k>0$.\n\\begin{figure}[hbt!]\n\\centering\n\\begin{tikzpicture}[scale=0.7]\n\n\\coordinate (A1) at (0,0);\n\\coordinate (A2) at (3,0);\n\n\\draw[thick,in=115,out=65,rotate=0] (A1) to node[above] {$k$} (A2) node[scale=0.7] {$\\bullet$} (A2) to[out=245,in=-65] node[below] {$k+\\f12$} (A1) node[scale=0.7] {$\\bullet$};\n\n\\draw[thick] (A1) -- ++ (125:1.5) ++ (150:0.35) node {$\\f12$} ++ (25:0.5) node {$\\circled{1}$};\n\\draw[thick] (A1) --node[very near end,above] {$\\circled{2}$} ++ (180:1.5) ++ (180:0.35) node {$\\f12$};\n\\draw[thick] (A1) -- ++ (235:1.5) ++ (210:0.35) node {$\\f12$} ++ (-25:0.5) node {$\\circled{3}$};\n\n\\draw[thick] (A2) -- ++ (55:1.5) ++ (30:0.35) node {$\\f12$} ++ (155:0.5) node {$\\circled{4}$};\n\\draw[thick] (A2) --node[very near end,above] {$\\circled{5}$} ++ (0:1.5) ++ (0:0.35) node {$\\f12$};\n\\draw[thick] (A2) -- ++ (-55:1.5) ++ (-30:0.35) node {$\\f12$} ++ (-155:0.5) node {$\\circled{6}$};\n\n\\draw[->,>=stealth,very thick] (6,0) -- (7.5,0);\n\n\\coordinate (B1) at (12.5,0);\n\\coordinate (B2) at (14.5,0);\n\n\\draw[thick,in=105,out=75,rotate=0] (B1) to (B2) to[out=255,in=-75] (B1);\n\n\\draw[thick] (B1) ++ (-1.5,0) 
coordinate(B3);\n\\draw[thick] (B1) ++ (-1.5,0) -- ++ (215:1.5);\n\n\\draw[thick,blue] (B3) -- node[midway,above=1.5] {$\\iota_{\\alpha}$} ++ (145:1.5) ;\n\\draw[thick] (B3) ++ (145:1.5) -- ++ (215:1.5);\n\\draw[thick] (B3) ++ (145:1.5) node[blue,scale=0.7] {$\\bullet$} -- ++ (145:1.5);\n\n\\draw[thick,red] (B1) node[black,scale=0.7] {$\\bullet$} -- node[red,below] {$J_{\\alpha}$} ++ (-1.5,0) node[scale=0.7,blue] {$\\bullet$} ;\n\\draw[thick,red] (B2) node[black,scale=0.7] {$\\bullet$} -- node[red,below] {$J_{\\beta}$} ++ (1.5,0) ;\n\n\\draw[thick] (B2) ++ (1.5,0) coordinate(B4);\n\\draw[thick] (B2) ++ (1.5,0) node[scale=0.7,blue] {$\\bullet$} -- ++ (-35:1.5);\n\n\\draw[thick,blue] (B4) -- node[midway,above=1.5] {$\\iota_{\\beta}$} ++ (35:1.5) node[scale=0.7] {$\\bullet$};\n\\draw[thick] (B4) ++ (35:1.5) -- ++ (35:1.5);\n\\draw[thick] (B4) ++ (35:1.5) node[blue,scale=0.7] {$\\bullet$} -- ++ (-35:1.5);\n\n\\end{tikzpicture}\n\\caption{\nThe one-loop 6-qubit candy graph and intertwiner basis.}\n\\label{fig:6candygraph}\n\\end{figure}\n\nThe bulk Hilbert space thus consists in the tensor product of the spaces of intertwiners living at the two vertices $\\alpha$ and $\\beta$:\n\\begin{equation}\n{\\mathcal H}_{bulk}= {\\mathcal H}_{\\alpha}\\otimes {\\mathcal H}_{\\beta}\\,,\\qquad\n{\\mathcal H}_{\\alpha}\n= {\\mathcal H}_{\\beta}\n=\n\\textrm{Inv}\\big{[}({\\mathcal V}_{{\\f12}})^{\\otimes 3}\\otimes {\\mathcal V}_{k}\\otimes {\\mathcal V}_{k+\\f12}\\big{]}\n\\,.\n\\end{equation}\nFor each vertex, $v=\\alpha$ and $v=\\beta$, we unfold the intertwiner space by recoupling the three spins $\\f12$ together into the bouquet spin $J_{v}$, as drawn on fig.\\ref{fig:6candygraph}.\nSince the 3-valent intertwiner between the spins $k$, $k+\\f12$ and $\\f12$ is unique (and given by the corresponding Clebsch-Gordan coefficients), we can put aside this bulk component of the intertwiner and focus on the boundary component of the intertwiner.\nThen, since the tensor product of three 
spins $\\f12$ decomposes as\n\\begin{equation}\n({\\mathcal V}_{{\\f12}})^{\\otimes 3}={\\mathcal V}_{\\f32}\\oplus 2\\, {\\mathcal V}_{\\f12}\\,,\n\\end{equation}\nthe intertwiner space is three-dimensional:\n\\begin{equation}\n{\\mathcal H}_{v}\n=\n{\\mathbb C}|J_{v}=\\f32\\rangle \\oplus {\\mathbb C}|J_{v}=\\f12,\\iota_{v}=0\\rangle\\oplus {\\mathbb C}|J_{v}=\\f12,\\iota_{v}=1\\rangle\n\\,.\n\\end{equation}\nThe extra index $\\iota\\in\\{0,1\\}$ when the three qubits recouple to the bouquet spin $J=\\f12$ labels the degeneracy in the decomposition of the tensor product. As depicted on fig.\\ref{fig:6candygraph}, we can simply take it as the spin recoupling for the first two qubits (boundary edges 1 and 2 for the vertex $\\alpha$ and boundary edges 4 and 5 for the vertex $\\beta$). In that case, we can extend the convention for the intertwiner basis state $|J,\\iota\\rangle$ even to $J=\\f32$, in which case the extra label is allowed to take a single value $\\iota=1$.\n\nBulk spin network basis states are then defined by the choice of the two intertwiner basis states at $v=\\alpha$ and $v=\\beta$:\n\\begin{equation}\n{\\mathcal H}_{bulk}\n=\n\\bigoplus_{\\{J_{v},\\iota_{v}\\}}{\\mathbb C}|J_{v},\\iota_{v}\\rangle\n\\,,\\quad\n\\dim {\\mathcal H}_{bulk}=3\\times 3=9\\,.\n\\end{equation}\nThe boundary Hilbert space simply consists in 6 qubits, for which we also use the bouquet spin basis:\n\\begin{equation}\n{\\mathcal H}_{\\partial}\n=\n\\big{(}{\\mathcal V}_{\\f12}\\big{)}^{\\otimes 6}\n=\n{\\mathcal H}^{\\partial}_{\\alpha}\\otimes {\\mathcal H}^{\\partial}_{\\beta}\n\\,,\n\\quad\n{\\mathcal H}^{\\partial}_{\\alpha}= {\\mathcal H}^{\\partial}_{\\beta}=\n\\big{(}{\\mathcal V}_{\\f12}\\big{)}^{\\otimes 3}\n=\n\\bigoplus_{J=\\f12,\\f32} {\\mathcal V}_{J}\\otimes{\\mathcal N}_{J}\\,,\n\\quad\\textrm{with}\\quad\n{\\mathcal N}_{J}=\n\\textrm{Inv}\\Big{[}\n{\\mathcal V}_{J}\\otimes\\big{(}{\\mathcal V}_{\\f12}\\big{)}^{\\otimes 
3}\n\\Big{]}\\,,\n\\end{equation}\n\\begin{equation}\n\\textrm{where}\\quad\n\\dim{\\mathcal N}_{\\f12}=2\n\\,,\\quad\n\\dim{\\mathcal N}_{\\f32}=1\n\\,,\\quad\n\\dim{\\mathcal H}^{\\partial}_{\\alpha}=\\dim{\\mathcal H}^{\\partial}_{\\beta}=\n2\\times2+4\\times 1 =2^{3}\\,.\n\\end{equation}\nLet us consider a general spin network state (with fixed spins as we have assumed so far) on this candy graph with six boundary edges:\n\\begin{equation}\n\\psi=\\sum_{\\{J_{v},\\iota_{v}\\}_{v=\\alpha,\\beta}}\nC^{J_{\\alpha},J_{\\beta}}_{\\iota_{\\alpha},\\iota_{\\beta}}\\,|(J_{\\alpha},\\iota_{\\alpha})\\,(J_{\\beta},\\iota_{\\beta})\\rangle\\,.\n\\end{equation}\nThe induced boundary density matrix, obtained after integration over the bulk holonomies, is:\n\\begin{equation}\n\\rho_{\\partial}[\\psi]\n=\n\\sum_{J_{\\alpha},J_{\\beta}}\n\\,\\frac{\\mathbb{I}_{J_{\\alpha}}}{2J_{\\alpha}+1}\\otimes \\frac{\\mathbb{I}_{J_{\\beta}}}{2J_{\\beta}+1}\n\\otimes\n\\rho_{J_{\\alpha},J_{\\beta}}\n\\,,\n\\end{equation}\nwhere the multiplicity matrix encodes the data about the intertwiners:\n\\begin{equation} \n\\rho_{J_{\\alpha},J_{\\beta}}\n\\,=\\,\n\\sum_{\\{\\iota_{v},\\tilde{\\iota}_{v}\\}}\nC^{J_{\\alpha},J_{\\beta}}_{\\tilde{\\iota}_{\\alpha},\\tilde{\\iota}_{\\beta}}\n\\overline{C^{J_{\\alpha},J_{\\beta}}_{\\iota_{\\alpha},\\iota_{\\beta}}}\n\\Big{|}(J_{\\alpha},\\tilde{\\iota}_{\\alpha})(J_{\\beta},\\tilde{\\iota}_{\\beta})\\Big{\\rangle}\\Big{\\langle}(J_{\\alpha},\\iota_{\\alpha})(J_{\\beta},\\iota_{\\beta})\\Big{|}\n\\quad\\in\\,\\textrm{End}\\big{[}{\\mathcal N}_{J_{\\alpha}}\\otimes {\\mathcal N}_{J_{\\beta}}\\big{]}\n\\,.\n\\end{equation}\nThis is always a rank-one matrix and does not lead to entanglement between the boundary edges (1,2,3) and (4,5,6).\n\n\\medskip\n\nIf we want to obtain non-trivial multiplicity matrices, i.e. of higher rank, we have to allow for non-trivial bulk components of the intertwiners. 
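The rank-one property noted above can be verified directly: at fixed bouquet spins the multiplicity matrix is the outer product of the coefficient vector with itself, whatever the coefficients. A quick numerical check with arbitrary random coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)

# Coefficients C^{J_alpha, J_beta}_{iota_alpha, iota_beta} at fixed bouquet spins
# J_alpha = J_beta = 1/2, where each multiplicity label iota takes two values.
c = rng.normal(size=4) + 1j * rng.normal(size=4)
c /= np.linalg.norm(c)

rho_block = np.outer(c, c.conj())  # the multiplicity matrix of that block
print(np.linalg.matrix_rank(rho_block))  # -> 1
```

No sum over hidden bulk labels appears here, which is why the block is pure no matter how the boundary components are superposed.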
To this purpose, we must consider a (slightly) more complicated bulk graph with three bulk edges connecting the two vertices. We can assume that the spins on all the edges, both on the boundary and in the bulk, are fixed to, say, $j_{1}=..=j_{9}=\\f12$. If we look at the vertex $v$, which can be $\\alpha$ or $\\beta$, the 6-valent intertwiner can be unfolded into the bouquet spin basis. As depicted on fig.\\ref{fig:6candygraph3link}, an intertwiner basis state is now labeled by the bouquet spin $J_{v}$, a multiplicity index $\\iota^{\\partial}_{v}\\in\\{0,1\\}$ for the boundary component of the intertwiner (which can be taken as the recoupled spin of the edges 1 and 2) and a multiplicity index $\\iota^{o}_{v}\\in\\{0,1\\}$ for the bulk component of the intertwiner (which can be taken as the recoupled spin of the edges 4 and 5).\n\\begin{figure}[hbt!]\n\\centering\n\\begin{tikzpicture}[scale=0.7]\n\n\\coordinate (A1) at (0,0);\n\\coordinate (A2) at (3,0);\n\n\\draw[thick,in=105,out=75,rotate=0] (A1) to node[above] {$\\f12$} (A2) node[scale=0.7] {$\\bullet$} (A2) to[out=255,in=-75] node[above] {$\\f12$} (A1) node[scale=0.7] {$\\bullet$};\n\\draw[thick] (A1) --node[above] {$\\f12$} (A2);\n\n\\draw[thick] (A1) -- ++ (125:1.5) ++ (120:0.35) node {$\\f12$};\n\\draw[thick] (A1) -- ++ (180:1.5) ++ (180:0.35) node {$\\f12$};\n\\draw[thick] (A1) -- ++ (235:1.5) ++ (235:0.35) node {$\\f12$};\n\n\\draw[thick] (A2) -- ++ (55:1.5) ++ (55:0.35) node {$\\f12$};\n\\draw[thick] (A2) --++ (0:1.5) ++ (0:0.35) node {$\\f12$};\n\\draw[thick] (A2) -- ++ (-55:1.5) ++ (-55:0.35) node {$\\f12$};\n\n\\draw[->,>=stealth,very thick] (6,0) -- (7.5,0);\n\n\\coordinate (B1) at (12.5,0);\n\\coordinate (B2) at (15,0);\n\n\\draw[thick,blue] (B1) -- node[midway,left]{$\\iota_{\\alpha}^{o}$} ++(70:1) coordinate (C);\n\\draw[thick,blue] (B2) -- node[midway,right]{$\\iota_{\\beta}^{o}$} ++(110:1) coordinate (D);\n\n\\draw[thick] (B2) to[out=255,in=-75] node[below=7] {bulk} (B1);\n\n\\draw[thick] (C) 
node[blue,scale=0.7] {$\\bullet$} to[out=255,in=-75] (D) node[blue,scale=0.7] {$\\bullet$} to[out=105,in=75] (C);\n\n\\draw[thick] (B1) ++ (-1.5,0) coordinate(B3);\n\\draw[thick] (B1) ++ (-1.5,0) -- ++ (215:1.5)++(-1,-0.75) node {boundary};\n\n\\draw[thick,blue] (B3) -- node[midway,above=1.5] {$\\iota_{\\alpha}^{\\partial}$} ++ (145:1.5) node[scale=0.7] {$\\bullet$};\n\\draw[thick] (B3) ++ (145:1.5) node[blue,scale=0.7] {$\\bullet$} -- ++ (215:1.5);\n\\draw[thick] (B3) ++ (145:1.5) node[blue,scale=0.7] {$\\bullet$} -- ++ (145:1.5);\n\n\\draw[thick,red] (B1) node[blue,scale=0.7] {$\\bullet$} -- node[red,below] {$J_{\\alpha}$} ++ (-1.5,0) node[scale=0.7,blue] {$\\bullet$} ;\n\\draw[thick,red] (B2) node[blue,scale=0.7] {$\\bullet$} -- node[red,below] {$J_{\\beta}$} ++ (1.5,0);\n\n\\draw[thick] (B2) ++ (1.5,0) coordinate(B4);\n\\draw[thick] (B2) ++ (1.5,0) node[blue,scale=0.7] {$\\bullet$} -- ++ (-35:1.5) ++(1,-0.75) node {boundary};\n\n\\draw[thick,blue] (B4) -- node[midway,above=1.5] {$\\iota_{\\beta}^{\\partial}$} ++ (35:1.5);\n\\draw[thick] (B4) ++ (35:1.5) -- ++ (35:1.5);\n\\draw[thick] (B4) ++ (35:1.5) node[blue,scale=0.7] {$\\bullet$} -- ++ (-35:1.5);\n\n\\end{tikzpicture}\n\\caption{\nThe triple link 6-qubit candy graph and intertwiner basis.}\n\\label{fig:6candygraph3link}\n\\end{figure}\n\nThe main consequence of adding bulk structure is to increase the dimension of the bulk Hilbert space:\n\\begin{equation}\n{\\mathcal H}_{bulk}={\\mathcal H}_{\\alpha}\\otimes{\\mathcal H}_{\\beta}\\,,\n\\quad\n{\\mathcal H}_{v}\n=\n\\textrm{Inv}\\big{[}({\\mathcal V}_{\\f12})^{\\otimes 6}\\big{]}\n=\n\\bigoplus_{J_{v},\\iota_{v}^{\\partial},\\iota_{v}^{o}}{\\mathbb C}|J_{v},\\iota_{v}^{\\partial},\\iota_{v}^{o}\\rangle\n\\,,\\quad\n\\dim{\\mathcal H}_{bulk}=(1+2\\times 2)^{2}=25\n\\,.\n\\end{equation}\nOn the other hand, the boundary Hilbert space is left unchanged. 
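The counting (1 + 2x2)^2 = 25 can be cross-checked against SU(2) recoupling: the space of invariants of six spin-1/2 representations is five-dimensional per vertex. A short fusion count, written for this illustration:

```python
from fractions import Fraction

def fuse(mults, j):
    """Fuse one more SU(2) irrep V_j into a decomposition {spin: multiplicity}."""
    out = {}
    for J, m in mults.items():
        Jp = abs(J - j)
        while Jp <= J + j:
            out[Jp] = out.get(Jp, 0) + m
            Jp += 1
    return out

half = Fraction(1, 2)
mults = {Fraction(0): 1}
for _ in range(6):  # fuse six spin-1/2 representations
    mults = fuse(mults, half)

dim_vertex = mults.get(Fraction(0), 0)  # number of 6-valent intertwiners
print(dim_vertex, dim_vertex ** 2)      # -> 5 25
```

The five invariants per vertex match the basis labels: one state with bouquet spin 3/2, plus two boundary times two bulk multiplicity labels at bouquet spin 1/2.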
This much higher dimensionality of the bulk Hilbert space allows for a finer structure of the bulk state and for induced entanglement on the boundary. Indeed, a generic spin network state decomposes as:\n\\begin{equation}\n\\psi=\n\\sum_{\\{J_{v},\\iota_{v}\\}_{v=\\alpha,\\beta}}\nC^{J_{\\alpha},J_{\\beta}}_{\\iota^{\\partial}_{\\alpha},\\iota_{\\alpha}^{o},\\iota^{\\partial}_{\\beta},\\iota_{\\beta}^{o}}\\,\n\\Big{|}(J_{\\alpha},\\iota^{\\partial}_{\\alpha},\\iota_{\\alpha}^{o})\\,(J_{\\beta},\\iota^{\\partial}_{\\beta},\\iota_{\\beta}^{o})\\Big{\\rangle}\n\\,.\n\\end{equation}\nCompared to the previous case of the one-loop candy graph, the bulk part of the intertwiners $\\iota^{o}_{v}$ is not seen by the boundary state. This bulk data ``hidden'' from the boundary creates entanglement between the two bouquets of boundary edges. Indeed, the induced boundary density matrix can be computed as:\n\\begin{equation}\n\\rho_{\\partial}[\\psi]\n=\n\\sum_{J_{\\alpha},J_{\\beta}}\n\\,\\frac{\\mathbb{I}_{J_{\\alpha}}}{2J_{\\alpha}+1}\\otimes \\frac{\\mathbb{I}_{J_{\\beta}}}{2J_{\\beta}+1}\n\\otimes\n\\rho_{J_{\\alpha},J_{\\beta}}\n\\,,\n\\end{equation}\nwhere the multiplicity matrix encodes the data about the intertwiners:\n\\begin{equation} \n\\rho_{J_{\\alpha},J_{\\beta}}\n\\,=\\,\n\\sum_{\\{\\iota_{v}^{\\partial},\\tilde{\\iota}_{v}^{\\partial}\\}}\n\\left(\n\\sum_{\\{\\iota_{v}^{o}\\}}\nC^{J_{\\alpha},J_{\\beta}}_{\\tilde{\\iota}^{\\partial}_{\\alpha},\\iota_{\\alpha}^{o},\\tilde{\\iota}^{\\partial}_{\\beta},\\iota_{\\beta}^{o}}\n\\overline{C^{J_{\\alpha},J_{\\beta}}_{\\iota^{\\partial}_{\\alpha},\\iota_{\\alpha}^{o},\\iota^{\\partial}_{\\beta},\\iota_{\\beta}^{o}}}\n\\right)\\,\n\\Big{|}(J_{\\alpha},\\tilde{\\iota}_{\\alpha})(J_{\\beta},\\tilde{\\iota}_{\\beta})\\Big{\\rangle}\\Big{\\langle}(J_{\\alpha},\\iota_{\\alpha})(J_{\\beta},\\iota_{\\beta})\\Big{|}\n\\quad\\in\\,\\textrm{End}\\big{[}{\\mathcal N}_{J_{\\alpha}}\\otimes {\\mathcal 
N}_{J_{\\beta}}\\big{]}\n\\,.\n\\end{equation}\nThe partial trace over the bulk components of the intertwiners leads to a higher rank of the multiplicity matrix, reflecting the induced entanglement between the boundary edges attached to $\\alpha$ and the ones attached to the vertex $\\beta$.\nA simple example is obtained by choosing both intertwiners with support exclusively on the bouquet spins $J_{\\alpha}=J_{\\beta}=\\f12$, forming a Bell-like state:\n\\begin{equation}\n\\psi_{Bell}=\n\\f1{\\sqrt{2}}\\,\\big{(}|(\\f12,0,0)(\\f12,1,1)\\rangle-|(\\f12,1,1)(\\f12,0,0)\\rangle\\big{)}\\,,\n\\end{equation}\nleading to the induced density matrix:\n\\begin{equation}\n\\rho_{\\partial}[\\psi_{Bell}]\n=\n\\frac{\\mathbb{I}_{\\f12}}{2}\\otimes \\frac{\\mathbb{I}_{\\f12}}{2}\\otimes \\rho_{{\\mathcal N}}\\,,\\quad\n\\rho_{{\\mathcal N}}\n=\n\\f12\\,\\big{(}|(\\f12,0)(\\f12,1)\\rangle\\langle(\\f12,0)(\\f12,1)|+|(\\f12,1)(\\f12,0)\\rangle\\langle(\\f12,1)(\\f12,0)|\\big{)}\\,,\n\\end{equation}\nwhere the multiplicity matrix now has rank two.\nThis perfectly illustrates how tracing out the bulk degrees of freedom leads to a mixed state on the boundary, or, in more physical terms, how correlations between bulk intertwiners lead to entanglement between boundary edges.\n\n\n\\section*{Conclusion \\& Outlook}\n\n\nIn the context of the quest for understanding the holographic nature of the gravitational interaction and of quantum gravity, it is essential to investigate the bulk-boundary relation and interplay. This goes both ways: on the one hand, we need to understand the boundary modes and dynamics induced by the bulk degrees of freedom, and on the other hand, we need to understand how boundary conditions propagate within and throughout the bulk at both classical and quantum levels. 
Such a holographic mapping between bulk and boundary theories needs to be achieved at multiple levels: the symmetry groups, the dynamics, the quantum states, the algebra of observables.\n\nHere, in order to start analyzing the potential holographic behavior of loop quantum gravity, we introduced explicit 2d boundaries to the 3d space, i.e. space-time corners. This 2d boundary admits a Hilbert space of boundary states, understood as quantum boundary conditions. Then loop quantum gravity's spin network states for the bulk geometry become what we call {\\it boundary maps}, that is, wave-functions still depending on bulk fields or degrees of freedom but valued in the boundary Hilbert space (instead of ${\\mathbb C}$ as in standard quantum mechanics). In some sense, bulk wave-functions can be interpreted as quantum circuits acting on the boundary states.\nFor spin network states, the bulk degrees of freedom are the $\\mathrm{SU}(2)$ holonomies of the Ashtekar-Barbero connection along the graph links, while the boundary states are the spin states living on the spin network open edges puncturing the 2d boundary surface.\nAs expected, the squared norm of the bulk wave-function using the scalar product of the boundary Hilbert space gives the probability distribution for the bulk holonomies.\nThe new feature is that one can trace over the bulk by integrating over the bulk holonomies and obtain a density matrix for the boundary states. This {\\it boundary density matrix} encodes all that we can know about the quantum state of geometry from probing the boundary if we do not have access to any bulk observable. For a pure bulk state, we typically obtain a mixed boundary state. 
This realizes a bulk-to-boundary coarse-graining.\n\nOur main result is the proof that any gauge-covariant boundary density matrix for an arbitrary number of boundary edges can be induced from a pure spin network state on a simple bulk graph consisting of a single vertex connecting all the boundary edges to a single bulk loop. In quantum information jargon, this universal reconstruction process actually purifies arbitrary mixed boundary states into pure bulk states.\n\n\medskip\n\nWe further analyzed the algebraic structure of induced boundary density matrices, more precisely how intertwiner correlations, i.e. entanglement between bulk volume excitations, get reflected in the boundary density matrix.\nThis should be considered as part of the larger program of bulk tomography through boundary observables in loop quantum gravity. Hopefully, the basic tools introduced here should allow a more systematic study of how far one can see into the bulk and how much an observer on the boundary can know about the bulk spin network graph.\nFor instance, we would like to study in more detail the relation between boundary edge entanglement and bulk intertwiner entanglement, and quantify in a precise and explicit manner their difference.\n\nThese questions are at the kinematical level. Our hope is more ambitious: we would like to tackle the spin network dynamics and reformulate it in light of the bulk-boundary relation. This means projecting the bulk dynamics onto the boundary and writing it in terms of boundary evolution operators. 
Loop quantum gravity's dynamics would then read in terms of completely positive maps\footnotemark{} acting on the boundary density matrices.\n\footnotetext{\nMathematically, any evolution or measurement can be written as a completely positive map (CP map) \cite{Choi:1975,Wilde:2011npi}, which admits an operator-sum representation in terms of Kraus operators $\{ E_k, \, k=1,2,\cdots \}$:\n\begin{equation} \label{eq:KrausOperators}\n{\mathcal E}(\rho)\n=\n\sum_{k} E_k \, \rho \, E_k^{\dagger}\n\,, \qquad\n\sum_{k} E_k^{\dagger} \, E_k \leq \mathbb{I}\n\,.\n\nonumber\n\end{equation}\nWhen $\sum_{k} E_k^{\dagger} \, E_k = \mathbb{I}$, the map is a completely positive trace-preserving map (CPTP map), which leaves the trace of quantum states invariant. We wish to describe boundary evolution and measurements in loop quantum gravity in terms of CPTP maps.}\nThrough this, the goal is to investigate in depth the implementation of the holographic principle in loop quantum gravity, and in parallel move forward in the study of the coarse-graining of the theory and its renormalization flow from the Planck scale to ours. For these purposes, a general formulation in terms of boundary density matrices seems better suited to the analysis of the dynamics, measurements and coarse-graining than pure spin network states.\n\n\n\section*{Acknowledgement}\nQ.C. 
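The Kraus-operator conditions in the footnote can be illustrated with a concrete single-qubit channel; we use the amplitude-damping channel as a stand-in example (this specific channel and the parameter value are our choice for illustration, not taken from the text):

```python
import numpy as np

# Kraus operators of the single-qubit amplitude-damping channel.
gamma = 0.3  # damping probability
E0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
E1 = np.array([[0, np.sqrt(gamma)], [0, 0]])
kraus = [E0, E1]

# CPTP condition: sum_k E_k^dagger E_k = identity.
completeness = sum(E.conj().T @ E for E in kraus)
print(np.allclose(completeness, np.eye(2)))  # True

def channel(rho):
    """Apply the CP map E(rho) = sum_k E_k rho E_k^dagger."""
    return sum(E @ rho @ E.conj().T for E in kraus)

rho = np.array([[0.25, 0.1], [0.1, 0.75]])  # some density matrix
out = channel(rho)
print(np.isclose(np.trace(out), 1.0))  # True: the trace is preserved
```

Since the completeness relation holds with equality, this map is CPTP: it sends density matrices to density matrices.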
is financially supported by the China Scholarship Council.\n\n\section{INTRODUCTION}\nEntanglement is a defining property of quantum theory, and plays a crucial role in a broad range of problems in physics, ranging from the black hole information paradox~\cite{page1993information} to the characterization of phases in condensed matter systems~\cite{eisert2010colloquium}. Put simply, entanglement refers to quantum correlations between different parts of a physical system that cannot be explained classically~\cite{bell1964einstein, horodeckiRMP2009}. Over the years, a wide range of \emph{entanglement measures} have been devised to quantify entanglement~\cite{pleniomeasures2007}. Prominent among those are the \emph{bipartite} entanglement measures, which involve splitting the system in two parts.\n\nFor the special case of globally pure quantum states $\ket{\psi}$ (our interest here) and a bipartition, the von Neumann entanglement entropy, also known as the entropy of entanglement or just the \emph{entanglement entropy}, is one of the simplest measures of quantum entanglement. It vanishes if and only if there is no quantum entanglement between the two parts, in which case the state must be a product state. We study the entanglement entropy in Hilbert spaces with a tensor product structure $\mathcal{H}=\mathcal{H}_A\otimes\mathcal{H}_B$\footnote{For fermionic systems, as considered later, one needs to work with a fermionic generalization of the tensor product, which also gives rise to a fermionic notion of the partial trace~\cite{szalay2021fermionic}.}. 
To compute the entanglement entropy of subsystem $A$ (with volume $V_A$) of $\ket{\psi}$, one traces out the complement subsystem $B$ (with volume $V-V_A$, where $V$ is the total volume) to obtain the mixed density matrix $\hat \rho_A=\mathrm{Tr}_{\mathcal{H}_B}\ket{\psi}\bra{\psi}$. The entanglement entropy $S_A$ of subsystem $A$ is then\n\begin{equation}\label{Neumann.entropy}\n S_A=-\mathrm{Tr}(\hat \rho_A\ln\hat \rho_A),\n\end{equation}\nwhile the $n$th R\'enyi entropy is defined as\n\begin{equation}\n S_A^{(n)} = \frac{1}{1-n} \ln[\mathrm{Tr}(\hat \rho_A^n)]\,.\n\end{equation}\nThe second-order R\'enyi entropy $S_A^{(2)}$ has already been measured in experiments with ultracold atoms in optical lattices~\cite{islam2015measuring, kaufman2016quantum}.\n\nWe stress that the focus of this tutorial is on pure quantum states. Quantifying entanglement in globally mixed states is more challenging. In particular, the von Neumann and R\'enyi entanglement entropies are not entanglement measures for globally mixed states. Several of the bipartite entanglement measures defined for mixed states ({\it e.g.},\ distillable entanglement, entanglement cost, entanglement of formation, relative entropy of entanglement, and squashed entanglement) reduce to the entanglement entropy when evaluated on pure states~\cite{pleniomeasures2007}.\n\t\n\subsection{Ground-state entanglement}\n\t\nIn general one is interested in understanding the behavior of measures of entanglement in physical systems, and in determining what such a behavior can tell us about the physical properties of the system. Much progress in this direction has been achieved in the context of many-body ground states of local Hamiltonians, for which a wide range of theoretical approaches are available~\cite{amico_fazio_08, Peschel2009, calabrese_cardy_09, eisert2010colloquium}. 
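For a pure state, both entropies follow directly from the Schmidt coefficients of the bipartition; a minimal numerical sketch (the helper name and the two-qubit example are our choices for illustration):

```python
import numpy as np

def entanglement_entropies(psi, dA, dB):
    """von Neumann and second Renyi entropy of subsystem A for a pure state.

    The state vector psi (length dA*dB) is reshaped into a dA x dB matrix;
    its squared singular values are the eigenvalues of rho_A.
    """
    p = np.linalg.svd(psi.reshape(dA, dB), compute_uv=False) ** 2
    p = p[p > 1e-12]                       # discard numerical zeros
    S = float(-(p * np.log(p)).sum())      # von Neumann entropy
    S2 = float(-np.log((p ** 2).sum()))    # second Renyi entropy
    return S, S2

# Bell state: both entropies equal ln 2.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(entanglement_entropies(bell, 2, 2))
```

For a partially entangled state, e.g., $\cos\theta\,|00\rangle+\sin\theta\,|11\rangle$ with $\theta\neq\pi/4$, one finds $S_A^{(2)}<S_A<\ln 2$, consistent with the general ordering of the R\'enyi entropies.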
Such ground states usually exhibit a leading term of the entanglement entropy that scales with the area, or with the logarithm of the volume, of the subsystem. Identifying and understanding universal properties of the entanglement entropy in ground states of local Hamiltonians has been a central goal~\cite{audenaert_eisert_02, osterloh_amico_2002, osborne_nielsen_02, vidal_latorre_03}. \n\t\nIn one-dimensional systems of spinless fermions or spin-$\tfrac{1}{2}$ degrees of freedom, the leading (in the volume $V_A$) term in the entanglement entropy has been found to distinguish ground states of critical systems from those of noncritical ones~\cite{vidal_latorre_03, latorre_rico_04, hastings_07}. In the former the leading term exhibits a logarithmic scaling with the volume (when described by conformal field theory, the central charge is the prefactor of the logarithm~\cite{vidal_latorre_03, latorre_rico_04, calabrese_cardy_04}), while in noncritical ground states the leading term is a constant (which, in one dimension, reflects an area-law scaling). Subleading terms have also been studied, especially in the context of states that are physically distinct but exhibit the same leading entanglement entropy scaling. An example, in the context of quadratic Hamiltonians in two dimensions, is the comparison between ground states that are critical with a pointlike Fermi surface and noncritical ground states, both of which exhibit a leading area-law entanglement entropy~\cite{wolf_06, gioev_klich_06, barthel_chung_06, li_ding_06, cramer_eisert_07}. Remarkably, the subleading term in the former scales logarithmically with $V_A$ while it is constant for noncritical ground states~\cite{ding_brayali_08}. 
Also, in two-dimensional systems, critical states described by conformal field theory~\cite{fradkin_moore_06} and states with a spontaneously broken continuous symmetry~\cite{kallin_hastings_11, metlitski_grover_15} have been found to exhibit a universal subleading logarithmic term.\n\t\n\subsection{Excited-state entanglement}\n\t\nIn recent years, interest in understanding the far-from-equilibrium dynamics of (nearly) isolated quantum systems and the description of observables after equilibration~\cite{polkovnikov2011colloquium, d2016quantum, gogolin2016equilibration} have motivated many studies of the entanglement properties of highly excited eigenstates of quantum many-body systems (mostly in the context of lattice systems)~\cite{mejia_05, alba09, Deutsch_2010, santos_12, deutsch_li_13, storms_singh_14, moelter_barthel_14, lai_yang_15, beugeling_andreanov_15, yang_chamon_15, nandy_sen_16, vidmar2017entanglement, vidmar2017entanglement2, zhang_vidmar_18, dymarsky2018subsystem, garrisson_grover_18, nakagawa_watanabe_18, vidmar2018volume, huang_19, hackl2019average, lu_grover_19, murthy_19, jafarizadeh_rajabpour_19, wilming_goihl_19, leblond_mallayya_19, faiez_20a, modak_nag_20, kaneko_iyoda_20, bhakuni_sharma_20, faiez_20b, lydzba2020entanglement, lydzba2021entanglement, haque_mcclarty_20, miao_barthel_20}. Because of the limited suite of tools available to study entanglement properties of highly excited eigenstates of model Hamiltonians, most of the results reported in those works were obtained using exact diagonalization techniques, which are limited to relatively small system sizes.\n\t\nIn contrast to the ground states, typical highly excited many-body eigenstates of local Hamiltonians have a leading term of the entanglement entropy that scales with the volume of the subsystem. 
Also, in contrast to the ground states, the leading volume-law term exhibits a fundamentally different behavior depending on whether the Hamiltonian is nonintegrable (the generic case for physical Hamiltonians) or integrable. In the former case the coefficient has been found to be constant, while in the latter case it depends on the ratio between the volume of the subsystem and the volume of the entire system.\n\t\nMany-body systems that are integrable are special as they have an extensive number of local conserved quantities~\cite{sutherland_book_04}. As a result, their equilibrium properties can in many instances be calculated analytically, and their near-equilibrium properties can be ``anomalous,'' e.g., they can exhibit transport without dissipation (ballistic transport). Also, isolated integrable systems fail to thermalize if taken far from equilibrium. Interested readers can learn about the effects of quantum integrability in the collection of reviews in Ref.~\cite{calabrese_essler_review_16}. \n\t\nThere is a wide range of quadratic Hamiltonians in arbitrary dimensions (which include a wide range of noninteracting models), e.g., translationally invariant quadratic Hamiltonians, that can be seen as a special class of integrable models. In this class, the nondegenerate many-body eigenstates are Gaussian states, while degenerate eigenstates can always be chosen to be Gaussian. This means that those many-body eigenstates are fully characterized by their one-body density matrix or their covariance matrix. The entanglement entropy of highly excited eigenstates of some of those ``integrable'' quadratic Hamiltonians was studied in Refs.~\cite{storms_singh_14, vidmar2017entanglement, zhang_vidmar_18, hackl2019average, jafarizadeh_rajabpour_19}. 
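Because Gaussian eigenstates are fully characterized by their one-body (correlation) matrix, their entanglement entropy follows from the eigenvalues of that matrix restricted to the subsystem. A minimal sketch for eigenstates of an open tight-binding chain (the specific chain, boundary conditions, and helper names are our choices for illustration):

```python
import numpy as np

def correlation_matrix(V, occupied):
    """One-body correlation matrix C_ij = <c_i^dag c_j> of a free-fermion
    eigenstate of an open tight-binding chain with V sites."""
    H = -(np.diag(np.ones(V - 1), 1) + np.diag(np.ones(V - 1), -1))
    _, U = np.linalg.eigh(H)
    phi = U[:, occupied]            # occupied single-particle orbitals
    return phi @ phi.conj().T

def gaussian_entropy(C, sites):
    """Entanglement entropy from the eigenvalues n_k of the correlation
    matrix restricted to `sites`: S = -sum n ln n + (1-n) ln(1-n)."""
    n = np.linalg.eigvalsh(C[np.ix_(sites, sites)])
    n = n[(n > 1e-12) & (n < 1 - 1e-12)]
    return float(-(n * np.log(n) + (1 - n) * np.log(1 - n)).sum())

C = correlation_matrix(8, [0, 2, 4, 5])       # some excited eigenstate
SA = gaussian_entropy(C, list(range(4)))      # subsystem A: first half
SB = gaussian_entropy(C, list(range(4, 8)))   # complement B
print(SA, SB)  # equal, since the global state is pure
```

The equality $S_A=S_B$ for the two complementary subsystems is a built-in consistency check: the global eigenstate is pure.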
Other quadratic Hamiltonians in arbitrary dimensions that will be of interest to us here are quadratic Hamiltonians in which the single-particle sector exhibits quantum chaos (to be defined in the next subsections). We refer to such Hamiltonians as quantum-chaotic quadratic Hamiltonians. The entanglement entropy of highly excited eigenstates of quantum-chaotic quadratic Hamiltonians (on a lattice) was studied in Refs.~\\cite{lydzba2020entanglement, lydzba2021entanglement}. It was found to exhibit a typical leading volume-law term that is qualitatively similar to that found in eigenstates of integrable quadratic Hamiltonians (in which the single-particle sector does not display quantum chaos), such as translationally invariant quadratic Hamiltonians (on a lattice)~\\cite{vidmar2017entanglement, hackl2019average}. \n\t\nIn the presence of interactions, many-body integrable systems mostly exist in one dimension~\\cite{cazalilla_citro_review_11, guan2013fermi}. They come in two ``flavors,'' Hamiltonians that can be mapped onto noninteracting ones (a smaller class), and Hamiltonians that cannot be mapped onto noninteracting ones. Remarkably, both ``flavors'' have been found to describe pioneering experiments with ultracold quantum gases in one dimension~\\cite{moritz_stoferle_03, kinoshita_wenger_04, paredes_widera_04, kinoshita_wenger_05, kinoshita_wenger_06, amerongen_es_08, gring_kuhnert_12, fukuhara2013microscopic, pagano2014one, langen_erne_15, Bloch2016, tang_kao_18, schemmer2019generalized, wilson_malvania_20, jepsen2020spin, lev2020, malvania_zhang_21}. 
The entanglement entropy of highly excited eigenstates of lattice Hamiltonians that can be mapped onto noninteracting ones (which exhibit the same leading volume-law terms as their noninteracting counterparts) was studied in Refs.~\cite{vidmar2018volume, hackl2019average}, while the entanglement entropy of highly excited eigenstates of a Hamiltonian (the spin-$\frac{1}{2}$ XXZ chain) that cannot be mapped onto a noninteracting one was studied in Ref.~\cite{leblond_mallayya_19}. Remarkably, in all the quadratic and integrable systems studied so far, the coefficient of the leading volume-law term of typical eigenstates has been found to depend on the ratio between the volume of the subsystem and the volume of the entire system.\n\t\nAnalytical progress in understanding the previously mentioned numerical results has been achieved in some special cases. One such case is translationally invariant quadratic Hamiltonians, or models that can be mapped onto them in one dimension~\cite{cazalilla_citro_review_11}, for which tight bounds were obtained for the leading (volume-law) term in the average entanglement entropy~\cite{vidmar2017entanglement, hackl2019average}, and some understanding was gained about subleading corrections~\cite{vidmar2018volume}. This was possible thanks to the Gaussian nature of the eigenstates. Another case is nonintegrable models under the assumption that their eigenstates exhibit eigenstate thermalization~\cite{Deutsch_2010, dymarsky2018subsystem, garrisson_grover_18, murthy_19}.\n\t\n\subsection{Random matrix theory in physics}\n\t\nRandom matrix theory has provided a more systematic approach to gaining an analytical understanding of the entanglement properties of many-body eigenstates in nonintegrable models~\cite{yang_chamon_15, vidmar2017entanglement2, liu_chen_18, huang_gu_19, pengfei_chunxiao_20, morampudi_chandran_20, haque_mcclarty_20}. 
Such an approach is justified by the fact that many studies (see, {\it e.g.},\ Ref.~\cite{d2016quantum} for a review) have shown that nonintegrable models exhibit ``quantum chaos.'' By quantum chaos what is meant is that statistical properties of highly excited eigenstates of such models, {\it e.g.},\ level spacing distributions, are described by the Wigner surmise~\cite{d2016quantum}. This was conjectured by Bohigas, Giannoni, and Schmit (BGS)~\cite{bohigas_giannoni_84} for quantum systems with a classical counterpart, in which case ``quantum chaos'' usually occurs when the classical counterparts are $K$-chaotic, where $K$ stands for Kolmogorov, and it is the class of systems that exhibit the highest degree of chaos. Remarkably, even statistical properties of eigenvectors such as the ratio between the variance of the diagonal and the off-diagonal matrix elements of Hermitian operators have been shown to agree with random matrix theory predictions~\cite{mondaini_rigol_17, jansen_stolpp_19, richter_dymarsky_20, schoenle_jansen_21}. Recently, two of us (M.R. and L.V., in collaboration with P. \L yd\.{z}ba) used random matrix theory in the context of quantum-chaotic quadratic Hamiltonians to obtain a closed-form expression that describes the average entanglement entropy of highly excited eigenstates of quadratic models whose single-particle spectrum exhibits quantum chaos, such as the three-dimensional Anderson model~\cite{lydzba2020entanglement, lydzba2021entanglement}.\n\t\nThe application of random matrix theory to many-body systems goes back to works by Wigner~\cite{wigner_55, wigner_57, Wigner-surmise, wigner_58} as well as Landau and Smorodinsky~\cite{landau1955} in the 1950s, who aimed at finding a statistical theory that described the excitation spectra in nuclei for elastic scattering processes. 
Their novel idea was that a sufficiently complicated operator, like the Hamiltonian or the lattice Dirac operator, can be replaced by a random matrix (whose entries are, preferably, Gaussian distributed as those are easier to deal with analytically) with the appropriate symmetries. For this to hold, it is not important that all matrix entries of the physical operator are nonzero. In condensed matter models~\cite{d2016quantum}, as well as in lattice QCD~\cite{Berbenni-Bitsch-1998, damgaard-2000, Farchioni-2000, Deuzeman-2011, Kieburg:2017rrk}, numerical evidence has shown that very sparse matrices can also exhibit spectral characteristics of a random matrix with Gaussian distributed entries. It is the concept of universality that has made random matrices so versatile. Like in the central limit theorem, in which an infinite sum of independently and identically distributed random variables leads to a Gaussian random variable under very mild conditions, it happens that, for many spectral quantities, it does not matter how the random matrix is actually distributed. \n\t\nOver the years, random matrix theory has found many more applications in physics; for example, the local level density about Dirac points (also known as hard edges in random matrix theory) has been used to classify operators such as Hamiltonians and Dirac operators, and to discern global symmetries of a system. By global symmetries, it is meant those that are described by a linear involution (operators that square to unity) in terms of unitary and antiunitary operators. Well-known examples in physics are time reversal, parity, charge conjugation, and chirality. 
Global symmetries play a central role when classifying systems in the context of quantum chaos~\\cite{Dyson1962}, in superconductors and topological insulators~\\cite{1997PhRvB..55.1142A, 2008PhRvB..78s5125S}, in quantum-chromodynamics-like theories in the continuum and on a lattice~\\cite{Verbaarschot:1994qf, Kieburg:2017rrk}, and in Sachdev-Ye-Kitaev-models (SYK)~\\cite{Garcia21, Kanazawa:2017dpd}.\n\t\n\\subsection{Local spectral statistics}\\label{sec:localspec}\n\t\nThere are two spectral scales that are usually discussed in the context of random matrix theory, and to which different kinds of universalities apply. Those are the local and the global spectral scales.\n\t\nThe microscopic or local spectral scale is given by the local mean level spacing where the fluctuations of the individual eigenvalues are resolved. This scale is often of more physical interest as it analyses the level repulsion of eigenvalues that are very close to each other. Such a level repulsion is usually algebraic for very small distances $s$. Namely, the level spacing distribution $p(s)$, which is the distribution of the distance of two consecutive eigenvalues, is of the form $s^\\beta$ (where $\\beta$ is the Dyson index) for small distances. \n\t\nWhile the symmetry of a Hamiltonian, such as time reversal, chirality, or charge conjugation, is not very important for the global spectral scale, it is very important for the local spectral statistics as it influences the value of $\\beta$. Wigner~\\cite{Wigner-surmise} derived the distribution for two-level Gaussian random matrices with Dyson index $\\beta=1$, which was soon generalized to $\\beta=2,4$,\n\\begin{equation}\n\tp(s)=2\\frac{(\\Gamma[(\\beta+2)\/2])^{\\beta+1}}{(\\Gamma[(\\beta+1)\/2])^{\\beta+2}}s^\\beta\\exp\\left[-\\left(\\frac{\\Gamma[(\\beta+2)\/2]}{\\Gamma[(\\beta+1)\/2]}\\right)^2s^2\\right]\n\\end{equation}\nwith the gamma function $\\Gamma[x]$. This distribution is nowadays called Wigner's surmise. 
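Wigner's surmise is exact for $2\times 2$ Gaussian matrices, which makes it easy to verify numerically. A minimal Monte Carlo sketch for the $\beta=2$ case (the sample size and the second-moment check are our choices):

```python
import numpy as np

rng = np.random.default_rng(0)
num = 200_000

# Sample 2x2 GUE matrices H = (A + A^dag)/2 with complex Gaussian A.
A = rng.normal(size=(num, 2, 2)) + 1j * rng.normal(size=(num, 2, 2))
H = (A + A.conj().transpose(0, 2, 1)) / 2

# Level spacings, unfolded so that the mean spacing is one.
ev = np.linalg.eigvalsh(H)
s = ev[:, 1] - ev[:, 0]
s /= s.mean()

# For beta = 2 the surmise p(s) = (32/pi^2) s^2 exp(-4 s^2/pi)
# predicts <s> = 1 and <s^2> = 3*pi/8 ~ 1.178.
print(s.mean(), (s ** 2).mean())
```

The sampled second moment agrees with the analytic value $\langle s^2\rangle=3\pi/8$ of the $\beta=2$ surmise to within statistical error.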
The corresponding random matrices are known as the Gaussian orthogonal ensemble (GOE; $\\beta=1$), the Gaussian unitary ensemble (GUE; $\\beta=2$), and the Gaussian symplectic ensemble (GSE; $\\beta=4$). Those are usually compared with the level spacing distribution of independently distributed eigenvalues ($\\beta=0$), which gives the Poisson distribution\n\\begin{eqnarray}\n\tp(s)=e^{-s},\n\\end{eqnarray}\nand with the level spacing distribution of the one-dimensional quantum harmonic oscillator (also known as the picket fence statistics), which is a simple Dirac delta function\n\\begin{eqnarray}\n\tp(s)=\\delta(1-s).\n\\end{eqnarray}\nAll five benchmark distributions are shown in Fig.~\\ref{fig:level-spacing}(a).\n\t\n\\begin{figure*}[!t]\n\t\\centering\n\t\\includegraphics[width=\\linewidth]{figure01}\n\t\\caption{(a) The level spacing distributions of the Poisson distribution (solid line; $\\beta=0$), the Wigner surmise of the GOE (dotted line; $\\beta=1$), of the GUE (dashed line; $\\beta=2$), of the GSE (dash-dot line; $\\beta=4$), and the picket fence statistics (vertical line; $\\beta=\\infty$). (b) Three Monte Carlo simulations (symbols) of the spacing between eigenvalues $(50\\cdot M)$ and $(50\\cdot M+1)$ of the direct sum of $M$ GUEs with a matrix dimension $N=100$ (in total, the matrix dimension is $100^M\\times 100^M$), compared to the Poisson distribution (solid line), and the Wigner surmise of the GUE (dashed line). The ensemble size is $10^5$ such that the statistical error is about $1\\%$. The bin size is about $0.1$, but varies as the unfolding slightly changes their actual value.}\n\t\\label{fig:level-spacing}\n\\end{figure*}\n\t\nThe use of the Wigner surmise as a diagnostic of quantum chaos and integrability followed fundamental conjectures by BGS~\\cite{bohigas_giannoni_84} (mentioned before) and Berry and Tabor~\\cite{berry_tabor_77}, respectively. 
The latter states that, for an integrable bounded system with more than two dimensions and incommensurable frequencies of the corresponding tori, the spectrum should follow the Poisson statistics. However, both conjectures have to be understood with the following care, as the eigenvalue spectrum must be prepared appropriately.\n\begin{itemize}\n\t\item[(i)] The spectrum must be split into subspectra with fixed ``good'' quantum numbers such as the spin, parity, and conserved charges. This requires knowledge of all the symmetries of the model. This step must be taken since a direct sum of independent GUE matrices can yield a level spacing distribution that resembles the Poisson statistics; see Fig.~\ref{fig:level-spacing}(b). \n\t\item[(ii)] One needs to unfold the spectra, meaning that the distance between consecutive eigenvalues must on average be equal to one. This second step is crucial, as only then are the level spacing distributions comparable and can universal statistics be revealed. The eigenvalue spectrum of an irregularly shaped drum, a complex molecule, and that of a heavy nucleus have completely different energy scales. After the unfolding of their spectra, these scales are removed and the spectra show common behavior. Yet, the procedure of unfolding is far from trivial for empirical spectra. There are other means, such as the study of the ratio between the two spacings of three consecutive eigenvalues~\cite{Oganesyan-2007}. But this observable also has its limitations, as this kind of ``automatic unfolding'' only works in the bulk of the spectrum. 
It fails at spectral edges and other critical points in the spectrum.\n\\end{itemize}\n\t\nIn the context of the Wigner surmise, we should stress that even though the statistics of the spectral fluctuations are well described at the level of the mean level spacing~\\cite{PhysRev.120.1698, FRENCH19715, BOHIGAS1971383} (even beyond the context of many-body systems; see, {\\it e.g.},\\ the reviews and books~\\cite{Guhr1998, mehta2004, akemann2011, haake2019} and the references therein), it was soon realized that there are statistical properties of the spectral fluctuations of many-body Hamiltonians that cannot be described using full random matrices; see Refs.~\\cite{BOHIGAS1971261, FRENCH1970449, monfrench1975, Benet:2000cy}. This is due to the fact that usually only one-, two- and maybe up to four-body interactions represent the actual physical situation. Random matrices that reflect these sparse interactions are called embedded random matrix ensembles~\\cite{monfrench1975, RevModPhys.53.385, Guhr1998, Kota2001, Kota2014}. In the past decades, they have experienced a revival due to studies of the SYK model~\\cite{1993PhRvL..70.3339S, 2016PhRvD..94j6002M, Garcia-Garcia:2016mno, Garcia-Garcia:2017pzl, Garcia-Garcia:2018fns, 2014MPAG...17..441E}, and two-body interactions~\\cite{Vyas:2018aal, 2017AIPC.1912b0003B, 2018tqrf.book..457S}. A full understanding of how these additional tensor structures, which arise naturally in quantum many-body systems, impact the entanglement of the energy eigenstates is currently missing.\n\t\n\\subsection{Global spectral statistics and eigenvector statistics}\n\t\nThe second scale is the macroscopic or global spectral scale, which is usually defined as the average distance between the largest and the smallest eigenvalues. For this scale, Wigner~\\cite{wigner_55, wigner_58} derived the famous Wigner semicircle, which describes the level density of a Gaussian distributed real symmetric matrix. 
He was also the first to show, again under mild conditions, that the Gaussian distribution of the independent matrix entries can be replaced by an arbitrary distribution, and nevertheless one still obtains the Wigner semicircle. One important feature of this kind of universality is that it does not depend on the symmetries of the operators. For instance, whether the matrix is real symmetric, Hermitian, or Hermitian self-dual has no impact on the level density, which is in all those cases a Wigner semicircle~\\cite{Forrester_2010}. The global spectral scale also plays a crucial role in time series analysis~\\cite{Giraud2015} and telecommunications~\\cite{Couillet2011}, where instead of the Wigner semicircle the Mar\\v{c}enko-Pastur distribution~\\cite{marcenko} describes the level density. \n\t\nThe global scale is always important when considering the so-called linear spectral statistics, meaning an observable that is of the form $\\sum_{j=1}^Nf(\\lambda_j)$, where the $\\lambda_j$ are the eigenvalues of the random matrix. This is the situation that we encounter when computing the entanglement entropy, where the $\\lambda_j$ are the eigenvalues of the density matrix; cf. Eq.~\\eqref{Neumann.entropy}. Therefore, we expect that the leading terms in the entanglement entropy are insensitive to the Dyson index $\\beta$, so that the entanglement entropy can serve as an excellent diagnostic for integrable or chaotic behavior. \n\t\nA related diagnostic for the amplitude $A$ of vector components of eigenstates is the Porter-Thomas distribution~\\cite{PhysRev.104.483}, which is used to decide whether a state is localized or delocalized. The Porter-Thomas distribution is a $\\chi^2$ distribution,\n\\begin{equation}\n\t\\mathcal{I}(A)= \\left(\\frac{\\beta N}{2}\\right)^{\\beta\/2}\\frac{A^{\\beta\/2-1}}{\\Gamma[\\beta\/2]}\\exp\\left[-\\frac{\\beta N}{2}A\\right] ,\n\\end{equation}\nwhere the normalization of the first moment is chosen to be equal to $1\/N$. 
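The Porter-Thomas prediction for $\beta=2$ can be checked by drawing Haar-random vectors, i.e., normalized vectors of complex Gaussian entries: the rescaled amplitudes $NA=N|c_i|^2$ are then, for large $N$, exponentially distributed with unit mean. A small sketch (the vector dimension and sample count are our choices):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1000

# Haar-random states: normalized vectors of complex Gaussian entries.
c = rng.normal(size=(200, N)) + 1j * rng.normal(size=(200, N))
c /= np.linalg.norm(c, axis=1, keepdims=True)

# Rescaled amplitudes N*A with A = |c_i|^2; for beta = 2 the Porter-Thomas
# distribution is exponential, so <N A> = 1 and P(N A > 1) ~ exp(-1).
x = N * np.abs(c) ** 2
print(x.mean())       # 1 (exact, by normalization)
print((x > 1).mean()) # ~ 0.368
```

The unit mean holds exactly by normalization, while the tail fraction approaches $e^{-1}$ up to $O(1/N)$ corrections.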
Note that in the quaternion case one defines the amplitude as the squared modulus of a quaternion number, i.e., as a sum of four squared real components, in analogy with the squared modulus of a complex number (which is the sum of the squares of the real and imaginary parts). Actually, the application of random matrices for computing the entanglement entropy is based on this idea. We can only replace a generic eigenstate by a Haar-distributed vector on a sphere after assuming that the state is delocalized. Unlike the Porter-Thomas distribution, as previously mentioned, the leading terms in the entanglement entropy are expected to be independent of the Dyson index $\beta$ (which has yet to be proved).\n\t\nThe relation between certain quantum informational questions and random matrix theory also has a long history, and the techniques developed are diverse (see, e.g., the review~\cite{2016JMP....57a5215C} and Chapter 37 of Ref.~\cite{akemann2011}). Questions about generic distributions and the natural generation of random quantum states have been a focus of attention~\cite{Hall:1998mh, 2004JPhA...37.8457S}. The answers to those questions are still debated, as there are several measures on the set of quantum states and each has its benefits and flaws; for instance, two of those are based on the Hilbert-Schmidt metric and the Bures metric~\cite{Bures1969, Hall:1998mh}. Those measures define some kind of ``uniform distribution'' on the set of all quantum states and, actually, generate random matrix ensembles that have been studied to some extent~\cite{Hall:1998mh, 2001JPhA...34.7111Z, 2004JPhA...37.8457S, 2003JPhA...3610083S, 2010JPhA...43e5302O, 2016CMaPh.342..151F, wei2021quantum}. 
In this tutorial, we encounter one of the aforementioned ensembles, namely, the one related to the Hilbert-Schmidt metric, which naturally arises from a group action so that the states are Haar distributed according to this group action.\n\t\n\subsection{Typicality and entanglement}\n\t\nAn important question that one can ask, which relates to the latest observations made in the context of random matrix ensembles, is what are the entanglement properties of typical pure quantum states. This was the earliest question to be addressed. Following work by Lubkin~\cite{lubkin1978entropy} and Lloyd and Pagels~\cite{lloyd1988complexity}, Page~\cite{page1993average} obtained a closed analytical formula for the average entanglement entropy (over all pure quantum states) as a function of the system and subsystem Hilbert space dimensions. This formula was rigorously proven later in Refs.~\cite{foong1994proof, sanchez1995simple, Sen:1996ph}. In lattice systems in which the dimension of the Hilbert space per site is finite, one can show that Page's formula results in a ``volume-law'' behavior, {\it i.e.},\ the entanglement entropy scales linearly in the volume $V_A$ of the subsystem, $S_A\propto V_A$ (for a large system of volume $V$ and a subsystem with $V_A\leq V\/2$). In terms of the Hilbert space dimensions $d_A$ and $d_B$ of subsystems $A$ and $B$, Page's formula reads\n\begin{equation}\label{Page}\n\t\braket{S_A}=\n\t\begin{cases}\n\t\t\Psi(d_Ad_B+1)-\Psi(d_B+1)-\frac{d_A-1}{2d_B} & d_A\leq d_B\\[.5em]\n\t\t\Psi(d_Ad_B+1)-\Psi(d_A+1)-\frac{d_B-1}{2d_A} & d_A> d_B\n\t\end{cases}\n\end{equation}\nwhere $\Psi(x)=\Gamma'(x)\/\Gamma(x)$ is the digamma function. 
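Since $\Psi(m+1)=-\gamma+\sum_{k=1}^{m}1/k$ for integer arguments, the digamma functions in Page's formula reduce to harmonic numbers, and the average can be checked directly against a Monte Carlo average over Haar-random pure states. A sketch for $d_A\leq d_B$ (the dimensions and sample size are our choices):

```python
import numpy as np

def page_average(dA, dB):
    """Page's average entanglement entropy for dA <= dB, written with
    harmonic numbers: Psi(dA*dB+1) - Psi(dB+1) = H_{dA*dB} - H_{dB}."""
    H = lambda m: sum(1.0 / k for k in range(1, m + 1))
    return H(dA * dB) - H(dB) - (dA - 1) / (2 * dB)

def entropy_of_random_state(dA, dB, rng):
    """Entanglement entropy of a Haar-random pure state of dimension dA*dB."""
    psi = rng.normal(size=dA * dB) + 1j * rng.normal(size=dA * dB)
    psi /= np.linalg.norm(psi)
    p = np.linalg.svd(psi.reshape(dA, dB), compute_uv=False) ** 2
    p = p[p > 1e-14]
    return -(p * np.log(p)).sum()

rng = np.random.default_rng(3)
dA, dB = 4, 8
mc = np.mean([entropy_of_random_state(dA, dB, rng) for _ in range(2000)])
print(page_average(dA, dB), mc)  # the two averages agree closely
```

For two qubits ($d_A=d_B=2$) this reproduces the well-known value $\braket{S_A}=1/3$.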
In the thermodynamic limit $V\to\infty$, with $V_A,V-V_A\to\infty$ such that the subsystem fraction\n\begin{equation}\n\tf=\frac{V_A}{V}\n\end{equation}\nis fixed, Page's formula~\eqref{Page} reduces to\n\begin{equation}\n\t\braket{S_A}\!=\!\n\tf\,V\ln 2-2^{-|1-2f|V-1}+O(2^{-V})\,,\n\t\label{eq:Page-therm}\n\end{equation}\nwhere we will be careful to consistently use Landau's ``big $O$'' and ``little $o$'' notation in this manuscript, such that\n\begin{align}\n\tf(V)&=O(V^n) & \Longleftrightarrow && \lim_{V\to\infty}\frac{f(V)}{V^n}&=c\neq 0\,,\\\n\t& & \text{and}&&\nonumber \\\n\tf(V)&=o(V^n) & \Longleftrightarrow &&\lim_{V\to\infty}\frac{f(V)}{V^n}&=0\,.\n\end{align}\n\t\nThe first term in Eq.~\eqref{eq:Page-therm} is a volume law: the average entanglement entropy scales as the minimum between the volumes $V_A=f V$ and $V_B=(1-f)V$. For $f\neq \frac{1}{2}$, the second term is an exponentially small correction. In fact, at fixed $f$ and in the limit $V\to\infty$, the second term $-2^{-|1-2f|V-1}$ becomes $-\frac{1}{2}\delta_{f,\frac{1}{2}}$. We can also resolve precisely how this Kronecker delta arises in the neighborhood of $f=\frac{1}{2}$. As it may be difficult to reach exactly $f=\frac{1}{2}$ in physical experiments, the more precise statement is that we see the correction whenever $f=\frac{1}{2}+O(1\/V)$. Formally, we can thus resolve the correction term exactly as $-2^{-|\Lambda_f|-1}$ for $f=\frac{1}{2}+\Lambda_f\/V$, as visualized in Fig.~\ref{fig:Page-discon}.\n\t\n\begin{figure*}[!t]\n\t\centering \n\t\includegraphics[width=\linewidth]{figure03}\n\t\caption{The average entanglement entropy $\braket{S_A}=a V-b+o(1)$ as a function of the subsystem fraction $f=V_A\/V$ for large $V$. (a) Leading-order behavior, also known as the Page curve. (b) The constant correction, which is given by a Kronecker delta $-\frac{1}{2}\delta_{f,\frac{1}{2}}$. 
This Kronecker delta is resolved in (c) by carrying out a double scaling limit $V\to\infty$ with $f=\frac{V_A}{V}=\frac{1}{2}+\frac{\Lambda_f}{V}$.}\n\t\label{fig:Page-discon}\n\end{figure*}\n\t\nWe find similar Kronecker delta contributions $\delta_{f,\frac{1}{2}}$ in subsequent sections where we discuss the typical entropy at fixed particle number and in the setting of Gaussian states. These terms highlight nonanalyticities in the entanglement entropy that can be resolved by double scaling limits. Those ``critical points'' occur at symmetry points and along axes. In the present case, the critical point lies at $d_A=d_B$, reflecting whether the density operator $\hat\rho_A=WW^\dagger$ or $\hat\rho_B=W^\dagger W$ contains generic zero eigenvalues. \n\t\nThe variance of the entanglement entropy of a random pure state is given by the exact formula (for $d_A\leq d_B$) \cite{vivo_pato_16,wei2017proof,bianchi2019typical}\n\begin{align}\n\t(\Delta S_A)^2=&\; \textstyle \frac{d_A+d_B}{d_A d_B+1}\Psi'(d_B+1)-\Psi'(d_A d_B+1)\nonumber\\[.5em]\n\t&\textstyle-\frac{(d_A-1)(d_A+2d_B-1)}{4d_B^2(d_A d_B+1)}\,,\n\t\label{eq:PageDeltaS}\n\end{align}\nwhere $\Psi'(x)=\frac{d\Psi(x)}{dx}=\frac{d^2[\ln{\Gamma(x)}]}{dx^2}$ is the first derivative of the digamma function. It can be derived using techniques similar to those used to compute the average. In particular, the fixed trace condition can be separated as before via the trick of the Fourier-Laplace transform, such that one is left with an average over the complex Wishart-Laguerre ensemble. The derivation is tedious and lengthy because one has to deal with double sums, which can be computed as described in Appendix~\ref{app:Gaussfixednumber}.\footnote{Our computation of the variance for Gaussian states at fixed particle number presented in Appendix~\ref{app:Gaussfixednumber} shows how to deal with the double sums, and can also be used in the general setting. 
Basically, one needs to replace the Jacobi polynomials and their corresponding weight by the Laguerre polynomials and the weight function $x^{d_B-d_A}e^{-x}$.}\n\t\nIn the thermodynamic limit discussed above, Eq.~\eqref{eq:PageDeltaS} reduces to\n\t\begin{equation}\n\t\t(\Delta S_A)^2=\n\t\t\big(\tfrac{1}{2}-\tfrac{1}{4}\delta_{f,\frac{1}{2}}\big)\;2^{-(1+|1-2f|) V}\,+\,o(2^{-(1+|1-2f|) V}). \n\t\t\label{eq:variance-page}\n\t\end{equation}\nThis shows that the variance is exponentially small in $V$. As a result, in the thermodynamic limit the entanglement entropy of a typical state is given by Eq.~\eqref{eq:Page-therm} \cite{bianchi2019typical}.\n\t\nAnew, one could resolve the variance at the critical point $f=\frac{1}{2}$ via a double scaling limit $f=\frac{1}{2}+\Lambda_f\/V$. This yields $(\Delta S_A)^2=2^{-V}2^{-2|\Lambda_f|-1}(1-2^{-2|\Lambda_f|-1})$.\n\t\n\subsection{Fixed number of particles}\label{sec:page-fixedN}\n\t\nLet us now turn to a Hilbert space $\mathcal{H}^{(N)}$ with a fixed number of particles, while carrying over the idea of drawing states uniformly from the unit sphere in this Hilbert space. We further assume that there is a notion of a bipartition into subsystems $A$ and $B$, such that one can specify for each particle whether it is in subsystem $A$ or $B$. Such a decomposition is no longer a simple tensor product, but a direct sum of tensor products\n\begin{align}\label{eq:Hspace-decomposition}\n\t\mathcal{H}^{(N)}=\bigoplus^{N}_{N_A=0}\Big(\mathcal{H}_A^{(N_A)}\otimes\mathcal{H}_B^{(N-N_A)}\Big)\,.\n\end{align}\nThe direct sum is over the occupation number in $A$ (which labels the center of the subalgebra). 
Each summand represents those states where $N_A$ particles are in subsystem $A$ and $N-N_A$ particles are in subsystem $B$ (assuming indistinguishable particles).\n\t\nWhen $N_A$ is larger than the number of modes $V_A$ in subsystem $A$, or $N-N_A$ is larger than $V-V_A$, we consider the tensor product $\mathcal{H}_A^{(N_A)}\otimes\mathcal{H}_B^{(N-N_A)}$ as the zero-dimensional space and, hence, absent. This is because, due to the Pauli exclusion principle, we cannot put more fermions into a subsystem than it has modes. We also adopt this convention for the following discussion, where direct sums, ordinary sums, and products are reduced to the components that are actually present.\n\t\n\subsubsection{Statistical ensemble of states}\n\t\nLet us consider fermionic creation $\hat{f}_i^\dagger$ and annihilation $\hat{f}^{}_i$ operators, which satisfy the anticommutation relations $\{\hat{f}^{}_i,\hat{f}_j^\dagger\}=\delta_{ij}$, $\{\hat{f}_i,\hat{f}_j\}=0$ with $i,j=1,\ldots,V$. The corresponding number operators are\n\begin{equation}\n\t\hat{N}=\sum_{i=1}^V\hat{f}_i^\dagger \hat{f}^{}_i\,,\quad \hat{N}_A=\sum_{i=1}^{V_A}\hat{f}_i^\dagger \hat{f}^{}_i\,,\quad \hat{N}_B=\sum_{i=V_A+1}^{V}\hat{f}_i^\dagger \hat{f}^{}_i\,,\n\t\label{eq:Hilbert-sum}\n\end{equation}\nwhere one can see that\n\begin{equation}\n\t\hat{N}=\hat{N}_A+\hat{N}_B\,.\n\end{equation}\nThe Hilbert space of the system can be decomposed as a direct sum of Hilbert spaces at fixed eigenvalue $N$ of $\hat{N}$,\n\begin{equation}\n\t\mathcal{H}=\bigotimes_{i=1}^V\mathcal{H}_i\;=\;\bigoplus_{N=0}^{V}\,\mathcal{H}^{(N)},\n\end{equation}\nwith $\mathcal{H}^{(N)}$ given by Eq.~\eqref{eq:Hspace-decomposition}. The dimension of each $N$-particle sector is\n\begin{equation}\n\td_N=\dim\mathcal{H}^{(N)}\,=\,\frac{V!}{N!\,(V-N)!}\,.\n\t\label{eq:dN}\n\end{equation}\nIt is immediate to check that $\dim \mathcal{H}=\sum_{N=0}^V d_N=2^V$. 
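Both dimension counts are quick to verify exactly with integer arithmetic. The decomposition of each sector into tensor products additionally implies that the binomials for the two subsystems convolve to d_N (a Vandermonde identity). A sketch for an arbitrary small system:

```python
from math import comb

V, VA = 12, 5  # arbitrary small system and subsystem sizes

# Fixed-N sectors exhaust the full Hilbert space: sum_N d_N = 2^V.
total_dim = sum(comb(V, N) for N in range(V + 1))
print(total_dim == 2**V)

# Dimension of each direct sum of tensor products: Vandermonde convolution.
# math.comb(n, k) returns 0 for k > n, matching the convention that
# overfilled (Pauli-excluded) sectors are absent.
ok = all(
    sum(comb(VA, NA) * comb(V - VA, N - NA) for NA in range(N + 1)) == comb(V, N)
    for N in range(V + 1)
)
print(ok)
```

Both checks print `True` for any choice of V and V_A.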
Similarly, one can use the number operators $\hat{N}_A$ and $\hat{N}_B$ to decompose the Hilbert spaces $\mathcal{H}_A$ and $\mathcal{H}_B$ into sectors\n\begin{equation}\n\t\mathcal{H}_A=\;\bigoplus_{N_A=0}^{V_A}\,\mathcal{H}_A^{(N_A)}\,,\qquad \mathcal{H}_B=\;\bigoplus_{N_B=0}^{V-V_A}\,\mathcal{H}_B^{(N_B)}\,.\n\end{equation}\nLet us stress once again that, while $\mathcal{H}$ is a tensor product over $A$ and $B$,\n\begin{align}\n\t\mathcal{H}=\left(\bigotimes_{i=1}^{V_A}\mathcal{H}_i\right)\otimes\left(\bigotimes_{i=V_A+1}^{V}\mathcal{H}_i\right)\;=\;\mathcal{H}_A\otimes \mathcal{H}_B\,,\n\end{align}\nthe sector at fixed number $N\leq V_A$ is not a tensor product. It is the direct sum of tensor products from Eq.~\eqref{eq:Hspace-decomposition}. The corresponding dimensions of the subsystems are\n\begin{align}\n\t\begin{split}\label{eq:dAdB}\n\t\t& d_A(N_A)=\dim\mathcal{H}_A^{(N_A)}\,=\,\frac{V_A!}{N_A!\,(V_A-N_A)!}\,,\\\n\t\t& d_B(N_B)=\dim\mathcal{H}_B^{(N_B)}\,=\,\frac{(V-V_A)!}{N_B!\,((V-V_A)-N_B)!}\,.\n\t\end{split}\n\end{align}\nOne can check that the dimensions add up correctly,\n\begin{equation}\n\t\sum_{N_A=0}^N d_A(N_A)\, d_B(N-N_A)=\frac{V!}{N!(V-N)!}=d_N\,.\label{eq:normalization-varrho}\n\end{equation}\nThe formula for $d_A$, and equivalently that of $d_B$, follows from a simple counting argument: it is the number of ways to place $N_A$ indistinguishable particles on $V_A$ modes. Let us underline that it does not matter which we label particles and which holes. Note that $d_A(N_A)$ or $d_B(N-N_A)$ will vanish for $N_A$ outside of the interval $[\max(0,N+V_A-V),\min(N,V_A)]$, but we will not truncate the sum, as we will soon turn it into a Gaussian integral.\n\t\nFrom these dimensions we can readily read off two exact symmetries:\n\t\n\noindent (i) It does not matter whether one considers subsystem $A$ or $B$. 
One can exchange $(d_A(N_A),V_A,N_A) \leftrightarrow (d_B(N-N_A),V-V_A,N-N_A)$. This allows us to restrict the discussion to $V_A\leq V\/2$. However, the dimensions of the two Hilbert spaces are exchanged, which (as we will show) yields nonanalytic points along $V_A=V\/2$ due to the two branches of Page's formula~\eqref{Page}.\n\t\n\noindent (ii) Additionally, there is a particle-hole symmetry since it does not matter whether one counts particles or holes. Actually, the ``particles'' need not represent actual particles; they can be, for instance, up spins while the ``holes'' are down spins (having in mind spin-$\frac{1}{2}$ systems). Any binary structure with fermion statistics (meaning Pauli principle) can be described in this setting. Mathematically, the particle-hole symmetry is reflected in the exchange $(N,N_A)\leftrightarrow(V-N,V_A-N_A)$. We note that in this case the dimensions are not exchanged, so one does not switch branches in Page's formula~\eqref{Page}. Therefore, the symmetry points at $N=V\/2$ will be analytic, as we will also show. This symmetry allows us to restrict $N\leq V\/2$.\n\t\n\noindent In summary, we only need to study the behavior in the quadrant $(V_A,N)\in(0,\frac{V}{2}]^2$. The remaining quadrants are obtained by symmetry.\n\t\nAs in the setting in which we do not fix the particle number, we can relate the problem to random matrix theory. Here, we briefly recall the most important ingredients from Ref.~\cite{bianchi2019typical}. A state $\ket{\psi}\in\mathcal{H}^{(N)}$ can again be expanded in a basis. We choose the orthonormal basis vectors $\ket{a,N_A} \otimes \ket{b,N-N_A} \in \mathcal{H}_A^{(N_A)} \otimes \mathcal{H}_B^{(N-N_A)}$ so that the state vector has the expansion\n\begin{equation}\n\t\ket{\psi}=\bigoplus_{N_A=0}^N \sum_{a=1}^{d_A}\sum_{b=1}^{d_B} \tilde{w}_{ab}^{(N_A)}\ket{a,N_A}\otimes\ket{b,N-N_A}\n\end{equation}\nwith the abbreviations $d_A=d_A(N_A)$ and $d_B=d_B(N-N_A)$. 
The normalization is then reflected by the triple sum\n\begin{equation}\label{norm.Page.fixed}\n\t\sum_{N_A=0}^N\sum_{a=1}^{d_A}\sum_{b=1}^{d_B}|\tilde{w}_{ab}^{(N_A)}|^2=1.\n\end{equation}\nThe direct sum over $N_A$ is important as it tells us that the density operator $\hat\rho_A=\operatorname{Tr}_{\mathcal{H}_B}\ket{\psi}\bra{\psi}$ has a block diagonal form, namely,\n\begin{equation}\n\t\hat\rho_A=\bigoplus_{N_A=0}^N \sum_{a_1,a_2=1}^{d_A} \sum_{b=1}^{d_B}\tilde{w}_{a_1b}^{(N_A)}(\tilde{w}_{a_2b}^{(N_A)})^*\ket{a_1,N_A}\bra{a_2,N_A}.\n\end{equation}\nAgain, we can understand the coefficients $\tilde{w}_{ab}^{(N_A)}\in\mathbb{C}$ as the entries of a $d_A\times d_B$ matrix $\tilde{W}_{N_A}$. The point is that those matrices are coupled by condition~\eqref{norm.Page.fixed}. In Ref.~\cite{bianchi2019typical} those matrices were decoupled by understanding their squared Hilbert-Schmidt norms as probability weights, {\it i.e.},\ defining\n\begin{equation}\n\tp_{N_A}=\sum_{a=1}^{d_A}\sum_{b=1}^{d_B}|\tilde{w}_{ab}^{(N_A)}|^2\in[0,1]\n\end{equation}\nsuch that $\tilde{W}_{N_A}=\sqrt{p_{N_A}}\,W_{N_A}$. This notation allows one to identify the density operator of subsystem $A$ with the block diagonal matrix $\hat\rho_A = \mathrm{diag} (p_0W_0W_0^\dagger, \ldots, p_N W_NW_N^\dagger)$, as illustrated in Fig.~\ref{fig:RDMsketch}.\n\t\n\begin{figure}[t!]\n\t\centering\n\t\includegraphics[width=\linewidth]{figure04}\n\t\caption{Sketch of the block dimensions of the reduced density matrix $\hat\rho_A$ of subsystem $A$ at the subsystem fraction $f=\frac{1}{2}$. (a) Case $V=12$ at half filling $n=\frac{1}{2}$, for which $V_A = 6$. The number of particles ranges from $N_A=0$ to $N_A=6$, with $N_A=3$ representing the largest block. (b) Case $V=20$ at quarter filling $n=\frac{1}{4}$, for which $V_A = 10$. The number of particles ranges from $N_A=0$ to $N_A=5$, with $N_A=5$ representing the largest block. 
The blocks with $N_A \geq N_{\rm crit} = 3$ are larger than the corresponding blocks in subsystem $B$ (not shown in the figure).}\n\t\label{fig:RDMsketch}\n\end{figure}\n\t\nThus, the entanglement entropy becomes the sum\n\begin{align}\n\tS_A(\ket{\psi})=-\sum_{N_A=0}^N&\Big[p_{N_A}\operatorname{Tr}(W_{N_A}W_{N_A}^\dagger\ln[W_{N_A}W_{N_A}^\dagger])\nonumber\\\n\t&+p_{N_A}\ln(p_{N_A})\Big].\n\t\label{page.ententro.fixed}\n\end{align}\nAnew, the symmetry between the two subsystems is reflected by the singular value decomposition: $W_{N_A}W_{N_A}^\dagger$ and $W_{N_A}^\dagger W_{N_A}$ share the same nonzero eigenvalues, and it holds that $\hat\rho_B = \operatorname{Tr}_{\mathcal{H}_A}\ket{\psi}\bra{\psi} = \mathrm{diag}(p_0W_0^\dagger W_0,\ldots,p_NW_N^\dagger W_N)$.\n\t\nSince the norms are encoded in the probability weights $p_{N_A}$, each matrix $W_{N_A}W_{N_A}^\dagger$ independently describes a fixed trace ensemble, {\it i.e.},\ $\operatorname{Tr} W_{N_A}W_{N_A}^\dagger=1$. Thus, it can be dealt with in the same way as in Page's case; in particular, each of those can be traced back to a complex Wishart-Laguerre ensemble of matrix dimension $d_A\times d_B$. The probability weights $p_{N_A}\in[0,1]$ are also drawn randomly via the joint probability distribution~\cite{bianchi2019typical}\n\begin{equation}\n\t\frac{\delta\left(1-\sum_{N_A=0}^{N}p_{N_A}\right)\prod_{N_A=0}^Np_{N_A}^{d_Ad_B-1}dp_{N_A}}{\int\delta\left(1-\sum_{N_A=0}^{N}p_{N_A}\right)\prod_{N_A=0}^Np_{N_A}^{d_Ad_B-1}dp_{N_A}}.\n\end{equation}\nThe Dirac delta function enforces condition~\eqref{norm.Page.fixed}, while the factors $p_{N_A}^{d_Ad_B-1}$ are the Jacobians for the polar decomposition of the vectors in $\mathcal{H}_A^{(N_A)} \otimes \mathcal{H}_B^{(N-N_A)}$ into their squared norm $p_{N_A}$ and the direction, which is encoded in $W_{N_A}$. 
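The blockwise form of the entropy is an exact algebraic identity: computing the contributions of the weights p_{N_A} and the blocks W_{N_A} separately must reproduce the entropy of the full block-diagonal reduced density matrix. A minimal numerical check, with arbitrary small system sizes:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(2)
V, VA, N = 8, 3, 4            # arbitrary: 8 modes, subsystem of 3, 4 particles
dims = [(comb(VA, NA), comb(V - VA, N - NA)) for NA in range(N + 1)]
sizes = [dA * dB for dA, dB in dims]

# One normalized complex Gaussian vector = uniformly random state in the
# fixed-N sector; its pieces are the (unnormalized) blocks tilde-W_{N_A}.
v = rng.normal(size=sum(sizes)) + 1j * rng.normal(size=sum(sizes))
v /= np.linalg.norm(v)

S_blocks, eigs, offset = 0.0, [], 0
for (dA, dB), size in zip(dims, sizes):
    block, offset = v[offset:offset + size], offset + size
    if size == 0:
        continue                                   # Pauli-excluded sector
    p = float(np.sum(np.abs(block) ** 2))          # weight p_{N_A}
    W = block.reshape(dA, dB) / np.sqrt(p)         # normalized W_{N_A}
    lam = np.linalg.svd(W, compute_uv=False) ** 2  # spectrum of W W^dagger
    S_blocks += -p * np.sum(lam * np.log(lam)) - p * np.log(p)
    eigs.extend(p * lam)

# Direct route: entropy from the full spectrum of the block-diagonal rho_A.
S_direct = -sum(l * np.log(l) for l in eigs)
print(S_blocks, S_direct)  # identical up to floating-point rounding
```

The eigenvalues p_{N_A} times the block spectra also sum to one, confirming the normalization condition.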
The normalization of the distribution of $p_{N_A}$ was computed in Ref.~\\cite{bianchi2019typical} and can be deduced by inductively tracing the integrals over $p_{N_A}$ back to Euler's beta integrals in Eq.~(5.12.1) of Ref.~\\cite{NIST:DLMF}.\n\t\n\\subsubsection{Average and variance}\n\t\n\\begin{figure*}[t!]\n\t\\centering \n\t\\includegraphics[width=\\linewidth]{figure05}\n\t\\caption{The leading entanglement entropy $s_A(f,n) = \\lim_{V\\to\\infty} \\braket{S_A}_N\/V$ from Eq.~\\eqref{eq:leading-general} [see Eq.~\\eqref{eq:sA-useful}]. For $n=\\frac{1}{2}$, $s_A(f,n)$ coincides with Page's result (maximal entanglement). (a) Three-dimensional plot as a function of the subsystem fraction $f=V_A\/V$ and the filling ratio $n=N\/V$. One can see the mirror symmetries $V_A\\to V-V_A$ and $N\\to V-N$. (b) Results at fixed $n$ plotted as functions of $f$. The colored lines agree in both plots so that the right one can be seen as sections of the left one along the colored lines.}\n\t\\label{fig:Page}\n\\end{figure*}\n\t\nWith these definitions and discussions, we are now ready to state the main result in Eq.~(23) of Ref.~\\cite{bianchi2019typical}: the average entanglement entropy in system $A$ of a uniformly distributed random state in $\\mathcal{H}^{(N)}$ is given by\n\\begin{align}\\label{eq:Scenter}\n\t\\begin{split}\n\t\t\\hspace{-2mm}\\braket{S_A}_N\\!&=\\!\\!\\!\\!\\sum^{\\min(N,V_A)}_{N_A=0}\\! \\frac{d_Ad_B}{d_N}\\big(\\braket{S_A}\\!+\\!\\Psi(d_N\\!+\\!1)\\!-\\!\\Psi(d_Ad_B\\!+\\!1)\\big),\n\t\\end{split}\n\\end{align}\nwhere $d_A=d_A(N_A)$ and $d_B=d_B(N-N_A)$ depend on $N_A$ according to Eq.~\\eqref{eq:dAdB} and $\\braket{S_A}$ refers to Page's result~\\eqref{Page} for given $d_A$ and $d_B$. Equation~\\eqref{eq:Scenter} follows from the average over $W_{N_A}W_{N_A}^\\dagger$ in Eq.~\\eqref{page.ententro.fixed}, which are independent fixed trace random matrices. 
The prefactor $d_Ad_B\/d_N$, as well as the additional digamma functions, follow from Euler's beta integral in Eq.~(5.12.1) of Ref.~\cite{NIST:DLMF}. In particular, we have used\n\begin{align}\n\t\begin{split}\n\t\t\langle p_{N_A}^\epsilon\rangle&=\frac{\int_0^1 p_{N_A}^{\epsilon+d_Ad_B-1}(1-p_{N_A})^{d_N-d_Ad_B-1}dp_{N_A}}{\int_0^1 p_{N_A}^{d_Ad_B-1}(1-p_{N_A})^{d_N-d_Ad_B-1}dp_{N_A}}\\\n\t\t&=\frac{\Gamma[\epsilon+d_Ad_B]\Gamma[d_N]}{\Gamma[d_Ad_B]\Gamma[\epsilon+d_N]}\n\t\end{split}\n\end{align}\nfor any $\epsilon>-d_Ad_B$. The average on the right-hand side can be obtained by rescaling $p_j\to (1-p_{N_A})p_j$ for any $j\neq N_A$, which decouples the average over $p_{N_A}$ from the remaining probability weights $p_j$.\n\t\nWe can write Eq.~\eqref{eq:Scenter} as\n\begin{equation}\n\t\braket{S_A}_N=\sum^N_{N_A=0}\varrho_{N_A}\varphi_{N_A},\n\end{equation}\nby introducing the quantities\n\begin{align}\n\t\begin{split}\label{eq:varphi}\n\t\t&\varrho_{N_A}=\frac{d_Ad_B}{d_N}\,,\\\n\t\t&\varphi_{N_A}\!=\!\n\t\t\begin{cases}\n\t\t\t\Psi(d_N\!+\!1)\!-\!\Psi(d_B\!+\!1)\!-\!\frac{d_A-1}{2d_B}&\quad d_A\leq d_B \\[0.5em]\n\t\t\t\Psi(d_N\!+\!1)\!-\!\Psi(d_A\!+\!1)\!-\!\frac{d_B-1}{2d_A} &\quad d_A> d_B\n\t\t\end{cases}\\\n\t\t&=\scriptstyle\Psi(d_N\!+\!1)\!-\!\Psi(\max(d_A,d_B)\!+\!1)\!-\!\min\left(\tfrac{d_A-1}{2d_B},\tfrac{d_B-1}{2d_A}\right).\n\t\end{split}\n\end{align}\nThe function $\varrho_{N_A}$ can be understood as a probability distribution of having $N_A$ particles in $A$, with the normalization $\sum_{N_A}\varrho_{N_A}=1$ following from Eq.~\eqref{eq:normalization-varrho}. The function $\varphi_{N_A}$, when understood as a continuous function, has a kink at $N_{\mathrm{crit}}$, which refers to the largest integer such that $d_A(N_\mathrm{crit})\leq d_B(N-N_{\mathrm{crit}})$. 
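Two useful readings of the weight d_A d_B / d_N can both be checked numerically: it is the hypergeometric distribution for how many of the N occupied modes fall into subsystem A, and it is also the mean of p_{N_A} under the Dirichlet-type joint distribution of the weights quoted above. A sketch with arbitrary parameters:

```python
import numpy as np
from math import comb
from scipy.stats import hypergeom

V, VA, N = 6, 3, 3
alpha = [comb(VA, NA) * comb(V - VA, N - NA) for NA in range(N + 1)]
dN = comb(V, N)
rho = np.array(alpha) / dN                     # varrho_{N_A} = d_A d_B / d_N

# (i) varrho equals the hypergeometric pmf (N draws out of V, VA marked).
pmf = hypergeom(M=V, n=VA, N=N).pmf(range(N + 1))
print(np.allclose(rho, pmf))

# (ii) varrho is the mean of the Dirichlet-distributed weights p_{N_A},
# whose concentration parameters are alpha_{N_A} = d_A d_B.
rng = np.random.default_rng(3)
p = rng.dirichlet(alpha, size=100_000)
print(np.abs(p.mean(axis=0) - rho).max())      # small sampling error

# For large V, mean and variance of n_A = N_A/V approach f n and
# f(1-f)n(1-n)/V, which fix the Gaussian approximation used below.
Vb, fb, nb = 4000, 0.3, 0.4
m, v = hypergeom(M=Vb, n=round(fb * Vb), N=round(nb * Vb)).stats(moments="mv")
print(float(m) / Vb, fb * nb)
print(float(v) / Vb**2, fb * (1 - fb) * nb * (1 - nb) / Vb)
```

The hypergeometric mean f n is exact at any V; the variance differs from the Gaussian value only by a factor V/(V-1).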
There is only one situation in which $N_{\rm crit}$ is not well defined, namely, when $V_A=N=V\/2$ or, equivalently, when $f=n=\frac{1}{2}$ with $f=V_A\/V$ and $n=N\/V$. Then it always holds that $d_A(N_A)=d_B(N-N_A)$ for all $N_A=0,\ldots,N$. In this case, we do not need an $N_{\rm crit}$, as both branches of $\varphi_{N_A}$ coincide.\n\t\nWe are unable to evaluate this sum exactly, but we can expand $\braket{S_A}_N$ in powers of $V$ and approximate the sum by an integral\n\begin{align}\label{eq:average-int}\n\t\hspace{-2mm}\braket{S_A}_N\!=\!\!\sum^N_{N_A=0}\!\!\varrho_{N_A}\varphi_{N_A}\!=\!\int^{\infty}_{-\infty} \!\!\!\!\!\!\!\varrho(n_A)\varphi(n_A)dn_A\!+\!o(1),\n\end{align}\nwhere $\varrho(n_A)$ is the saddle point approximation of $V\varrho_{n_AV}=Vd_Ad_B\/d_N$, which represents the probability distribution for the intensive variable $n_A=N_A\/V$. This is enough for computing the leading orders without double scaling. We find the normal distribution\n\begin{align}\label{Gauss.approx.Page}\n\t\varrho(n_A)=\frac{1}{\sigma \sqrt{2\pi}}\exp\left[-\frac{1}{2}\left(\frac{n_A-\bar{n}_A}{\sigma}\right)^2\right]+o(1)\n\end{align}\nwith mean $\bar{n}_A=fn$ and variance $\sigma^2=f(1-f)n(1-n)\/V$.\n\t\n\begin{figure*}\n\t\centering \n\t\includegraphics[width=\linewidth]{figure06}\n\t\caption{The entanglement entropy $\braket{S_A}_N$ from Eq.~\eqref{eq:leading-general} as viewed from the contributions of the first three terms in the expansion in $V$. (a)--(c) Three-dimensional plots as functions of the subsystem fraction $f=V_A\/V$ and the filling ratio $n=N\/V$. (d) Resolving the expansion coefficient $b$ for $f=\frac{1}{2}+\frac{\Lambda_f}{\sqrt{V}}$ around $f=\frac{1}{2}$, as given by Eq.~\eqref{eq:squareroot-full-Gaussian-half-system}, approaching zero for large $|\Lambda_f|$. 
(e) Resolving the expansion coefficient $c$ for $n=\frac{1}{2}+\frac{\Lambda_{n}}{\sqrt{V}}$ and $f=\frac{1}{2}+\frac{\Lambda_f}{V}$ around $f=n=\frac{1}{2}$, as given by Eq.~\eqref{eq:constant-full-Gaussian-half}, approaching $\frac{2\ln2-1}{4}$ for large $|\Lambda_f|$ or $|\Lambda_{n}|$. We underline that the subleading contributions are multiplied by a minus sign.}\n\t\label{fig:general-N-visual}\n\end{figure*}\n\t\nIn Appendix~\ref{app:Ncrit}, we carefully analyze the difference $\delta n_{\mathrm{crit}}=n_{\mathrm{crit}}-\bar{n}_A$ for $n_{\mathrm{crit}}=N_{\mathrm{crit}}\/V$ and find that, for fixed $f<\frac{1}{2}$, one always has $\delta n_{\mathrm{crit}}=O(1)$ and $\delta n_{\mathrm{crit}}>0$. Thus, for $f\neq\frac{1}{2}$, the center of the Gaussian $\bar{n}_A$ is sufficiently separated from $n_{\mathrm{crit}}$. This allows us to disregard the part of the sum in Eq.~\eqref{eq:Scenter} with $N_A>N_{\rm crit}$, as it is exponentially suppressed. In the case that $f>\frac{1}{2}$, we can instead disregard the part with $N_A\leq N_{\rm crit}$ because of the symmetry between the two subsystems $A$ and $B$.\n\t\nTo find the observable $\varphi(n_A)$ from Eq.~\eqref{eq:varphi}, we use Stirling's approximation\n\begin{align}\label{approx.Digamma}\n\t\Psi[d_N\!+\!1]\!-\!\Psi[\max(d_A,d_B)\!+\!1]&=\ln\min\left(\tfrac{d_N}{d_B},\tfrac{d_N}{d_A}\right)\!+\!o(1).\n\end{align}\nMoreover, for $V\gg1$ and fixed $f\in(0,1)$ it holds that\n\begin{align}\n\t\min\left(\tfrac{d_A-1}{d_B},\tfrac{d_B-1}{d_A}\right)&=\delta_{f,\frac{1}{2}}\delta_{n,\frac{1}{2}}+o(1).\n\end{align}\nThe Kronecker delta is, in fact, a ``relic'' of a double scaling limit; see Figs.~\ref{fig:Page-discon}(b) and~\ref{fig:Page-discon}(c) for a similar result in the context of Page's setting without fixed particle number. It can be resolved assuming that $f$ is close to $1\/2$ but not exactly at $1\/2$; see Appendix~\ref{app:general-average}. 
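The error of the digamma approximation decays with the inverse of the dimensions involved, so it can be probed directly at moderate system sizes (the parameters below are arbitrary):

```python
from math import comb, log
from scipy.special import digamma

V, VA, N = 100, 30, 40
dN = comb(V, N)
errs = []
for NA in range(min(N, VA) + 1):
    dA, dB = comb(VA, NA), comb(V - VA, N - NA)
    if dA * dB == 0:
        continue
    # Left-hand side: exact digamma difference; right-hand side: Stirling form.
    lhs = digamma(float(dN) + 1) - digamma(float(max(dA, dB)) + 1)
    rhs = log(min(dN / dB, dN / dA))
    errs.append(abs(lhs - rhs))
print(max(errs))  # tiny: all relevant dimensions are exponentially large in V
```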
When collecting all terms up to order $O(1)$, we obtain\n\begin{align}\label{eq:psi}\n\t\begin{split}\n\t\t\varphi(n_A)&=[n_A\ln(n_A)-f\ln(f)+n\ln[(1-n)\/n]\\\n\t\t&\quad-\ln(1-n)+(f-n_A)\ln(f-n_A)]V\\\n\t\t&\quad+ \frac{1}{2}\ln\left[\frac{n_A (f-n_A)}{f(1-n)n}\right]-\frac{1}{2}\delta_{f,\frac{1}{2}}\delta_{n,\frac{1}{2}}+o(1)\,,\n\t\end{split}\n\end{align}\nfor $n_A\geq n_{\rm crit}$. For $n_A\leq n_{\rm crit}$, we need to apply the symmetries $n_A\to n-n_A$ and $f\to 1-f$ in expansion~\eqref{eq:psi}.\n\t\nIn the limit $V\to\infty$, the Gaussian~\eqref{Gauss.approx.Page} narrows because the standard deviation scales like $\sigma\sim1\/\sqrt{V}$. We can, therefore, expand $\varphi(n_A)$ in powers of $(n_A-\bar{n}_A)$ around the mean $\bar{n}_A$. In order to find the average up to a constant order, it suffices to expand up to the quadratic order and then calculate integral~\eqref{eq:average-int}. Only for $f=\frac{1}{2}$, we have $\delta n_{\mathrm{crit}}=o(1)$, so that we need to take into account the nonanalyticity in $\varphi(n_A)$ introduced by the symmetry when exchanging the two subsystems. In this case, we integrate two different Taylor expansions for $n_A\leq n\/2$ and $n_A\geq n\/2$, which will introduce a term of order $\sqrt{V}$, as discussed below.\n\t\nCombining these results, we arrive at the main result of this subsection,\n\begin{align}\label{eq:leading-general}\n\t\langle S_A\rangle_{N}&=[(n-1)\ln(1-n)-n\ln(n)]\, f\,V\nonumber\\\n\t&-\sqrt{\frac{n(1-n)}{2\pi}}\left|\ln\left(\frac{1-n}{n}\right)\right|\delta_{f,\frac{1}{2}}\sqrt{V}\nonumber\\\n\t&+\frac{f+\ln(1-f)}{2}-\frac{1}{2}\delta_{f,\frac{1}{2}}\delta_{n,\frac{1}{2}}+o(1),\n\end{align}\nvalid for $f\leq \frac{1}{2}$. 
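As a consistency check, the exact finite-V sum for the fixed-N average can be evaluated numerically and compared with the volume-law and constant terms away from the critical line f = 1/2 (the parameters below are arbitrary; the residual is the o(1) finite-size correction):

```python
from math import comb, log
from scipy.special import digamma

def page_average(dA, dB):
    lo, hi = (dA, dB) if dA <= dB else (dB, dA)
    return digamma(float(lo) * float(hi) + 1) - digamma(float(hi) + 1) - (lo - 1) / (2 * hi)

def S_fixed_N(V, VA, N):
    # Exact finite-V average <S_A>_N as a sum over particle-number blocks.
    dN = comb(V, N)
    fdN = float(dN)
    total = 0.0
    for NA in range(min(N, VA) + 1):
        dA, dB = comb(VA, NA), comb(V - VA, N - NA)
        if dA * dB == 0:
            continue
        dAB = float(dA) * float(dB)
        total += (dAB / fdN) * (page_average(dA, dB)
                                + digamma(fdN + 1) - digamma(dAB + 1))
    return total

V, f, n = 200, 0.3, 0.4
VA, N = round(f * V), round(n * V)
leading = ((n - 1) * log(1 - n) - n * log(n)) * f * V  # volume-law term
constant = (f + log(1 - f)) / 2                        # constant term, f != 1/2
print(S_fixed_N(V, VA, N), leading + constant)         # close already at V = 200
```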
The leading, volume-law, term in Eq.~\eqref{eq:leading-general} is the same as that obtained in Refs.~\cite{garrisson_grover_18, vidmar2017entanglement2} using random matrix theory, and the same as in Ref.~\cite{bianchi2019typical} [see Eq.~(27)], where it is interpreted as the typical entanglement entropy in the (highly degenerate) eigenspace of a Hamiltonian of the form $\hat{H}=\hat{N}=\hat{N}_A+\hat{N}_B$. The subleading $\sqrt{V}$ term was first discussed in Ref.~\cite{vidmar2017entanglement2}; specifically, it coincides with the bound for such a term computed at $f=\frac{1}{2}$~\cite{vidmar2017entanglement2}. It is remarkable that, for $n\neq\frac{1}{2}$, the constant term is nothing but that obtained in Ref.~\cite{vidmar2017entanglement2} within a ``mean field'' calculation, while at $n=f=\frac{1}{2}$ the extra $-\frac{1}{2}$ correction was found in Ref.~\cite{vidmar2017entanglement2} numerically, both for random states as well as for eigenstates of a nonintegrable Hamiltonian. Hence, all the ingredients needed to guess the general form in Eq.~\eqref{eq:leading-general} were already available. Its actual derivation with all the details fills several pages, and can be found in Appendix~\ref{app:general-average}. A visualization of the leading term in Eq.~\eqref{eq:leading-general} can be found in Fig.~\ref{fig:Page}.\n\t\nAn important question concerns the resolution of the Kronecker deltas in Eq.~\eqref{eq:leading-general}, which indicate nontrivial scaling limits. The Kronecker deltas are only obtained along the critical line $f=\frac{1}{2}$, which contains a multicritical point at $n=\frac{1}{2}$ when $V\to\infty$. One needs to take the resolution into account because experiments are carried out in finite systems in which $f$ and $n$ can only be fixed within some experimental resolution. Consequently, it is important to understand within which margin of error one needs to choose $f$ and $n$ to observe the corresponding terms. 
This question can be answered by analyzing the limit $V\to\infty$ in the double scaling $f=\frac{1}{2}+V^{-\alpha} \Lambda_f$ and\/or $n=\frac{1}{2}+V^{-\beta}\Lambda_{n}$. We find that the $\sqrt{V}$ correction in Eq.~\eqref{eq:leading-general} (for fixed $n$) becomes visible for $\alpha=\frac{1}{2}$, {\it i.e.},\ whenever the difference between $f$ and $\frac{1}{2}$ is of order $1\/\sqrt{V}$ or smaller. The constant correction requires a more detailed analysis as it depends on the relative scaling of both $f$ and $n$ around $f=n=\frac{1}{2}$. Subtle cancellations have to be taken into account as not all sources of corrections, such as $N_{\rm crit}$, approximation~\eqref{approx.Digamma}, or the rewriting of the sum as an integral, are equally important; see Appendix~\ref{app:general-average}. The visualization of the terms in Eq.~\eqref{eq:leading-general} that include Kronecker deltas, as well as their scaling, is presented in Fig.~\ref{fig:general-N-visual}.\n\t\nThe variance $(\Delta S_A)^2_{N}={\langle S_A^2\rangle}_{N}-{\langle S_A\rangle}^2_{N}$ of the entanglement entropy of pure quantum states in $\mathcal{H}^{(N)}$ can be found using the result in Eq.~(24) of Ref.~\cite{bianchi2019typical}. When expressed as a sum over the number of particles $N_A$, it takes the form\n\begin{equation}\n\t(\Delta S_A)^2_{N}=\frac{1}{d_N+1}\Big[\!\!\sum_{N_A=0}^{N}\!\!\varrho_{N_A}\big(\varphi^2_{N_A}\!+\!\chi_{N_A}\big)\!-\!\big(\!\!\sum_{N_A=0}^{N}\!\!\varrho_{N_A}\;\varphi_{N_A}\big)^2\Big],\n\t\label{eq:DSA2N}\n\end{equation}\nwhere $\varrho_{N_A}$ and $\varphi_{N_A}$ are given in Eq.~\eqref{eq:varphi} and $\chi_{N_A}$ is defined as\n\begin{align}\n\t\chi_{N_A}\!=\!\!\n\t&\begin{cases}\n\t\t\scriptstyle\!\! (d_A\!+d_B)\Psi'\!(d_B+1)-(d_N\!+1)\Psi'\!(d_N+1)-\frac{(d_A\!-\!1)(d_A\!+2d_B\!-1)}{4d_B^2},\n\t\t&\!\!\!\!\scriptstyle\!\! d_A\leq d_B, \\[0.8em]\n\t\t\scriptstyle\!\! 
(d_A\\!+d_B)\\Psi'\\!(d_A+1)-(d_N\\!+1)\\Psi'\\!(d_N+1)-\\frac{(d_B\\!-\\!1)(d_B\\!+2d_A\\!-1)}{4d_A^2},\n\t\t&\\!\\!\\!\\!\\scriptstyle\\!\\! d_A> d_B.\n\t\\end{cases}\n\\end{align}\nAs earlier, $d_N$, $d_A(N_A)$, $d_B(N-N_A)$ are understood as functions of the particle number and are given by Eqs.~\\eqref{eq:dN} and~\\eqref{eq:dAdB}. In the thermodynamic limit $V\\to\\infty$, at fixed subsystem fraction $f=V_A\/V$ and fixed particle density $n=N\/V$, the variance is exponentially small and its asymptotic scaling can be obtained via the saddle point methods of Appendix \\ref{app:general-average}. In particular, we have\n\\begin{align}\n\t&\\sum_{N_A=0}^{N}\\!\\!\\varrho_{N_A}\\;\\varphi^2_{N_A}\\,-\\,\\Big(\\!\\!\\sum_{N_A=0}^{N}\\!\\!\\varrho_{N_A}\\;\\varphi_{N_A}\\Big)^2 =\\\\\n\t&\\quad=\\!\\int^{\\infty}_{-\\infty} \\!\\!\\!\\!\\!\\!\\!\\varrho(n_A)\\varphi^2(n_A)dn_A-\\Big(\\int^{\\infty}_{-\\infty} \\!\\!\\!\\!\\!\\!\\!\\varrho(n_A)\\varphi(n_A)dn_A\\Big)^2\\!+\\!o(1)\\nonumber\\\\[.5em]\n\t&\\quad=\\big[f(1\\!-\\!f)-\\frac{1}{2\\pi}\\delta_{f,\\frac{1}{2}}\\big]\\big(\\!\\ln \\frac{n}{1\\!-n}\\big)^2 \\,n(1\\!-\\!n)\\, V+o(V)\\nonumber,\n\\end{align}\nand\n\\begin{equation}\n\t\\sum_{N_A=0}^{N}\\!\\!\\varrho_{N_A}\\chi_{N_A}=\\frac{1}{4}\\delta_{f,\\frac{1}{2}}\\delta_{n,\\frac{1}{2}}+o(1)\\,,\n\\end{equation}\nwhere we have used the fact that, for large dimensions, $d_A\\gg 1$ and $d_B\\gg 1$, $\\chi$ scales as\n\\begin{align}\n\t\\chi_{N_A}=\n\t\\begin{cases}\n\t\t\\frac{d_A}{2d_B}+O(1\/d_B^2)\\,, & d_A< d_B \\\\\n\t\t\\frac{1}{4} +o(1)\\,, & d_A= d_B \\\\\n\t\t\\frac{d_B}{2d_A}+O(1\/d_A^2)\\,, & d_A> d_B \\,.\n\t\\end{cases}\n\t\\label{eq:chi-asympt}\n\\end{align}\nTherefore, the term in brackets in Eq.~\\eqref{eq:DSA2N} is of order $V$, while the denominator $d_N+1$ is exponentially large. 
Using the Stirling approximation for $d_N$ in Eq.~\eqref{eq:DSA2N}, we find that\n\begin{equation}\n\t(\Delta S_A)^2_{N}= \alpha\, V^{\frac{3}{2}}\operatorname{e}^{-\beta V}+ o(\operatorname{e}^{-\beta V}),\n\t\label{eq:DeltaS-N}\n\end{equation}\nwith\n\begin{align}\n\t\alpha=&\,\scriptstyle\sqrt{2\pi} \big[f(1-f)-\frac{1}{2\pi}\delta_{f,\frac{1}{2}}\big]\left(\ln\! \frac{n}{1-n}\right)^2\,[n(1\!-\!n)]^{\frac{3}{2}}\, +\,o(1)\nonumber\\[.5em]\n\t\beta=&-n\ln n-(1-n)\ln(1-n)\,.\n\end{align}\nThis means that the average entanglement entropy in Eq.~\eqref{eq:leading-general} is also the typical entanglement entropy of pure quantum states with $N$ fermions, namely, the overwhelming majority of pure quantum states with $N$ fermions have the entanglement entropy in Eq.~\eqref{eq:leading-general}.\n\t\n\subsubsection{Weighted average and variance}\label{sec:general-mu}\n\t\nHaving computed the average entanglement entropy of pure states with $N$ particles, next we can compute the average over the entire Hilbert space. A subtlety is that the system is in one of the sectors $\mathcal{H}^{(N)}$, but we do not know in which one. Therefore, while the distribution of the pure states with a fixed particle number is given quantum mechanically, meaning uniformly distributed over a unit sphere, we additionally have a classical probability for the particle number $N$. \n\t\nWith this in mind, we can average $\braket{S_A}_N$ over the sectors, weighting each $N$-particle sector by its Hilbert space dimension $d_N$ from Eq.~\eqref{eq:dN}. More generally, we can introduce a weight parameter $w$ and a probability $P_N$ of finding $N$ particles:\n\begin{align}\n\tP_N=\frac{1}{Z}d_N \operatorname{e}^{-w N}.\n\t\label{eq:PN-binomial}\n\end{align}\nHere $Z=\sum_{N=0}^V d_N \operatorname{e}^{-w N}=(1+\operatorname{e}^{-w})^V$ normalizes the distribution. 
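Because P_N is a binomial distribution in N with success probability 1/(1 + e^w), its normalization and first two moments have elementary closed forms; a quick check (V and w are arbitrary):

```python
from math import comb, exp

V, w = 60, 0.7
q = 1 / (1 + exp(w))                        # binomial success probability
Z = sum(comb(V, N) * exp(-w * N) for N in range(V + 1))
P = [comb(V, N) * exp(-w * N) / Z for N in range(V + 1)]

nbar = sum(p * N / V for N, p in enumerate(P))
var_n = sum(p * (N / V - nbar) ** 2 for N, p in enumerate(P))
print(abs(Z - (1 + exp(-w)) ** V))          # closed-form normalization
print(nbar, q)                              # mean filling fraction
print(var_n, q * (1 - q) / V)               # variance of the filling fraction
```

Both the mean and the variance match the binomial closed forms to machine precision.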
The average filling fraction $\bar{n}$ can be expressed in terms of the weight parameter $w$ as\n\begin{align}\n\t\bar{n}=\sum_{N=0}^V P_N \frac{N}{V}\;=\;\frac{1}{1+\operatorname{e}^w}\,\n\t\label{eq:nbar}\n\end{align}\nwith half-filling $\bar{n}=\frac{1}{2}$ corresponding to equiweighted sectors, {\it i.e.},\ $w=0$. The variance of the filling fraction,\n\begin{align}\n\t(\Delta n)^2=\sum_{N=0}^V P_N\; \big(\frac{N}{V}-\bar{n}\big)^2\;=\;\frac{\bar{n}(1-\bar{n})}{V}\,\n\t\label{eq:Dn}\n\end{align}\ncan be obtained easily by noting that $P_N$ is a binomial distribution.\n\t\nWe calculate the average entanglement entropy at fixed weight parameter $w$,\n\begin{align}\n\t\braket{S_A}_w=\sum_{N=0}^V P_N \braket{S_A}_N\,,\n\t\label{eq:Sw-def}\n\end{align}\nup to constant order in $V$ by expanding $\braket{S_A}_N$ around $\bar{n}$ and then using the known variance $(\Delta n)^2$. Since $\braket{S_A}_N$ is analytic as a function of $N$ (for $f<\frac{1}{2}$), it suffices to expand its leading order (linear in $V$) around $\bar{n}$ as\n\begin{align}\n\t\begin{split}\label{eq:sA-useful}\n\t\ts_A(f,n)&=[(n-1)\ln(1-n)-n\ln{n}]f\\\n\t\t&=[(\bar{n}-1)\ln(1-\bar{n})-\bar{n}\ln{\bar{n}}]f\\\n\t\t&\quad+f\ln[(1-\bar{n})\/\bar{n}]\,(n-\bar{n})-\frac{f(n-\bar{n})^2}{2(1-\bar{n})\bar{n}}\\\n\t\t&\quad+O\big((n-\bar{n})^3\big),\n\t\end{split}\n\end{align}\nand calculate its expectation value with respect to the binomial distribution. Using $\braket{(n-\bar{n})^2}=(\Delta n)^2$ from Eq.~\eqref{eq:Dn}, we find the constant correction $-\frac{f}{2}$, which cancels the $+\frac{f}{2}$ contribution to the constant term in Eq.~\eqref{eq:leading-general}. Terms of order $V^{1\/2}$ and $V^0$ can be directly evaluated at $n=\bar{n}$, where the binomial distribution is centered, because its finite width on those terms will only contribute corrections of subleading order $o(1)$. 
Hence, the resulting average is equal to\n\\begin{align}\\label{eq:Page-weighted}\n\t\\begin{split}\n\t\t\\braket{S_A}_{w}&=\\left[(\\bar{n}-1)\\ln(1-\\bar{n})-\\bar{n}\\ln(\\bar{n})\\right] fV\\\\\n\t\t&\\quad-\\sqrt{\\frac{\\bar{n}(1-\\bar{n})}{2\\pi}}\\left|\\ln\\left(\\frac{1-\\bar{n}}{\\bar{n}}\\right)\\right|\\delta_{f,\\frac{1}{2}}\\sqrt{V}\\\\\n\t\t&\\quad+\\frac{\\ln(1-f)}{2}-\\frac{2}{\\pi}\\,\\delta_{f,\\frac{1}{2}}\\delta_{\\bar{n},\\frac{1}{2}}+o(1)\\,,\n\t\\end{split}\n\\end{align}\nwhere $\\bar{n}=1\/(1+e^{w})$ was computed in Eq.~\\eqref{eq:nbar}. A pedagogical derivation of Eq.~\\eqref{eq:Page-weighted} can be found in Appendix~\\ref{app:general-weighted}. Interestingly, Eq.~\\eqref{eq:Page-weighted} can be summarized by the simple relation $\\braket{S_A}_{w} = \\braket{S_A}_{N=\\bar{N}} - \\frac{f}{2}+o(1)$ except at $f=\\bar{n}=\\frac{1}{2}$, where the Kronecker delta from Eq.~\\eqref{eq:leading-general} leads to additional integrals, as explained in Appendix~\\ref{app:general-weighted}.\n\t\nFor $w=0$ with $\\bar{n}=\\frac{1}{2}$, Eq.~\\eqref{eq:Page-weighted} describes the average entanglement entropy of uniformly weighted eigenstates of the number operator (with respect to the Haar measure). 
This average was computed in Ref.~\\cite{huang_19} as $\\braket{S_A}_{w=0}=f V \\ln{2}+\\frac{\\ln(1-f)}{2}-\\frac{2}{\\pi}\\delta_{f,1\/2}$, which coincides with Eq.~\\eqref{eq:Page-weighted} for $\\bar{n}=\\frac{1}{2}$.\n\t\nSimilarly, one can compute the variance of the weighted entanglement entropy,\n\\begin{align}\n\t\\begin{split}\n\t\t(\\Delta S_A)^2_w&=\\sum_{N=0}^V P_N\\, \\langle S_A^2\\rangle_N-\\big(\\sum_{N=0}^V P_N\\, \\langle S_A\\rangle_N\\big)^2\\\\\n\t\t&=\\bar{n}(1-\\bar{n})\\big(\\ln \\frac{\\bar{n}}{1-\\bar{n}}\\big)^2 f V+o(V)\\,.\n\t\t\\label{eq:DeltaS-w}\n\t\\end{split}\n\\end{align}\nNote that, while the variance $(\\Delta S_A)^2_N$ at a fixed number of particles is exponentially small at large $V$, the weighted variance $(\\Delta S_A)^2_w$ scales linearly in $V$ because of the $O(V^{-1})$ variance $(\\Delta n)^2$ in the filling fraction. For $f\\neq0$ and $0<\\bar{n}<1$, the leading-order term only vanishes at $\\bar{n}=\\frac{1}{2}$. However, we always have $\\lim_{V\\to\\infty}(\\Delta S_A)_w\/\\braket{S_A}_w=0$, {\\it i.e.},\\ the \\emph{relative standard deviation} vanishes in the thermodynamic limit, so that the average entanglement entropy $\\braket{S_A}_w$ and the \\emph{typical} entanglement entropy always coincide.\n\t\n\t\n\\section{PURE FERMIONIC GAUSSIAN STATES} \\label{sec:gaussian}\n\t\nIn this section, we define fermionic Gaussian states and calculate the average and variance of the entanglement entropy for this family of states. Following Ref.~\\cite{bianchi2021page}, we do this first for pure fermionic Gaussian states, for which the number of particles is not fixed. Next, we derive new results for fermionic Gaussian states with a fixed number of particles. In both cases we mimic the idea of a uniformly distributed state. This works because in both cases there is a natural action of a compact group and the set is given by a single orbit of this group action. 
Thus, one can choose the unique Haar measure to generate an ensemble of fermionic Gaussian states.\n\t\nIt may be natural to ask whether the same analysis could also be carried out for bosonic Gaussian states. Unfortunately, the answer is in the negative. The ensemble of bosonic Gaussian states is noncompact with unbounded entanglement entropy since the corresponding invariance group is a noncompact one. So any group-invariant average would diverge. Moreover, the only bosonic Gaussian state that has a fixed particle number is the vacuum with zero particles and zero entanglement. To circumvent the problem, one could fix the \\emph{average} number of particles. Then, the corresponding manifold would be again compact and one can average over all those Gaussian states (in a similar spirit as in Refs.~\\cite{serafini2007canonical, fukuda2019typical}), but the resulting analysis would be rather different from our approach here. It may be possible to use a duality between bosonic and fermionic entanglement entropy of Gaussian states~\\cite{jonsson2021entanglement} for this, but we will not carry out this analysis here.\n\t\n\\subsection{Definition of fermionic Gaussian states}\n\t\nInstead of starting with pure fermionic Gaussian states, it is easier to begin with mixed Gaussian states because the pure ones can be understood as limits of the mixed ones. We choose a Majorana basis $\\{\\gamma_j\\}_{j=1,\\ldots,2V}$ in the $2^V$-dimensional Hilbert space $\\mathcal{H}$ since the corresponding ensemble is easier to describe. This Majorana basis satisfies the anticommutation relation $\\{\\gamma_j,\\gamma_k\\}=\\delta_{jk}$, meaning that the $\\gamma_j$ generate a Clifford algebra; they can be chosen to be Hermitian, $\\gamma_j^\\dagger=\\gamma_j$. Moreover, it holds that $\\operatorname{Tr}\\left(\\prod_{l=1}^m\\gamma_{j_l}\\right)=0$ with $j_{l}\\in\\{1,\\ldots,2V\\}$ and any positive integer $m$ whenever some $\\gamma_j$ appears in this product an odd number of times. 
Otherwise, it holds that $\\operatorname{Tr}\\left(\\prod_{l=1}^m\\gamma_{j_l}\\right)=\\pm2^{V-m\/2}$, which is, up to a factor $2^{-m\/2}$, the dimension of the representation of the Clifford algebra as well as the dimension of the Hilbert space $\\mathcal{H}$.\n\t\nA Gaussian state is then any density operator of the form\n\\begin{equation}\n\t\\hat\\rho(\\gamma)=\\frac{\\exp(-\\sum_{j,k=1}^{2V}q_{jk}\\gamma_j\\gamma_k)}{\\operatorname{Tr} \\exp(-\\sum_{j,k=1}^{2V}q_{jk}\\gamma_j\\gamma_k)}=\\frac{\\exp(-\\gamma^\\dagger Q\\gamma)}{\\operatorname{Tr} \\exp(-\\gamma^\\dagger Q\\gamma)}\n\\end{equation}\nwith the Majorana operator-valued column vector $\\gamma = (\\gamma_1, \\ldots, \\gamma_{2V})^\\dagger$. This form gives the Gaussian states their name. The Hermiticity of $\\hat\\rho(\\gamma)$ implies that the coefficient matrix $Q=\\{q_{jk}\\}_{j,k=1,\\ldots,2V}$ needs to be Hermitian, while the anticommutation relations of the Majorana basis allow us to set the real symmetric part to zero. Indeed, due to\n\\begin{equation}\n\t\\begin{split}\n\t\t\\sum_{j,k=1}^{2V}q_{jk}\\gamma_j\\gamma_k&=\\sum_{j=1}^{2V}q_{jj}\\gamma_j^2+\\sum_{1\\leq j<k\\leq 2V}(q_{jk}-q_{kj})\\gamma_j\\gamma_k\\\\\n\t\t&=\\frac{1}{2}\\sum_{j=1}^{2V}q_{jj}+\\sum_{1\\leq j<k\\leq 2V}(q_{jk}-q_{kj})\\gamma_j\\gamma_k\\,,\n\t\\end{split}\n\\end{equation}\nwhich follows from $\\gamma_k\\gamma_j=\\delta_{jk}\\mathbb{1}-\\gamma_j\\gamma_k$, the symmetric part of $Q$ only shifts the exponent by a constant that drops out of the normalized density operator, so that $Q$ can be taken to be purely imaginary and antisymmetric.\n\t\nThe variance of the entanglement entropy of Gaussian states at fixed weight parameter $w$, $(\\Delta S_A)^2_{\\mathrm{G},w}$, again scales linearly in $V$, with a leading coefficient given by a piecewise expression that distinguishes the cases $n<\\bar{n}$ and $n>\\bar{n}$, with $f\\leq \\frac{1}{2}$. Note that, at $w=0$ (corresponding to $\\bar{n}=\\frac{1}{2}$), the leading-order $O(V)$ term vanishes. 
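These algebraic properties, the anticommutation relation with normalization $\{\gamma_j,\gamma_k\}=\delta_{jk}\mathbb{1}$ and the trace identities, can be checked in a small explicit representation. The sketch below builds the $2V=4$ Majorana operators for $V=2$ modes via a Jordan-Wigner construction (the particular matrix construction is a standard choice made here for illustration, not taken from the text):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

V = 2  # two fermionic modes -> 2V = 4 Majorana operators on a 2^V-dim space
# Jordan-Wigner Majoranas; the 1/sqrt(2) implements {gamma_j, gamma_k} = delta_jk
gammas = [np.kron(X, I2) / np.sqrt(2), np.kron(Y, I2) / np.sqrt(2),
          np.kron(Z, X) / np.sqrt(2), np.kron(Z, Y) / np.sqrt(2)]

# anticommutation relations {gamma_j, gamma_k} = delta_jk * identity
for j, gj in enumerate(gammas):
    for k, gk in enumerate(gammas):
        acomm = gj @ gk + gk @ gj
        expected = np.eye(2 ** V) if j == k else np.zeros((2 ** V, 2 ** V))
        assert np.allclose(acomm, expected)

# trace identities: Tr(gamma_j gamma_k) = 0 for j != k (each factor appears an
# odd number of times), while Tr(gamma_j^2) = 2^{V - m/2} with m = 2
assert abs(np.trace(gammas[0] @ gammas[1])) < 1e-12
assert np.isclose(np.trace(gammas[0] @ gammas[0]), 2 ** (V - 1))
```

The same construction scales to larger $V$ by extending the string of $Z$ factors mode by mode.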
In general, we have $\\lim_{V\\to \\infty} (\\Delta S_A)_{\\mathrm{G},w}\/\\braket{S_A}_{\\mathrm{G},w}=0$, which shows that in the thermodynamic limit the average \\eqref{eq:expansion} also gives the typical value of the entanglement entropy.\n\n\t\n\\section{EXACT RELATION TO RANDOM HAMILTONIANS}\\label{sec:RMT}\n\t\nSo far, we have focused on ensembles of quantum states and computed statistical properties of the entanglement entropy with respect to the following six ensembles: (1a) random states, (2a) random states with fixed total particle number, (3a) weighted averages over random states with fixed total particle number, (1b) random fermionic Gaussian states, (2b) random fermionic Gaussian states with fixed total particle number, and (3b) weighted averages over random fermionic Gaussian states with fixed total particle number. In this section, we shift the focus from ensembles of quantum states to random Hamiltonians, their eigenstates, and their dynamics.\n\t\n\\subsection{Random many-body Hamiltonians}\\label{sec:rig-res-rmt}\n\t\nEnsembles (1a), (2a), and (3a) can be realized using eigenstates (even only ground states) of random Hamiltonians that are traditional random matrices. The ensuing Hamiltonians give an \\emph{exact} correspondence to Page's setting, {\\it i.e.},\\ the averages and variances will agree at all orders (meaning even at finite $V$) when the respective random Hamiltonian satisfies the properties discussed next.\n\t\nWe first consider case (1a), for which the number of particles is not fixed. The state vector in this case explores the entire sphere of the Hilbert space $\\mathcal{H}$. Thus, any random Hamiltonian that creates a Haar-distributed random state vector is suitable. 
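One way to check this claim numerically: the eigenvectors of a GUE matrix are Haar-distributed unit vectors, so the entanglement entropy of a single eigenstate should already agree with Page's average up to exponentially small fluctuations. A sketch (the system size, seed, and tolerance are illustrative choices; the quoted Page average is for a $d_A=d_B$ bipartition):

```python
import numpy as np

rng = np.random.default_rng(0)
V = 10                    # "qubits": Hilbert space dimension 2^V = 1024
dA = dB = 2 ** (V // 2)   # half-system bipartition (f = 1/2)

# GUE Hamiltonian: Hermitian matrix with Gaussian entries
A = rng.normal(size=(dA * dB, dA * dB)) + 1j * rng.normal(size=(dA * dB, dA * dB))
H = (A + A.conj().T) / 2
_, vecs = np.linalg.eigh(H)

# entanglement entropy of one (here: mid-spectrum) eigenstate
psi = vecs[:, dA * dB // 2].reshape(dA, dB)
p = np.linalg.svd(psi, compute_uv=False) ** 2   # Schmidt probabilities
p = p[p > 1e-15]
S = -np.sum(p * np.log(p))

# Page's exact average: sum_{k=dB+1}^{dA dB} 1/k - (dA-1)/(2 dB)
page = sum(1.0 / k for k in range(dB + 1, dA * dB + 1)) - (dA - 1) / (2 * dB)
assert abs(S - page) < 0.2  # fluctuations around the average are tiny
```

Any distribution over Hermitian matrices with Haar-distributed eigenvectors would do here; the GUE is simply the most convenient to sample.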
For instance, let us study the random-matrix Hamiltonian\n\\begin{equation}\n\t\\hat{H}_\\text{1a}=\\sum^{2^V}_{\\kappa,\\lambda=1}C_{\\kappa\\lambda}\\ket{v_\\kappa}\\bra{v_\\lambda},\n\\end{equation}\nwhere $\\{\\ket{v_\\lambda}\\}$ is an orthonormal basis of the Hilbert space and $C=\\{C_{\\kappa\\lambda}\\}$ is a random Hermitian matrix with Haar-distributed eigenvectors. To get Haar-distributed eigenvectors, the diagonalization $C=U^\\dagger E U$ must involve random matrices $U$ drawn from the Haar measure of ${\\rm U}(2^V)$, while the distribution of the eigenvalues appearing in the diagonal matrix $E$ can be arbitrary. A simple and common example of such a distribution for $C$ is given by the GUE~\\cite{mehta2004, Forrester_2010, akemann2011},\n\\begin{equation}\\label{GUE-dist}\n\t\\begin{split}\n\t\tP(\\hat{H}_\\text{1a})=&2^{-2^{V-1}}\\pi^{-2^{2V-1}}\\exp\\left[-\\frac{1}{2}\\sum_{\\kappa,\\lambda=1}^{2^V}|C_{\\kappa\\lambda}|^2\\right]\\\\\n\t\t=&2^{-2^{V-1}}\\pi^{-2^{2V-1}}e^{-\\operatorname{Tr}\\hat{H}_\\text{1a}^2\/2}.\n\t\\end{split}\n\\end{equation}\n\t\nTo relate the Hamiltonian $\\hat{H}_\\text{1a}$ to many-body Hamiltonians, we rewrite it into a polynomial in fermionic creation and annihilation operators\n\\begin{align}\\label{H1a}\n\t\\hat{H}_\\text{1a}\\;\\;&=\\sum_{l=0}^{2V}\\sum_{j_1,\\ldots,j_l=1}^{2V}c^{(l)}_{j_1\\ldots j_l}\\,\\hat{\\xi}_{j_1}\\cdots\\hat{\\xi}_{j_l},\n\\end{align}\nwith $\\{\\hat{\\xi}_j\\}_{j=1,\\ldots,2V} = (\\hat{f}_1, \\dots, \\hat{f}_V, \\hat{f}_1^\\dagger, \\dots, \\hat{f}_V^\\dagger)$. 
The coefficients $c^{(l)}_{j_1,\\ldots,j_{l}}$ satisfy symmetries that reflect the anticommutation relations, $\\{\\hat{f}_k,\\hat{f}_l\\} = \\{\\hat{f}_k^\\dagger,\\hat{f}_l^\\dagger\\}=0$ and $\\{\\hat{f}_k,\\hat{f}_l^\\dagger\\} = \\delta_{kl}$, the Hermiticity of $\\hat{H}_\\text{1a}$, and the fact that in each sum over $c^{(l)}_{j_1,\\ldots,j_{l}}$ there are exactly $l$ operators involved that cannot be reduced to a smaller order of a many-body interaction. Exploiting the unitary matrix $T$ in Eq.~\\eqref{Tdef}, in particular going into a Majorana basis, shows that $\\tilde{c}^{(l)}_{k_1,\\ldots,k_{l}}=\\sum_{j_1,\\ldots,j_l=1}^{2V}c^{(l)}_{j_1,\\ldots,j_{l}}\\prod_{a=1}^lT_{j_a k_a}$ is totally skew symmetric in the indices and is real when $l(l-1)\/2$ is even and imaginary when $l(l-1)\/2$ is odd.\n\t\nThe statistical distribution of the coefficients $c^{(l)}_{j_1,\\ldots,j_{l}}$ is determined by the distribution of the matrix $C$. The best way to see this is to go into the Majorana basis $\\gamma_1,\\ldots,\\gamma_{2V}$ via relation~\\eqref{gammafrel}. Then, one needs to take into account the normalization $\\gamma_j^2=\\tfrac{1}{2}\\mathbb{1}_{2^V}$ to determine this distribution, which leads to independent Gaussian distributions for the independent coefficients $\\tilde{c}^{(l)}_{k_1\\ldots k_l}$ with $1\\leq k_1<\\ldots<k_l\\leq 2V$.\n\t\nRealizing the weighted ensemble (3a) with $w>0$ is also possible, but must be largely done by hand, {\\it i.e.},\\ we would organize the eigenstates of a random Hamiltonian based on their particle number and then choose one at random using the statistical weight encoded by $w$.\n\t\nMany-body interacting Hamiltonians studied in nuclear physics~\\cite{monfrench1975, FRENCH1970449, BOHIGAS1971261, BOHIGAS1971383, PhysRev.120.1698, FRENCH19715} are related to these kinds of Hamiltonians. They, as well as the SYK models, are called embedded random matrices~\\cite{RevModPhys.53.385, Guhr1998, Benet:2000cy, Kota2001, Kota2014}. 
For instance, for a $q$-body Hamiltonian, we set $c^{(l)}_{j_1\\ldots j_l,k_1\\ldots k_l}=0$ for all $l\\neq q$ and choose the above Gaussian distribution for $c^{(l)}_{j_1\\ldots j_l,k_1\\ldots k_l}$. As in case (1a) and in the SYK model with fixed $q$, the many-body Hamiltonian may satisfy additional global symmetries, so that subleading terms may deviate from our results. However, we expect that a mixture of $q$-body interactions should speed up the convergence to the leading-order result in the thermodynamic limit $V\\to\\infty$.\n\t\n\\subsection{Random quadratic Hamiltonians}\n\t\nCase (1b) for random pure fermionic Gaussian states is obtained from $\\hat{H}_\\text{1a}$ by setting all coefficients $c^{(l)}_{i_1, \\ldots, i_{l}} = 0$ whenever $l\\neq2$ in Eq.~\\eqref{H1a}; the resulting random quadratic Hamiltonian reads\n\\begin{align}\n\t\\hat{H}_\\text{1b}&=\\sum^{2V}_{i,j=1}c^{(2)}_{ij}\\,\\hat{\\xi}_i\\hat{\\xi}_j\\,,\n\\end{align}\nwith coefficients $c^{(2)}_{ij}$ drawn from a probability distribution that depends only on matrix invariants of $TC_{(2)}T^T$ with $C_{(2)}=\\{c^{(2)}_{ij}\\}_{i,j=1,\\ldots,2V}$, such as traces $\\operatorname{Tr} (TC_{(2)}T^T)^{2k}$. Then the invariance under ${\\rm O}(2V)$ is guaranteed, which is needed for the uniformly distributed pure fermionic Gaussian states that are the eigenvectors of this Hamiltonian. The Gaussian choice for the distribution of the coefficients $c^{(2)}_{ij}$ corresponds to independent Gaussian random variables for the independent matrix elements $\\tilde{c}^{(2)}_{jk}$ with $1\\leq j<k\\leq 2V$.\n\t\nThe coefficient of the leading finite-size correction turns out to be positive, implying that the asymptotic entanglement entropy is approached from below as the system size increases.\n\t\n\\subsection{Quantum-chaotic quadratic model} \\label{sec:qchaoticquadratic}\n\t\nNext, we focus on a quadratic model, namely, a model whose Hamiltonian is bilinear in fermionic creation and annihilation operators. 
We explore how well the results for fermionic Gaussian states from Sec.~\\ref{sec:gaussian} predict the behavior of the entanglement entropy in eigenstates of a particle-number-conserving quadratic model that exhibits {\\it single-particle} quantum chaos. By single-particle quantum chaos we mean that the statistical properties of the single-particle energy spectrum are described by the Wigner-Dyson statistics of random matrix theory. Hence, we refer to this model as a quantum-chaotic quadratic model~\\cite{lydzba2021entanglement}. This is to be contrasted with the model in Sec.~\\ref{sec:qchaoticinteracting}, which exhibits {\\it many-body} quantum chaos and which we referred to as a quantum-chaotic interacting model.\n\t\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=\\linewidth]{figure15}\n\t\\caption{Average entanglement entropy density $\\bar S\/[(V\/2)\\ln 2]$ in the 3D Anderson model~(\\ref{def_H_Anderson}) at $\\bar n=\\frac{1}{2}$. Main panel: plot of $\\bar S\/[(V\/2)\\ln 2]$ versus $f$ at disorder strength $W=1$, in a cubic lattice with $V=8000$ sites (symbols). The results are obtained by averaging over 100 randomly selected many-body eigenstates and 10 Hamiltonian realizations. The solid line is the corresponding thermodynamic limit result for fermionic Gaussian states given by $\\langle S_A \\rangle_{{\\rm G},w=0}$ in Eq.~(\\ref{eq:expansion}). Inset: plot of $\\delta s_{{\\rm G},w=0} = (\\langle S_A \\rangle_{{\\rm G},w=0} - \\bar S)\/[(V\/2) \\ln 2]$ versus $1\/\\sqrt{V}$ at $f=\\frac{1}{2}$, for $W=1$ and 3, where $\\langle S_A \\rangle_{{\\rm G},w=0}$ corresponds to the fermionic Gaussian states [Eq.~(\\ref{eq:sum-chem-gauss})] at $w=0$ and the same $V$ as $\\bar S$. The results for $\\bar S$ are obtained by averaging over $10^2$ to $10^4$ randomly selected many-body eigenstates and over 5 to 500 Hamiltonian realizations. Lines are linear fits $a_0 + a_1\/\\sqrt{V}$ to the results for $V \\geq 2000$. 
We get $a_0 = 2.4 \\times 10^{-4}$ and $a_1 = 0.03$ for $W=1$ (solid line), and $a_0 = 3.0 \\times 10^{-4}$ and $a_1 = 0.10$ for $W=3$ (dashed line). The numerical results for $\\bar S$ are from Ref.~\\cite{lydzba2021entanglement}.} \\label{fig:S_Anderson_scaling}\n\\end{figure}\n\t\nA well-known quadratic model that exhibits single-particle quantum chaos is the 3D Anderson model below the localization transition. The Hamiltonian of this model reads\n\\begin{equation} \\label{def_H_Anderson}\n\t\\hat H_{\\rm And} = -t \\sum_{\\langle i,j\\rangle} (\\hat f_i^\\dagger \\hat f^{}_j + \\hat f_j^\\dagger \\hat f^{}_i) + \\frac{W}{2}\\sum_i \\varepsilon_i \\hat n_i \\, , \n\\end{equation}\nwhere the first sum runs over nearest-neighbor sites on a cubic lattice. The operator $\\hat f_j^\\dagger$ ($\\hat f^{}_j$) creates (annihilates) a spinless fermion at site $j$, and $\\hat n_j = \\hat f_j^\\dagger \\hat f^{}_j$ is the site occupation operator. The operators $\\hat f_j^\\dagger$ and $\\hat f^{}_j$ satisfy the standard anticommutation relations $\\{\\hat{f}_l,\\hat{f}_k\\} = \\{\\hat{f}_l^\\dagger,\\hat{f}_k^\\dagger\\} = 0$ and $\\{\\hat{f}_l,\\hat{f}_k^\\dagger\\} = \\delta_{lk}$. The single-site occupation energies $\\varepsilon_i \\in [-1,1]$ are independently and identically distributed random numbers drawn from a box distribution. The 3D Anderson model exhibits a delocalization-localization transition at the critical disorder $W_c \\approx 16.5$ (see, {\\it e.g.},\\ Refs.~\\cite{kramer_mackinnon_93, markos_06, evers_mirlin_08, suntajs_prosen_21} for reviews). Our focus here is on disorder strengths well below this transition, $W \\ll W_c$. 
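Single-particle quantum chaos in this model can be probed directly with the gap-ratio statistic of the single-particle spectrum, which avoids spectral unfolding. A sketch (lattice size, disorder strength, number of realizations, open boundaries, and the acceptance window are all illustrative choices; for $W\ll W_c$ one expects the GOE value $\bar r\approx 0.53$, far from the Poisson value $\bar r\approx 0.39$):

```python
import numpy as np

def anderson_3d(L, W, rng):
    """Single-particle 3D Anderson Hamiltonian on an L^3 cube (open boundaries)."""
    V = L ** 3
    H = np.zeros((V, V))
    idx = lambda x, y, z: (x * L + y) * L + z
    for x in range(L):
        for y in range(L):
            for z in range(L):
                i = idx(x, y, z)
                H[i, i] = 0.5 * W * rng.uniform(-1, 1)  # (W/2) * eps_i
                for dx, dy, dz in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]:
                    if x + dx < L and y + dy < L and z + dz < L:
                        j = idx(x + dx, y + dy, z + dz)
                        H[i, j] = H[j, i] = -1.0        # hopping, t = 1
    return H

rng = np.random.default_rng(1)
ratios = []
for _ in range(4):  # average over a few disorder realizations
    E = np.linalg.eigvalsh(anderson_3d(10, 3.0, rng))
    E = E[len(E) // 4: 3 * len(E) // 4]   # middle of the spectrum
    gaps = np.diff(E)
    r = np.minimum(gaps[1:], gaps[:-1]) / np.maximum(gaps[1:], gaps[:-1])
    ratios.append(r.mean())
rbar = np.mean(ratios)
assert 0.45 < rbar < 0.60  # Wigner-Dyson (GOE ~ 0.531), not Poisson (~ 0.386)
```

The gap ratio uses consecutive level spacings only, so no unfolding of the spectrum is required.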
We stress that, when referring to single-particle quantum chaos in the context of the 3D Anderson model~\\eqref{def_H_Anderson}, we have in mind the fixed Hilbert space $\\mathcal{H}_1$ as the model of a single particle.\n\t\nEven though it has been known for decades that the single-particle spectral properties of the 3D Anderson model in the delocalized regime are well described by the Wigner-Dyson statistics~\\cite{altshuler_shklovskii_86, altshuler_zharekeshev_88, shklovskii_shapiro_93}, the entanglement entropy of energy eigenstates was studied only recently~\\cite{lydzba2021entanglement}. The latter study showed that the volume-law contribution of typical many-body eigenstates is accurately described by the volume-law term of the asymptotic expression in Eq.~(\\ref{eq:thermodynamic-limit}) for $n=\\frac{1}{2}$, which is the same as that in Eq.~(\\ref{eq:expansion}) for $\\bar n=\\frac{1}{2}$. This result suggests that the leading (volume-law) term in the eigenstate entanglement entropy of the 3D Anderson model deep in the delocalized regime is universal. In the main panel of Fig.~\\ref{fig:S_Anderson_scaling}, we plot the average eigenstate entanglement entropy density $\\bar S\/[(V\/2)\\ln 2]$ of randomly selected eigenstates as a function of the subsystem fraction $f$. The results show remarkable agreement with the corresponding thermodynamic limit expression for the weighted average entanglement entropy over fermionic Gaussian states $\\langle S_A\\rangle_{{\\rm G},w=0}$ in Eq.~(\\ref{eq:expansion}).\n\t\nIn spite of the latter agreement, we note that the average entanglement entropy over fermionic Gaussian states does not describe the first subleading term of the average entanglement entropy in the 3D Anderson model. As shown in the inset of Fig.~\\ref{fig:S_Anderson_scaling}, the first subleading term in the latter model scales $\\propto \\sqrt{V}$ at $f=\\frac{1}{2}$. 
No such term appears in $\\langle S_A\\rangle_{{\\rm G},w=0}$ in Eq.~(\\ref{eq:expansion}). The fact that, for the 3D Anderson model, the subleading $O(\\sqrt{V})$ term is not described by Eq.~(\\ref{eq:expansion}) is in stark contrast to what we found in Sec.~\\ref{sec:qchaoticinteracting} for a quantum-chaotic {\\it interacting} model. In the latter case, subleading terms that are $O(1)$ or greater in the physical model are properly described by the average $\\langle S_A\\rangle_N$ in Eq.~(\\ref{eq:Scenter}). Hence, the origin of the $O(\\sqrt{V})$ contribution to the entanglement entropy of eigenstates in the 3D Anderson model remains an open question. Such a contribution is not present in our analytical calculations of the averages over Gaussian states.\n\t\n\\subsection{Translationally invariant noninteracting fermions} \\label{sec:translational}\n\t\nNext, we consider a paradigmatic quadratic model that does not exhibit quantum chaos at the single-particle level: translationally invariant noninteracting fermions, for which the Hamiltonian is a sum of hopping terms over nearest-neighbor sites [the first term in Eq.~\\eqref{def_H_Anderson}]. For simplicity, we focus on the 1D case\n\\begin{equation} \\label{def_H_Tinvariant}\n\t\\hat H_\\text{T}^\\text{1D} = - \\sum_{i=1}^{V} \\left( \\hat f_i^\\dagger \\hat f^{}_{i+1} + \\hat f_{i+1}^\\dagger \\hat f^{}_{i} \\right) ,\n\\end{equation}\nwith periodic boundary conditions, $\\hat f^{}_{V+1} \\equiv \\hat f^{}_1$. 
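The entanglement entropy of an individual many-body eigenstate of a quadratic Hamiltonian can be computed efficiently from the one-body correlation matrix restricted to the subsystem (a standard Gaussian-state technique). A sketch for the periodic hopping chain above, picking one randomly occupied half-filled eigenstate (the chain length and seed are illustrative):

```python
import numpy as np

V = 36
# hopping matrix of the periodic chain (t = 1)
h = np.zeros((V, V))
for i in range(V):
    h[i, (i + 1) % V] = h[(i + 1) % V, i] = -1.0
_, phi = np.linalg.eigh(h)   # columns = single-particle eigenmodes

rng = np.random.default_rng(2)
occ = rng.choice(V, size=V // 2, replace=False)  # occupied modes = one eigenstate
C = phi[:, occ] @ phi[:, occ].T                  # C_jk = <f_j^dagger f_k> (real)

VA = V // 2                                      # subsystem A = first half
lam = np.linalg.eigvalsh(C[:VA, :VA])            # restricted correlation matrix
lam = np.clip(lam, 1e-12, 1 - 1e-12)
S = -np.sum(lam * np.log(lam) + (1 - lam) * np.log(1 - lam))
assert 0.0 < S <= VA * np.log(2)  # Gaussian-state bound: S <= V_A ln 2
```

Averaging this over many random occupation sets approximates the full eigenstate average at a cost polynomial in $V$, which is what makes the $V=36$ averages over all $2^V$ eigenstates discussed below feasible mode by mode.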
The single-particle eigenenergies of the model in Eq.~(\\ref{def_H_Tinvariant}) are given by the well-known expression $\\epsilon_n = -2\\cos(2\\pi n\/V)$ with $n = 0, 1, ..., V-1$, which makes apparent that the statistical properties of the single-particle spectrum are not described by the Wigner-Dyson statistics.\n\t\nThe average eigenstate entanglement entropy of the model in Eq.~(\\ref{def_H_Tinvariant}) was studied in Ref.~\\cite{vidmar2017entanglement} (before the universal predictions for the quantum-chaotic quadratic models and the fermionic Gaussian states were derived). The numerical calculations in Ref.~\\cite{vidmar2017entanglement} were carried out by averaging the entanglement entropy over the full set of $2^V$ many-body eigenstates. Remarkably, the numerical results were found to converge rapidly to the thermodynamic limit result, as shown for the case of $f=\\frac{1}{2}$ in the inset of Fig.~\\ref{fig:S_Tinvariant_scaling}. Thanks to that scaling, we find the volume-law coefficient $s^\\infty_\\text{T}$ of the average entanglement entropy $\\bar S_\\text{T} = s^\\infty_\\text{T} V_A \\ln2$ at $f=\\frac{1}{2}$ to high numerical accuracy, $s^\\infty_\\text{T} = 0.5378(1)$, which is consistent with the result reported in Ref.~\\cite{vidmar2017entanglement}. This is to be contrasted to the volume-law coefficient $s^\\infty_{{\\rm G},w=0}$ of fermionic Gaussian states $\\langle S_A\\rangle_{{\\rm G},w=0} = s^\\infty_{{\\rm G},w=0} V_A \\ln2$ from Eq.~(\\ref{eq:expansion}), which yields $s^\\infty_{{\\rm G},w=0} = 0.5573$. We then see that $s^\\infty_\\text{T}$ and $s^\\infty_{{\\rm G},w=0}$ are close but different. The full curve for $S_\\text{T}$ as a function of $f$, for $V=36$, is shown in Fig.~\\ref{fig:S_Tinvariant_scaling} together with the full curve for $\\langle S_A\\rangle_{{\\rm G},w=0}$ from Eq.~(\\ref{eq:expansion}). 
They are clearly different and, given the abovementioned fast convergence of the numerical results with $V$, we expect the differences to remain in the thermodynamic limit. The exact analytical form of the $\\bar S_\\text{T}(f)$ curve for translationally invariant free fermions remains elusive, but tight bounds have already been calculated~\\cite{hackl2019average}.\n\t\n\\begin{figure}[!t]\n\t\\centering\n\t\\includegraphics[width=\\linewidth]{figure16}\n\t\\caption{Average entanglement entropy density $\\bar S\/[(V\/2) \\ln 2]$ of translationally invariant noninteracting fermions in a one-dimensional lattice, described by the Hamiltonian in Eq.~(\\ref{def_H_Tinvariant}). Main panel: plot of $\\bar S\/[(V\/2) \\ln 2]$ versus $f$ in the lattice with $V=36$ sites. The results are obtained by averaging over all $2^V$ many-body eigenstates. The solid line is the corresponding thermodynamic limit result for fermionic Gaussian states given by $\\langle S_A \\rangle_{{\\rm G},w=0}$ in Eq.~\\eqref{eq:expansion}. Inset: plot of $\\delta s_{\\rm T} = (\\bar S_{\\rm T} - \\bar S)\/([V\/2] \\ln 2)$ versus $1\/V$ at $f=\\frac{1}{2}$, where $\\bar S_{\\rm T}\/([V\/2] \\ln 2) = 0.5378$. The solid line shows the function $a\/V^\\zeta$, with $a = 0.23$ and $\\zeta=1.96$. The numerical results for $\\bar S$ are from Ref.~\\cite{vidmar2017entanglement}.}\\label{fig:S_Tinvariant_scaling}\n\\end{figure}\n\t\nWe conclude by noting that, for the translationally invariant quantum-chaotic interacting model studied in Sec.~\\ref{sec:qchaoticinteracting}, the average eigenstate entanglement entropy is accurately described by the corresponding entanglement entropy of general pure states. 
The role of Hamiltonian symmetries in the average entanglement entropy of energy eigenstates in quantum-chaotic interacting and quantum-chaotic quadratic models remains an important question to be explored in future studies.\n\n\\begin{table*}[!t]\n\t\\renewcommand{\\arraystretch}{1.7}\n\t\\hspace*{-0.6cm}\\begin{center}\\begin{tabular}{l||ll|ll}\n\t\t\t& \\textbf{(a) General pure states} & & \\multicolumn{2}{l}{\\textbf{(b) Pure fermionic Gaussian states}} \\\\\n\t\t\t\\hline\n\t\t\t\\hline\n\t\t\t\\multirow{2}{*}{\\shortstack[l]{(1) no\\\\ particle\\\\ number}} & $\\braket{S_A}=a V\\!-\\!b+O(2^{-V})$ \\& \\textbf{exact} & $\\rightarrow$ \\eqref{eq:Page-therm}, Fig.~\\ref{fig:Page-discon}, \\cite{page1993average} & $\\braket{S_A}_{\\mathrm{G}}=a V\\!+\\!b\\!+\\!O(\\frac{1}{V})$ \\& \\textbf{exact} & $\\rightarrow$ \\eqref{eq:Gaussian-average}, Fig.~\\ref{fig:Page-Gaussian}, \\cite{bianchi2021page}\\\\\n\t\t\t& $(\\Delta S_A)^2=\\alpha e^{-\\beta V}+o(e^{-\\beta V})$ & $\\rightarrow$ \\eqref{eq:variance-page}, \\cite{vivo_pato_16} & $(\\Delta S_A)^2_{\\mathrm{G}}=a+o(1)$ & $\\rightarrow$ \\eqref{eq:Gaussian-variance}, \\cite{bianchi2021page}\\\\\n\t\t\t\\hline\n\t\t\t\\multirow{2}{*}{\\shortstack[l]{(2) fixed\\\\particle\\\\ number}} & $\\braket{S_A}_N=a V\\!-\\!b\\sqrt{V}\\!-\\!c\\!+\\!o(1)$ & $\\rightarrow$ \\eqref{eq:leading-general}, Fig.~\\ref{fig:general-N-visual} & $\\braket{S_A}_{\\mathrm{G},N}=aV\\!-\\!\\frac{b}{V}\\!+\\!O(\\frac{1}{V^2})$ \\& \\textbf{exact} & $\\rightarrow$ \\eqref{eq:thermodynamic-limit}\\\\\n\t\t\t& $(\\Delta S_A)^2_N=\\alpha\\, V^{\\frac{3}{2}}\\operatorname{e}^{-\\beta V}$ & $\\rightarrow$ \\eqref{eq:DeltaS-N} & $(\\Delta S_A)^2_{\\mathrm{G},N}=a\\!+\\!o(1)$ & $\\rightarrow$ \\eqref{eq:variance-Gaussian}, Fig.~\\ref{fig:variance}\\\\\n\t\t\t\\hline\n\t\t\t\\multirow{2}{*}{\\shortstack[l]{(3) fixed\\\\ weight}} & $\\braket{S_A}_w=aV\\!+\\!b\\!+\\!c\\sqrt{V}\\!+\\!o(1)$ & $\\rightarrow$ \\eqref{eq:Page-weighted} & 
$\\braket{S_A}_{\\mathrm{G},w}=\\!a V\\!+\\!b\\!+\\!\\tfrac{c}{\\sqrt{V}}\\!+\\!\\tfrac{d}{V}\\!+\\!o(\\tfrac{1}{V})$ & $\\rightarrow$ \\eqref{eq:expansion}, Fig.~\\ref{fig:Gaussian-mu-visual}\\\\\n\t\t\t& $(\\Delta S_A)^2_w= a V+o(V)$ & $\\rightarrow$ \\eqref{eq:DeltaS-w} & $(\\Delta S_A)^2_{\\mathrm{G},w}=a V+o(V)$ & $\\rightarrow$ \\eqref{eq:variance-Gw}\n\t\\end{tabular}\\end{center}\n\t\\caption{Overview of the results discussed in this tutorial. We list the main results, indicate up to which order in $V$ we derived the respective expressions (and if there exists an exact formula), and where the respective formulas can be found (equations, figures, references). Most results for fixed particle number are new, but if special cases or the leading order term were already known before, we cite the relevant works after the equation in the main text.}\n\t\\label{tab:results}\n\\end{table*}\n\t\n\\section{SUMMARY AND OUTLOOK}\n\t\nIn this section, we briefly summarize the key results discussed in this tutorial, and give an outlook of where we envision the methods introduced to be applicable. We also mention some open questions in the context of the entanglement entropy of typical pure states.\n\t\n\\subsection{Summary}\n\t\nWe provided a pedagogical introduction to the current understanding of the behavior of the entanglement entropy of pure quantum states. We derived analytical expressions for the average entanglement entropy of general and Gaussian states, and considered states with and without a fixed number of particles. A comprehensive summary of the results discussed can be found in Table~\\ref{tab:results}, where we contrast results for: (1) arbitrary particle number, (2) fixed particle number $N$ and (3) fixed weight parameter $w$ for both (a) general pure states and (b) Gaussian states. 
This yields the six state ensembles (1a) through (3b).\n\t\nFor both Gaussian and general pure states, the leading-order behavior of $\\braket{S_A}_N$ at half-filling $N=V\/2$ coincides with that of the full average without fixing the total particle number, while the next-to-leading-order terms differ. For general pure states, we confirmed an additional contribution proportional to $\\sqrt{V}$ at $f=\\frac{1}{2}$ in Eq.~\\eqref{eq:leading-general}, previously found in Ref.~\\cite{vidmar2017entanglement2}. For Gaussian states, we derived the exact formula, which does not contain such a term and has a next-to-leading-order term of order $1\/V$ [Eq.~\\eqref{eq:Gaussian-expansion}]. However, we did find a contribution of order $1\/\\sqrt{V}$ in the asymptotic average $\\braket{S_A}_{\\mathrm{G},w}$ at fixed $w$ with $f=\\bar{n}$, {\\it i.e.},\\ whenever the subsystem fraction $f$ equals the average filling ratio $\\bar{n}=\\braket{N\/V}=1\/(1+e^{w})$.\n\t\nWe traced back these contributions to the nonanalytic behavior of the average entanglement entropy as a function of the subsystem fraction $f$ and the filling ratio $n$. In the case of Gaussian states, we identified the additional particle-subsystem symmetry $n\\leftrightarrow f$, which is responsible for the $1\/\\sqrt{V}$ term. From a mathematical perspective, the origin of the $\\sqrt{V}$ term in $\\braket{S_A}_N$ is therefore the same as that of the $1\/\\sqrt{V}$ term in $\\braket{S_A}_{\\mathrm{G},w}$, namely, both calculations involve the average of a nonanalytic function with respect to an approximately Gaussian statistical distribution. 
Square root powers of $V$ appear exactly when the mean of the Gaussian lies in a neighborhood of the nonanalyticity, {\\it i.e.},\\ there is a jump in one of the function's derivatives.\n\t\nFinally, we connected the results obtained for the average entanglement entropy in the six ensembles of states mentioned before to the average entanglement entropy in eigenstates of specific random matrices and of physical Hamiltonians. Perhaps the most surprising result in the context of quantum-chaotic interacting Hamiltonians is that not only does the leading term in the average agree with the corresponding ensemble average, but so do subleading terms that are $O(1)$ or larger in the volume, {\\it e.g.},\\ $O(\\sqrt{V})$. Why this is so is a question that deserves to be further explored. Equally intriguing is to understand why the same is not true in the case of quantum-chaotic quadratic Hamiltonians.\n\t\n\\subsection{Outlook}\n\t\nLooking forward, an important question is how general the methods and results discussed here are. We focused on fermionic systems, for which we can compare general pure states with Gaussian pure states, and unveiled the effect of fixing the total particle number. Our results for general pure states apply equally to hard-core bosons and spin-$\\frac{1}{2}$ systems. In the latter, the total magnetization plays the role that the total particle number plays in fermionic and hard-core boson models.\n\t\n\\subsubsection{Typical eigenstate entanglement entropy as a diagnostic of quantum chaos and integrability}\n\t\nAs mentioned in the Introduction, a novel picture that the recent numerical studies such as those discussed in Sec.~\\ref{sec:relphysham} have started to consolidate is that typical many-body eigenstates of quantum-chaotic interacting Hamiltonians have entanglement properties similar to those of typical pure states in the Hilbert space. 
In parallel, typical many-body eigenstates of quantum-chaotic quadratic Hamiltonians have entanglement properties similar to those of typical Gaussian pure states. We quantified how similar they are by showing that typical eigenstates of a specific quantum-chaotic interacting Hamiltonian exhibit $O(1)$ and greater terms in the entanglement entropy that are the same as in typical pure states in the Hilbert space. For typical many-body eigenstates of quantum-chaotic quadratic Hamiltonians, we showed that the $O(V_A)$ term is the same as in typical Gaussian pure states. These statements (for $V_A=fV\\leq V\/2$) are true independently of whether one deals with states in which the number of particles is fixed or not.\n\t\nIn the context of Hamiltonians that do not exhibit many-body quantum chaos, namely, in which the many-body level spacing distributions are not described by the Wigner surmise~\\cite{d2016quantum}, we showed that typical many-body energy eigenstates of translationally invariant noninteracting fermions exhibit an $O(V_A)$ term that is qualitatively similar (but not equal) to that obtained for typical Gaussian pure states, namely, the prefactor of such a term is a function of the subsystem fraction $f$. The same behavior was found in Ref.~\\cite{leblond_mallayya_19} for the typical entanglement entropy of many-body eigenstates of the integrable spin-$\\frac{1}{2}$ XXZ chain. 
This is fundamentally different from what happens in typical many-body eigenstates of quantum-chaotic interacting Hamiltonians, in which the prefactor is maximal (it depends only on the filling $n$) as in typical pure states.\n\t\nHence, as conjectured in Ref.~\\cite{leblond_mallayya_19}, the entanglement entropy of typical many-body energy eigenstates can be used to distinguish models that exhibit many-body quantum chaos (whose level spacing distributions are described by the Wigner surmise, and are expected to thermalize when taken far from equilibrium~\\cite{d2016quantum}) from those that do not. This is a welcome addition to the toolbox for identifying quantum chaos as it relies on the properties of the eigenstates as opposed to the properties of the eigenenergies. Other entanglement-based diagnostics of quantum chaos and integrability have been proposed in recent years, among them are the operator entanglement growth~\\cite{prosen07, alba19, alba21}; the diagonal entropy~\\cite{santos_11, rigol_16}, the mutual information scrambling~\\cite{alba19}, and entanglement revivals~\\cite{modak20} after quantum quenches; the tripartite operator mutual information~\\cite{hosur16, ryu21}; and the entanglement negativity between two subsystems in a tripartition of many-body energy eigenstates~\\cite{grover20}.\n\t\nIt is important to emphasize that an advantage of using the entanglement properties of energy eigenstates, instead of the properties of the eigenenergies, is that one does not need to resolve all the symmetries of the model nor does one need to do an unfolding of the spectrum, which are of paramount importance to identify quantum chaos using the eigenenergies as discussed in Sec.~\\ref{sec:localspec}. In addition, in comparison to some of the entanglement diagnostics that were mentioned above, one does not need to study dynamics. 
Further work is needed on interacting integrable models to establish whether the leading term of the entanglement entropy of typical many-body energy eigenstates is universal or not, and to understand the nature of the subleading terms. So far, results are available only for the integrable spin-$\\frac{1}{2}$ XXZ chain~\\cite{leblond_mallayya_19}.\n\t\n\\subsubsection{Beyond qubit-based systems}\n\t\nThe analytical tools introduced and explained in this tutorial can be used beyond the fermionic systems we studied (and beyond the spin-$\\frac{1}{2}$ and hard-core boson systems we mentioned), and facilitate the study of bosonic systems with a fixed particle number. To be concrete, a bosonic subsystem with $V_A$ out of $V$ bosonic modes and total particle number $N$ can be treated analogously to Eq.~\\eqref{eq:Scenter}, but with dimensions respecting the bosonic commutation statistics, {\\it i.e.},\\ \n\t\\begin{align}\\label{eq:boson-dim}\n\t\td_A(N_A)&=\\frac{(N_A+V_A-1)!}{N_A!(V_A-1)!}\\,,\\\\\n\t\td_B(N-N_A)&=\\frac{(N-N_A+V-V_A-1)!}{(N-N_A)!(V-V_A-1)!}\\,,\\\\\n\t\td_N&=\\frac{(N+V-1)!}{N!(V-1)!}\\,,\n\t\\end{align}\nwhich follows from the combinatorics of sampling with replacement without regard to order, {\\it e.g.},\\ for $d_A$, we ask how many ways there are to distribute $N_A$ indistinguishable particles over $V_A$ sites (where each site can hold arbitrarily many particles). Again, it holds that ${\\sum}_{N_A=0}^N d_A(N_A)d_B(N-N_A)=d_N$.\n\t\nFollowing Page's approach, we again choose a uniformly distributed random vector state in the Hilbert space $\\mathcal{H}_N$. Thus, the invariance of the state under the unitary group ${\\rm U}(d_N)$, now with a different dimension $d_N$, still applies. Therefore, we can follow the same strategy as in Sec.~\\ref{sec:page-fixedN}; in particular, we can exploit Eq.~\\eqref{eq:Scenter} with dimensions~\\eqref{eq:boson-dim}. 
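As a quick numerical sanity check of the combinatorics above (not part of the original derivation), one can verify the sum rule $\sum_{N_A=0}^{N} d_A(N_A)\,d_B(N-N_A)=d_N$, which is an instance of the Vandermonde convolution. The sketch below uses Python's `math.comb` for the stars-and-bars counts; the helper name `d_bos` and the example sizes are illustrative choices, not from the text:

```python
from math import comb

# Stars-and-bars dimension for N indistinguishable bosons on V modes:
# d(N, V) = (N + V - 1)! / (N! (V - 1)!) = C(N + V - 1, N)
def d_bos(N, V):
    return comb(N + V - 1, N)

# Illustrative sizes: V = 10 modes, subsystem of V_A = 4 modes, N = 6 bosons.
V, V_A, N = 10, 4, 6
V_B = V - V_A

# Sum over all ways of splitting the N particles between A and B.
total = sum(d_bos(N_A, V_A) * d_bos(N - N_A, V_B) for N_A in range(N + 1))

assert total == d_bos(N, V)  # both sides equal C(15, 6) = 5005
print(total)                 # -> 5005
```

The same check works for any $(V, V_A, N)$, which is a convenient guard when adapting the formulas to spin-$s$ or distinguishable-particle Hilbert spaces.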
This yields, in the thermodynamic limit with fixed $f\\in(0,\\frac{1}{2})$ and $n\\in(0,\\infty)$,\\footnote{We evaluate Eq.~\\eqref{eq:average-int}, where $\\varrho(n_A)$ and $\\varphi(n_A)$ slightly change from expanding Eq.~\\eqref{eq:boson-dim} via a saddle point approximation. This yields the normal distribution $\\varrho(n_A)$, with mean $\\bar{n}_A=fn$ and variance $\\sigma^2=(1-f)f(1+n)n\/V$, and $\\varphi(n_A)$ in Eq.~\\eqref{eq:psi} becomes\n\\begin{align*}\n\t\\begin{split}\n\t\t\\varphi(n_A)&=[n_A\\ln(n_A)+f\\ln(f)+n\\ln[(1+n)\/n]\\\\\n\t\t&\\quad+\\ln(1+n)-(f+n_A)\\ln(f+n_A)]V\\\\\n\t\t&\\quad+ \\tfrac{1}{2}\\ln\\left(\\tfrac{n_A (f+n_A)}{f(1+n)n}\\right)-\\tfrac{1}{2}\\delta_{f,\\tfrac{1}{2}}\\delta_{n_A,n\/2}+o(1)\n\t\\end{split}\n\\end{align*}\nfor $n_A\\geq n_{\\rm crit}$ with $n_{\\rm crit}=N_{\\rm crit}$ again given by $d_A(N_{\\rm crit})=d_B(N-N_{\\rm crit})$. For $n_A\\leq n_{\\rm crit}$ one needs to apply the symmetry $(n_A,f)\\leftrightarrow(n-n_A,1-f)$. The summand at $N_A=N\/2$ reflected by the term $\\delta_{n_A,n\/2}$ has to be taken as it is and is not integrated. Nevertheless, one can check numerically that it yields a term of order $1\/\\sqrt{V}$ and is thus subleading in Eq.~\\eqref{eq:bosonic-results}.}\n\\begin{align}\n\t\\begin{split}\\label{eq:bosonic-results}\n\t\t\\braket{S_A}_{\\mathrm{bos},N}&=fV[n\\ln(1+n^{-1})+\\ln(1+n)]\\\\\n\t\t&\\quad+\\sqrt{V}\\sqrt{\\frac{n+n^2}{8\\pi}}\\ln(1+n^{-1})\\,\\delta_{f,\\frac{1}{2}}\\\\\n\t\t&\\quad+\\frac{f+\\ln(1-f)}{2}+o(1),\n\t\\end{split}\\\\\n\t\\braket{S_A}_{\\mathrm{bos},w}&=\\braket{S_A}_{\\mathrm{bos},N=\\bar{n}V}-\\frac{f}{2}+o(1)\\,,\n\\end{align}\nwhere the weighted average is only meaningful for $w>0$, for which $\\bar{n}=1\/(e^w-1)$. Note that there is no particle-hole symmetry for bosons, and that $n=N\/V$ can be arbitrarily large.\n\t\nOther natural generalizations are spin-$s$ systems with $s>\\frac{1}{2}$ and systems consisting of distinguishable particles. 
These cases can also be studied using the methods discussed in this tutorial, after carrying out the respective combinatorics of the Hilbert space dimensions $d_A$ and $d_B$. Also, systems with global symmetries such as time-reversal invariance or chirality can be considered, which have an impact on the respective symmetry group so that the Hilbert space is no longer invariant under ${\\rm U}(d_N)$ but only under ${\\rm O}(d_N)$ or ${\\rm U}(d_{N_1})\\times{\\rm U}(d_{N_2})$. The leading terms are expected to be the same, as the respective random matrix ensembles share the same level densities. Deviations are expected to occur in subleading terms.\n\t\n\\subsubsection{Other ensembles and entanglement measures}\n\t\nWe focused on ensembles of states, general and Gaussian pure states for arbitrary and fixed particle numbers, which mirror the entanglement properties of typical (``infinite-temperature'') eigenstates of physical lattice models. It is also possible to construct ensembles of pure states in which one fixes the energy, which mirror the entanglement properties of ``finite-temperature'' eigenstates of physical lattice models. Steps in this direction have already been taken using different tools; see, {\\it e.g.},\\ Refs.~\\cite{Deutsch_2010, nakagawa_watanabe_18, Fujita:2018wtr, lu_grover_19, murthy_19, bianchi2019typical}. In the context of the scaling of the eigenstate entanglement entropy at different energy densities (``temperatures''), let us also emphasize that all the average entanglement entropies computed in this tutorial exhibited a leading volume-law term, namely, the leading term in the average entropies scales with the number of modes $V$ and is thus agnostic to the individual shape or area of the subsystem. In contrast, as discussed in the Introduction, it is well known that low-energy states of many physical systems of interest exhibit a leading area-law term. 
An important open question is whether one can define ensembles of pure states that exhibit leading terms in the entanglement entropy that are area law.\n\t\nInstead of considering the von Neumann entanglement entropy, one can also consider other quantities that are defined with respect to the invariant spectrum of the reduced density operator $\\hat\\rho_A=\\mathrm{Tr}_{\\mathcal{H}_B}\\ket{\\psi}\\bra{\\psi}$ of a pure state $\\ket{\\psi}$. Such quantities include the well-known Renyi entropies $S^{(n)}_A(\\ket{\\psi})$, and the eigenstate capacity~\\cite{de2019aspects}. We focused on the von Neumann entropy, as it is arguably the most prominent measure of bipartite entanglement. Nonetheless, we expect that our findings can also be extended to the aforementioned quantities; see, {\\it e.g.},\\ Refs.~\\cite{liu_chen_18, pengfei_chunxiao_20, lydzba2021entanglement, ulcakar_vidmar_22} for studies of Renyi entropies and Refs.~\\cite{bhattacharjee2021eigenstate, huang2021second} for studies of the eigenstate capacity.\n\t\nIt would also be interesting to explore multipartite entanglement measures for different ensembles of pure states. This will likely require new techniques, and it is not clear what the most suitable measure is. The latter question is the subject of ongoing research.\n\t\n\\section*{Acknowledgments}\nWe would like to thank Pietro Don\\`a, Peter Forrester, Patrycja \\L yd\\.{z}ba, Lorenzo Piroli, and Nicholas Witte for inspiring discussions. E.B.~acknowledges support from the National Science Foundation, Grant No.~PHY-1806428, and from the John Templeton Foundation via the ID 61466 grant, as part of the \"Quantum Information Structure of Spacetime (QISS)\" project (\\hyperlink{http:\/\/www.qiss.fr}{qiss.fr}). L.H.~gratefully acknowledges support from the Alexander von Humboldt Foundation. M.K.~acknowledges support from the Australian Research Council (ARC) under grant No.~DP210102887. 
M.R.~acknowledges support from the National Science Foundation under Grant No.~2012145. L.V.~acknowledges support from the Slovenian Research Agency (ARRS), Research core fundings Grants No.~P1-0044 and No.~J1-1696. L.H.~and M.K.~are also grateful to the MATRIX Institute in Creswick for hosting the online research programme and workshop ``Structured Random Matrices Downunder'' (26 July\u201313 August 2021).\n\t\n\\onecolumngrid\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe goal of this presentation at the Hot Quarks 2006 workshop was to attempt to develop a consistent understanding of the \nterm ``sQGP'' and the physics conclusions that result. The first step in achieving such a\ngoal is to detail what the letter ``s'' actually stands for and what is means. \nDoes the terminology change from quark gluon plasma (QGP) to sQGP alphabetically symbolize an\nimportant paradigm shift in the understanding of high temperature nuclear matter?\n\nFirst, we detail what various people and collaborations have stated that ``sQGP'' means.\nM. Gyulassy explained: \n``The name 'sQGP' (for strongly interacting Quark Gluon Plasma) helps to distinguish that matter from ordinary hadronic resonance matter (as described for example by RQMD) and\nalso from the original 1975 asymptotically free QGP (which I dubbed wQGP) that is now theoretically defined\nin terms of re-summed thermal QCD~\\cite{newdirections}.'' \nGyulassy and McLerran~\\cite{gyulassymclerran} have argued \n``Our criteria for the discovery of QGP are (1) Matter at energy densities so large that simple\ndegrees of freedom are quarks and gluons. 
This energy density is that predicted from lattice gauge theory for\nthe existence of a QGP in thermal systems, and is about 2 $GeV\/fm^3$, (2) The matter must be to a good approximation\nthermalized, (3) The properties of the matter associated with the matter while it is hot and dense must follow\nQCD computations based on hydrodynamics, lattice gauge theory results, and perturbative QCD for hard processes \nsuch as jets. All of the above are satisfied from the published data at RHIC... This leads us to conclude that the \nmatter produced at RHIC is a strongly coupled QGP (sQGP) contrary to original expectations that were based on \nweakly coupled plasma estimates.''\n\n\\begin{figure*}\n\\begin{center}\n\\resizebox{0.6\\textwidth}{!}{%\n \\includegraphics{figure_v2compilation.eps}\n}\n\\caption{Azimuthal anisotropy ($v_2$) as a function of $p_T$ from minimum bias gold-gold collisions. Hydrodynamic calculations \nare shown as dashed lines.}\n\\label{fig:1} \n\\end{center}\n\\end{figure*}\n\nAlthough the estimates of the energy density at early times ($t=1~fm\/c$) utilizing various methods disagree by\nmore than a factor of two~\\cite{PHENIX_whitepaper}, \nall values are significantly above that predicted for the QGP phase transition for the first few $fm\/c$. For\nexample, the value from the Bjorken energy density equation is up to a factor of four lower than from hydrodynamic\ncalculations, but the Bjorken value is often viewed as a lower limit since it ignores any effects from \nlongitudinal work. Thus, the first criterion seems to be met. Agreement of hydrodynamic calculations and\nexperimental data on transverse momentum spectra and in particular elliptic flow $v_2$ \n(see Figure~\\ref{fig:1}~\\cite{PHENIX_whitepaper,STARflow}) \nindicates very rapid equilibration times of order $t \\approx 1~fm\/c$~\\cite{heinz}. 
There have been questions\nraised about the required degree of thermalization~\\cite{borghini}; and the originally stated agreement of hydrodynamics with the lattice equation of state (EOS) appears to\nbe overstated, so that no quantitative constraint on latent heat or softness is yet warranted~\\cite{PHENIX_whitepaper,pasi}.\nHowever, it does appear that equilibration is approached more substantially than one might have expected \nfrom perturbative calculations (see later discussion\non this point). Thus the first two criteria listed in~\\cite{gyulassymclerran} appear satisfied and \nmight allow one to scientifically conclude that RHIC collisions have\ncreated the QGP. However, it is the critical third point that defines the experimental discovery of such. \n\n\\section{Strongly interacting versus strongly coupled}\n\nIn the literature there is a mixture of the terms ``strongly interacting'' and ``strongly coupled.'' If it is strongly coupled, \nwhich coupling is being referred to? In many talks and publications, ``strongly coupled'' refers to the \nplasma coupling parameter $\\Gamma$ (often used in the case of electromagnetic (EM) plasmas). \n\n\\subsection{Plasma Coupling $\\Gamma$}\n\nThis coupling is defined as $\\Gamma = \\langle {\\rm PE} \\rangle \/ \\langle {\\rm KE} \\rangle$, where PE is the average potential \nenergy and KE is the average kinetic energy. This parameter is used as a measure of the interaction strength in\nEM plasmas. Most EM plasmas that people are familiar with are weakly coupled plasmas where $\\Gamma \\ll 1$. These \nbehave like gases. 
However, for $\\Gamma \\gg 1$ the EM plasmas are strongly coupled and behave as low viscosity liquids and \nas solids at even larger $\\Gamma$, as shown in Figure~\\ref{fig:2}~\\cite{ichimaru}.\n\n\\begin{figure*}\n\\begin{center}\n\\resizebox{0.6\\textwidth}{!}{%\n \\includegraphics{figure_ichimaru.eps}\n}\n\\caption{Plotted is the scaled shear viscosity ($\\eta^{*} = \\eta\/mn\\omega_{p}a^{2}$) as a function of $\\Gamma$ for\nsupercooled OCP fluids.}\n\\label{fig:2} \n\\end{center}\n\\end{figure*}\n\nSince EM plasmas have been widely studied, it is natural to seek to categorize the quark gluon plasma (QGP) in a similar fashion.\nRecently at RHIC, there have been numerous\npublications describing the QGP as a ``near-perfect liquid.'' Thus a question from someone outside the field of heavy ions is whether the\nmatter is in the plasma phase or liquid phase (often thought to be different regimes in the EM matter case). \nOne must be careful about two different definitions of liquid being used here: \nliquid can refer to a specific phase of electromagnetic matter, or it can refer to \nany matter whose dynamical evolution can be described by hydrodynamic equations of motion.\nAn EM plasma in the strong coupling\n(large $\\Gamma$) regime is a plasma in that the electric charges are not confined to atoms, but has the liquid-like property (second definition) of \nlow viscosity. \nAt RHIC, the matter produced shows some evidence of low viscosity (though not yet quantitative in terms of an upper limit on the shear viscosity). \nThus, it may be a liquid (by the second definition), but may not share other EM liquid phase (first definition) properties. \nFor example, many electromagnetic liquids are also highly incompressible. For\nthe QGP, at baryon chemical potential $\\mu_{B} = 0$ the pressure (P) and volume (V) are independent. 
Again, the matter shares a property, but\nnot all.\n\nThese analogies are often useful, but only if they lead to new insights, rather than just new declarations and new terminology.\nOne has to be careful to define which properties are analogous. For example, QCD always has screening of long range color magnetic\nfields which means even a weakly interacting (asymptotically free) QGP will be quite different from a weakly coupled EM plasma. Also,\non short distance scales, color electric and magnetic fields can be of equal order. \n\nSome in the field have argued the following logic: Since the matter produced at RHIC has a large $\\Gamma$ value, it must be a plasma\n(as a phase). This leads to the very strong conclusion that the matter at RHIC is a plasma (meaning a deconfined plasma of quarks\nand gluons). However, though EM plasmas are categorized in terms of $\\Gamma$, not all large $\\Gamma$ (i.e. low viscosity) matter\nis a plasma at all.\nAs an example, there have been recent experiments with Lithium atoms where the mean free paths approach\nzero under certain conditions~\\cite{lithium}. The Feshbach resonance in binary collisions of these alkali atoms at ultra-cold\ntemperatures allow experimentalists to tune the interaction strength. The measurements reveal low viscosity and\n``flow'' reminiscent of that seen in RHIC collisions. However, these atoms are clearly not an EM plasma. \nThus, at RHIC, demonstrating low viscosity does not \nprove the matter is a plasma.\n\nOne can push the plasma analogy and attempt to estimate the value of the $\\Gamma$ parameter for the QGP and then \nattempt to infer other properties of the medium. 
One such estimate~\\cite{thoma} yields:\n\\begin{equation}\n\\Gamma = {{\\langle {\\rm PE} \\rangle} \\over {\\langle {\\rm KE} \\rangle}} \\approx {{\\alpha_{s}\/r} \\over {3T}} \\approx {{\\alpha_{s}T} \\over {3T}} \\approx \\alpha_{s}\n\\end{equation}\nthen utilizing the relation $\\alpha_s = g^{2}(T)\/4\\pi$ and putting back in $d$, the characteristic inter-particle distance, one obtains:\n\\begin{equation}\n\\Gamma = {{Cg^{2}} \\over {4 \\pi d T}} \\approx 1.5-5\n\\end{equation}\nNote that this result is different from an earlier much larger estimate, which had a factor of $4\\pi$ unit error and was without\na factor of two scale-up for the approximately equal strength color magnetic interaction~\\cite{thoma}. Thoma notes that for EM plasmas ``the \nphase transition to the gas phase, assumed to happen at $\\Gamma_c \\approx 1$, takes place now at a few times the\ntransition temperature [from the QGP liquid to the QGP gas]~\\cite{thoma}.'' Note the title of this\narticle is ``The Quark-Gluon Plasma Liquid.'' \nThe PHENIX whitepaper states that\n``considerations such as these have led some to denote QGP in this regime as 'sQGP' for strongly interacting QGP~\\cite{PHENIX_whitepaper}.''\n\nIn a recent set of papers~\\cite{shuryak_cqgp}, the authors invoke a model referred to as cQGP where they calculate the shear viscosity as a function\nof the dimensionless $\\Gamma$ parameter. The calculation seems to show a QGP with liquid-like behavior (low viscosity) at large $\\Gamma$\nand an indication of solid behavior at even larger $\\Gamma$, as was seen in the EM plasma case. There has been speculation that\nthe QGP formed in heavy ion collisions could have crystalline or polymer-chain-type solid structures~\\cite{shuryak_qm05}. However, it\nis critical to note that the letter 'c' stands for classical. Thus, the entire calculation is done in the non-relativistic, non-quantum\nregime, and the possible insights gained have to be viewed with skepticism. 
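To get a feel for the numbers entering the estimate above, the sketch below evaluates $\Gamma = Cg^{2}/(4\pi d T) = C\alpha_s/(dT)$ in natural units ($\hbar c \approx 197.3$~MeV\,fm). The chosen values of $\alpha_s$, $d$, $T$, and the prefactor $C$ (absorbing color factors and the factor-of-two magnetic scale-up) are illustrative assumptions, not values taken from Ref.~[thoma]:

```python
HBAR_C = 197.327  # MeV * fm; converts the product d*T into a pure number

def plasma_gamma(C, alpha_s, d_fm, T_MeV):
    """Gamma = C * alpha_s / (d T) in natural units (rough estimate only)."""
    return C * alpha_s * HBAR_C / (d_fm * T_MeV)

# Assumed inputs: alpha_s ~ 0.5 near T_c, inter-particle distance d ~ 1 fm,
# T ~ 200 MeV, and an illustrative range of prefactors C.
lo = plasma_gamma(3, 0.5, 1.0, 200.0)
hi = plasma_gamma(10, 0.5, 1.0, 200.0)
print(f"Gamma range: {lo:.1f} - {hi:.1f}")  # spans roughly the quoted 1.5-5
```

The point of the exercise is only that, for any reasonable inputs, $\Gamma$ comes out of order unity or a few, i.e., in the regime where EM plasmas behave as liquids rather than gases.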
\n\nThe entire utilization of $\\Gamma$ raises some significant questions. The potential energy is taken as the Coulomb (short range) part\nof the QCD potential, $\\alpha_{s}\/r$. Unfortunately, when one has a system of (nearly) massless, relativistic particles, the\npotential energy is not a well-defined concept in a relativistic Quantum Field Theory (QFT). This issue applies to a QFT for QED or QCD, but\nis of particular concern for the QGP case here since anywhere near the transition temperature the light quarks are relativistic. \nThe fundamental problem is that there is no unique distinction between the \nparticles and the fields, and thus no unique manner of separating potential energy and kinetic energy. In which category do the\ngluons belong, for example? In the case of heavy quarks, one might approximate them as static source charges and thus have a reasonable\nattempt at separating the potential energy. However, this is not the case for the QGP overall, and the assumption of a non-relativistic\nlimit in the cQGP discussion above is not close to the real case for the QGP even near the critical temperature $T = 170$~MeV.\nThere are attempts to formulate an alternative for calculating $\\Gamma$~\\cite{jacak}.\n\nMany people are interested in the $\\Gamma$ calculation since it is how many EM plasmas are categorized. However, other measures of the interaction strength, perfectly well defined\nboth in hydrodynamics and in a QFT, do exist and can alternatively be used.\n\n\\subsection{Shear Viscosity over Entropy Density $\\eta\/s$}\n\n\\begin{figure*}\n\\begin{center}\n\\resizebox{0.6\\textwidth}{!}{%\n\\includegraphics{graph-He-N-H20.eps}\n}\n\\caption{Plotted are the shear viscosity to entropy density ratios ($\\eta\/s$) divided by\nthe conjectured lower bound as a function of temperature in Kelvin. 
Shown are curves for\nhelium, nitrogen and water.}\n\\label{fig:3} \n\\end{center}\n\\end{figure*}\n\nThere is a well defined measure of the interaction strength. It is the ratio of the shear viscosity (a measure of the\nmean free path of particles) and its entropy density (measure of the inter particle distances). It is in fact this ratio \n$\\eta \/ s$ that may be very small in the QGP as inferred from hydrodynamic calculations and their comparison to experimental data.\nRecent measurements of charm quark suppression at moderate $p_T ~\\approx 2-5~GeV\/c$ and non-zero elliptic flow $v_{2}$, may give\nthe best constraint on the diffusion coefficient from heavy quarks and subsequently $\\eta\/s$~\\cite{mooreteaney,naglevienna}. Full three-dimensional viscous\nhydrodynamic calculations in comparison with precision data are needed to set a quantitatively reliable limit on $\\eta\/s$. \nLattice simulations are presently unable to make reliable predictions of most dynamical properties of the quark-gluon\nplasma. The calculation of phenomenologically relevant transport properties, \nsuch as the shear viscosity or collective modes, remains an important \nchallenge \\cite{Petreczky:2005zy}.\n\nHowever, recently there has been important progress in calculating \nthese dynamical properties perturbatively in a dual quantum field theory \ninvolving black holes in anti-de Sitter (AdS) space \\cite{Kovtun:2004de}. \nThis approach is based on the insight derived from string theory that \nweakly coupled gravity theories in higher dimensions can be dual to \nfour-dimensional gauge theories in the strong coupling limit \\cite{Maldacena:1997re}. It must\nbe emphasized that these AdS\/CFT (conformal field theory) techniques \npresently have the limitation that no higher dimensional gravity or \nstring theory is known which is dual to QCD. 
Work by Son {\\it et al.} indicates\nthat there may be a lower viscosity bound $\\eta\/s > 1\/4\\pi$ applicable\nto all systems including the quark gluon plasma. A critical goal for\nthe field is to put the QCD matter data point on a plot like the one shown\nin Figure~\\ref{fig:3} for other systems~\\cite{Kovtun:2004de}. \n\nAn interesting side note is that in the figure these systems have a minimum\nin the ratio $\\eta\/s$. In fact, for helium, super-fluidity sets in at approximately\n2 Kelvin, which is below the minimum. The minimum occurs around 4 Kelvin, which\nis the gas-to-liquid phase transition point. Thus the minimum is not a minimum\nin viscosity, but rather reflects the sudden change in entropy associated with the phase\ntransition. Note the recent paper on the subject~\\cite{larry}.\n\nThe most common examples of very low viscosity (or near-perfect) fluids are the cases\nshown in Figure~\\ref{fig:3}, which are referred to as super-fluids. In most cases this\nsuper-fluidity arises from quantum mechanical effects associated with the limited \nexcitations at low temperature. This seems quite different from the system at RHIC, and\nthus, though there are many examples in the literature describing the matter at RHIC as a \nnear-perfect fluid, it is not termed a super-fluid.\n\n\\subsection{Strong Coupling $\\alpha_s$}\n\n\\begin{figure*}\n\\begin{center}\n\\resizebox{0.6\\textwidth}{!}{%\n \\includegraphics{v2paper_mbv2a.eps}\n}\n\\caption{Impact parameter averaged gluon elliptic flow as a function\nof $p_T$ for Au+Au reactions at $\\sqrt{s_{NN}}=130~GeV$ from MPC with various\nvalues of the transport opacity for b=0. Also shown are data points\nfrom the STAR experiment.}\n\\label{fig:4} \n\\end{center}\n\\end{figure*}\n\nAnother interpretation of the letter ``s'' is strongly coupled in the sense of\na large QCD coupling $\\alpha_s$. Clearly $\\alpha_s$ is always, in any experimentally\naccessible energy range, much greater than $\\alpha_{EM} = 1\/137$. 
The wQGP, where \nthe letter ``w'' stands for weak coupling, implies that perturbative expansions should\nconverge as $\\alpha_s \\ll 1$. By contrast, sQGP would simply imply that perturbative \ntechniques would not be applicable. U. Heinz observed that \n``perturbative mechanisms seem unable to explain the phenomenologically required\nvery short thermalization time scale, pointing to strong non-perturbative dynamics\nin the QGP even at or above $2 \\times T_c$.''~\\cite{uli}.\n\nSpecifically, analytic calculations utilizing perturbative expansions of\ngluon scattering lead to long equilibration times ($> 2.6~fm\/c$) and thus rather modest\nelliptic flow (i.e. small $v_2$)~\\cite{baier}. There are also numerical simulations that give similar \nresults utilizing a $2 \\rightarrow 2$ cross section of approximately 3 mb, as shown in Figure~\\ref{fig:4}~\\cite{molnar}.\nOne can artificially increase the cross section (or transport opacity) to match the data, and it requires an order of\nmagnitude increase in the cross section. In this sense, it is not a wQGP. There are two important\ncaveats on these calculations. One is that the Equation of State is too hard relative to lattice\nresults for the QGP. More importantly, there is some controversy over the inclusion of $2 \\rightarrow 3$ and $3 \\rightarrow 2$\nprocesses. Z.~Xu {\\it et al.}~\\cite{zhu} claim that their inclusion results in a dramatic decrease in the equilibration time\nand thus a large increase in $v_2$. At this conference it became clear that the critical part of\ntheir result is that in $2 \\rightarrow 3$ processes the resulting gluons are emitted isotropically. Under\nthis assumption, it is easy to see why it leads to rapid isotropization. Other implementations of these\nprocesses show much smaller effects, in large part due to forward peaking of the emission\ndistribution. 
This issue needs to be resolved.\n\nIn the third category used by Gyulassy and McLerran for discovery of the QGP, they cite utilizing \nperturbative methods to understand jet probes.\nRadiative energy loss calculations are done perturbatively to describe the jet quenching phenomena. In\nfact, the calculations are effectively leading order. GLV~\\cite{glv}, for example, assumes the correct pQCD interaction\nstrength (noting that some calculations use a fixed coupling $\\alpha_s$ and others a running one), and then determines the color charge\ndensity. One obtains a result of $dN\/dy({\\rm gluons}) = 1000$ or $dN\/dy({\\rm quarks,gluons}) = 2000$. The final entropy density\n$dS\/dy$ is of order 5000, and thus, since the entropy cannot be larger at earlier times, it translates roughly into a \nlimit $dN\/dy({\\rm quarks,gluons}) < 1300$~\\cite{muller_annualreview}. \nOne possibility is that more than just radiative energy loss contributes, as has been highlighted by recent heavy quark results (perhaps indicating collisional energy loss). However, another\napproach is to say one knows the color charge density and can then infer the coupling strength. \nThis then implies that the coupling strength is much larger than predicted from the effectively leading\norder perturbative calculation, which may be consistent with the sQGP description. \n\n\\subsection{Bound States}\n\nThis strong coupling $\\alpha_s$ is taken by Shuryak and collaborators~\\cite{shuryak_bound} to imply that\nthe interaction between quasi-particles is strong enough to bind them. Thus the sQGP\nis composed of bound (not necessarily color neutral) $qq$, $q\\overline{q}$, $gg$, $qg$, \netc. states. \nHowever, recent lattice calculations of baryon number -- electric charge correlations show no\nsuch quasi-particles with these quantum numbers~\\cite{karsch}. 
It appears that lattice QCD is\nruling out $qq$ and $q\\overline{q}$ states, though the results can say nothing about states without\nthese quantum numbers like $qg$ and $gg$ states. \n\n\\subsection{Expectations}\n\nA reasonable question is why there was an original expectation for a wQGP or perturbative plasma. \n``For plasma conditions realistically obtainable in nuclear collisions ($T \\approx 250~MeV$, g = $\\sqrt{4\\pi\\alpha_s}$)\nthe effective gluon mass $mg^{*} \\approx 300~$MeV. We must conclude, therefore, that the notion of\nalmost free gluons (and quarks) in the high temperature phase of QCD is quite far from the truth. Certainly \none has $mg^{*} << T$ when $g<<1$, but this condition is never really satisfied in QCD, because\n$g \\approx 1\/2$ even at the Planck scale ($10^{19}$~GeV).''~\\cite{bmueller}.\nDespite this observation, many noted that from lattice gauge theory results the value of\n$\\epsilon\/T^{4}$ approaches 80\\% of the non-interacting gas limit. \nSome viewed this as\nindicating only weak interactions, while some in the \nlattice community already thought that this 20\\% difference from the Stefan Boltzmann limit \nwas the effect of strong residual interactions in a non-perturbative system.\nAlso, recent results from AdS\/CFT have\nshown that one can be at the 80\\% limit and still be in the very strongly interacting limit.\n\n\\section{Summary}\n\nExciting results of emergent phenomena at RHIC such as strong flow and jet quenching have sparked a great deal\nof very positive new thinking about the medium created in these collisions. It appears to represent a paradigm shift, \nalthough the earlier paradigm of a perturbatively describable (asymptotically free) plasma seems to have been poorly \nmotivated.\nF. Karsch puts it best: ``I do not really care what the 's' in sQGP means. However, I am worried and partly also disappointed about\nthe way this new name is used. 
The disappointment, of course, arises from the fact that suddenly a new name\nseems to be necessary to describe the properties of QCD in a temperature regime which lattice gauge theory since\na long time have identified as 'not being an ideal gas' and 'impossible to be described by perturbation theory~\\cite{newdirections}.''\n\nAs the field of heavy ions progresses, a coherent picture of the medium created may be emerging. At this point there\nare many ideas, some commensurate and others incommensurate with each other. \nHopefully the future \nwill tell us which are correct.\n\n\n\\section{Acknowledgment}\n\nWe thank the workshop\norganizers for providing an environment for stimulating discussion and new ideas from young people. We also acknowledge useful discussions prior to this workshop at the Boulder Workshop 2 and useful comments by one anonymous referee. We acknowledge support from the United States Department of Energy grant DE-FG02-00ER41152. \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nOne of the most important physical phenomena studied in condensed matter systems is the transport of electrons, especially when they are restricted to move in one dimension. This is because of the unique nature of the inter-particle interactions in one dimension, which leads to interesting physics substantially different from that in higher dimensions, where interactions are conveniently tackled using the Fermi liquid theory. Second, the emergence of advanced technologies has made possible the realization of one-dimensional systems that have unusual properties and hold a promising future: carbon nanotubes \\cite{bockrath1999luttinger}, semiconducting quantum wires \\cite{auslaender2000experimental, yacoby1997magneto}, and so on. 
The suitable alternative to the Fermi liquid theory to capture the many body physics of such 1D systems is the Luttinger liquid theory \\cite{haldane1981luttinger}, which has served as the paradigm for one dimensional systems and is based on linearization of the dispersion relations of the constituent particles near the Fermi level. \n\nMost of the physical phenomena of such systems can be systematically studied provided one has analytical forms of the correlation functions - to obtain these is the stated goal in quantum many body physics. In one dimension, this goal is achieved using bosonization methods where a fermion field operator is expressed as the exponential of a bosonic field \\cite{von1998bosonization}. This operator approach to bosonization, which goes under the name g-ology \\cite{giamarchi2004quantum}, can be used successfully to compute the N-point Green functions of a clean Luttinger liquid. But the Fermi-Bose correspondence used in the g-ology methods is insufficient to tackle impurities, and to circumvent this, other techniques like renormalization group (RG) methods are mandatory \\cite{matveev1993tunneling}.\n\n\nA novel technique by the name of `Non chiral bosonization technique' (NCBT) has been developed that uses a basis different from the plane wave basis to deal with strongly inhomogeneous Luttinger liquids, without adhering to RG methods \\cite{das2018quantum}. NCBT can extract the most singular part of the correlation functions of a Luttinger liquid with arbitrary strength of the external impurities as well as of the mutual interactions between the particles. It has also been applied successfully to study the one step fermionic ladder (two 1D wires placed parallel and close to each other with hopping between a pair of opposing points) \\cite{das2017one} and slowly moving heavy impurities in a Luttinger liquid \\cite{das2018ponderous}. 
The Green functions enable one to predict different physical phenomena occurring in the system such as Friedel oscillations \\cite{Egger1995friedel1}, conductance \\cite{fendley1995exact, fendley1995exact2}, the Kondo effect \\cite{furusaki1994kondo, schiller1995exact}, resonant tunneling \\cite{kane1992resonant, furusaki1993resonant}, etc. \n\nIn the seminal work by Kane and Fisher \\cite{kane1992transport}, it was shown that impurities can have drastic effects on the conductance of the particles, as severe as `cutting the chain' by even a small scatterer. Since then the study of transport phenomena in a Luttinger liquid with impurities has interested a number of researchers \\cite{giamarchi1992conductivity, ogata1994collapse, safi1997conductance, ponomarenko1995renormalization}. The conductance of a narrow quantum wire with non-interacting electrons moving ballistically is given by $e^2\/h$. This conductance is renormalized for a Luttinger liquid and is given by $g e^2\/h$, where g is the Luttinger liquid parameter which depends on the mutual interaction strength of the particles \\cite{kane1992transport, apel1982combined, ogata1994collapse}. But no renormalization of the universal conductance is required if the electrons behave freely in the source and drain reservoirs \\cite{ponomarenko1995renormalization, maslov1995landauer}. Matveev et al. used a simple renormalization group method to calculate the conductance of a weakly interacting electron gas in the presence of a single scatterer \\cite{matveev1993tunneling}. Ogata and Anderson \\cite{ogata1993transport} used Green's functions to study the conductivity of a Luttinger liquid and showed that if the spin-charge separation is taken into account, the resistivity has a linear temperature dependence. 
Besides conductance, resonant tunneling is yet another important phenomenon studied in Luttinger liquids with double barriers \\cite{kane1992resonant, kane1992transmission, furusaki1993resonant, moon1993resonant}. Kane and Fisher studied resonant tunneling in a single channel interacting electron gas through a double barrier and found that the width of the resonance vanishes, as a power of temperature, in the zero-temperature limit \\cite{kane1992resonant, kane1992transmission}. Furusaki and Nagaosa studied the same for spinless fermions and calculated the conductance as a function of temperature and gate voltage \\cite{furusaki1993resonant}. In another work, Furusaki studied resonant tunneling in a quantum dot weakly coupled to Luttinger liquids \\cite{furusaki1998resonant} and a few years later, this model was supported by experimental evidence \\cite{auslaender2000experimental}.\n\n\nIn this work, the conductance of a Luttinger liquid in the presence of a cluster of impurities is calculated both in the Kubo formalism and as the outcome of a tunneling experiment, using the correlation functions obtained from NCBT. All the necessary limiting cases, such as Landauer's formula, the conductance of a clean Luttinger liquid, the half-line, etc., are obtained. From the tunneling conductance the well known concepts of `cutting the chain' and `healing the chain' are elucidated. The condition of resonant tunneling for a double impurity system is obtained and the behavior of the correlation function exponents in its vicinity is described.\n\n\\section{System description}\n\nThe system under study consists of a Luttinger liquid with short-ranged mutual interactions amongst the particles and a cluster of impurities centered around an origin. 
The Hamiltonian of the system is given as follows.\n\\small\n\\begin{equation}\n\\begin{aligned}\nH =& \\int^{\\infty}_{-\\infty} dx \\mbox{ } \\psi^{\\dagger}(x) \\left( - \\frac{1}{2m} \\partial_x^2 + V(x) \\right) \\psi(x)\\\\\n & \\hspace{1cm} + \\frac{1}{2} \\int^{ \\infty}_{-\\infty} dx \\int^{\\infty}_{-\\infty} dx^{'} \\mbox{ }v(x-x^{'}) \\mbox{ }\n \\rho(x) \\rho(x^{'})\n\\label{Hamiltonian}\n\\end{aligned}\n\\end{equation}\n\\normalsize\nThe first term is the kinetic term, followed by the potential energy term representing the impurity cluster, which is modeled as a finite sequence of barriers and wells around a fixed point. The potential cluster can be as simple as one delta impurity $V_0\\delta(x)$, two delta impurities placed close to each other $V_0( \\delta(x+a)+\\delta(x-a))$, a finite barrier\/well $\\pm V \\theta(x+a)\\theta(a-x)$ and so on, where $\\theta(x)$ is the Heaviside step function. The RPA (random phase approximation) is imposed on the system, without which the calculation of the analytical expressions of the correlation functions is formidable. In this limit, the Fermi momentum and the mass of the fermion are allowed to diverge in such a way that their ratio, viz., the Fermi velocity, is finite (i.e. $ k_F, m \\rightarrow \\infty $ but $ k_F\/m = v_F < \\infty $). Under the choice of units $ \\hbar = 1 $, $ k_F $ is both the Fermi momentum as well as a wavenumber \\cite{stone1994bosonization}. The RPA limit linearizes the energy momentum dispersion near the Fermi surface ($E=E_F+p v_F$ instead of $E=p^2\/(2m)$). It is also imperative to define how the width of the impurity cluster `2a' scales in the RPA limit and the assertion is that $ 2 a k_F < \\infty $ as $ k_F \\rightarrow \\infty $. On the other hand, the heights and depths of the various barriers\/wells are assumed to be in fixed ratios with the Fermi energy $ E_F = \\frac{1}{2} m v_F^2 $ even as $ m \\rightarrow \\infty $ with $ v_F < \\infty $. 
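The RPA linearization described above can be illustrated with a minimal numerical sketch (not part of the original analysis; the numerical values of $v_F$, $p$, and $m$ are assumed for illustration): as $m \rightarrow \infty$ with $v_F = k_F/m$ held fixed, the exact dispersion approaches the linearized one, the residual being exactly $p^2/(2m)$.

```python
import math

# Sketch of the RPA limit (hbar = 1): the exact dispersion
# E(k_F + p) = (k_F + p)^2 / (2m) approaches the linearized form
# E_F + p*v_F as m -> infinity with v_F = k_F/m held fixed.

def exact_dispersion(p, m, v_F):
    k_F = m * v_F                      # RPA limit: k_F/m = v_F stays finite
    return (k_F + p) ** 2 / (2.0 * m)

def linearized_dispersion(p, m, v_F):
    E_F = 0.5 * m * v_F ** 2           # Fermi energy
    return E_F + p * v_F

v_F, p = 1.0, 0.3                      # illustrative values (assumed units)
masses = (10.0, 100.0, 1000.0)
residuals = []
for m in masses:
    res = exact_dispersion(p, m, v_F) - linearized_dispersion(p, m, v_F)
    residuals.append(res)              # equals p^2/(2m), vanishing as m grows
```

The residual term is the curvature correction $p^2/(2m)$ that the RPA limit discards.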
\n\nIn the case of the different potentials comprising the cluster, the only quantities that enter the calculation of the Green functions are the reflection (R) and transmission (T) amplitudes, which can be easily calculated using elementary quantum mechanics and are provided in an earlier work \\cite{das2018quantum}. For instance, in the case of a single delta potential $V_0\\delta(x)$,\n\\scriptsize\n\\begin{equation}\n\\begin{aligned}\nT=&\\frac{1}{\\left(1+V_0 \\frac{i}{v_F}\\right)}\\mbox{ };\\mbox{ }\nR=-\\frac{iV_0}{v_F\\left(1+V_0 \\frac{i}{v_F}\\right)} \\\\\n\\end{aligned}\n\\end{equation}\n\\normalsize\nIn the case of a double delta potential $V_0( \\delta(x+a)+\\delta(x-a))$ with a separation 2a between the deltas,\n\\scriptsize\n\\begin{equation}\n\\begin{aligned}\nT=&\\frac{1}{\\left(1+V_0 \\frac{i}{v_F}\\right)^2-\\left(\\frac{i V_0}{v_F}e^{i 2 k_F a}\\right)^2}\\\\\nR=&-\\frac{2i\\frac{V_0^2}{v_F^2} \\sin{[2 k_F a]} +\\frac{2i V_0}{v_F}\\cos{[2 k_F a]}}{\\left(1+V_0 \\frac{i}{v_F}\\right)^2-\\left(\\frac{i V_0}{v_F}e^{i 2 k_F a}\\right)^2} \\\\\n\\end{aligned}\n\\end{equation}\n\\normalsize\nIn this work, R and T are used in a generalized sense to signify the reflection and transmission amplitudes of the cluster of impurities in consideration. The third term in equation (\\ref{Hamiltonian}) represents the forward scattering mutual interaction term such that\n\\[ \n\\hspace{2 cm} v(x-x^{'}) = \\frac{1}{L} \\sum_{q} v_q \\mbox{ }e^{ -i q(x-x^{'}) } \n\\]\nwhere $ v_q = 0 $ if $ |q| > \\Lambda $ for some fixed bandwidth $ \\Lambda \\ll k_F $ and $ v_q = v_0 $ is a constant, otherwise.\\\\\n\n\\section{ Non chiral bosonization and two point functions}\nAs in conventional bosonization schemes using the operator approach \\cite{giamarchi2004quantum}, the fermionic field operator is expressed in terms of currents and densities. But in NCBT the field operator is modified to include the effect of back-scattering by impurities. 
Hence it is suitable to study translationally non invariant systems like the ones considered in this work.\n\\begin{equation}\n\\begin{aligned}\n\\psi_{\\nu}(x,\\sigma,t) \\sim C_{\\lambda ,\\nu,\\gamma}\\mbox{ }e^{ i \\theta_{\\nu}(x,\\sigma,t) + 2 \\pi i \\lambda \\nu \\int^{x}_{sgn(x)\\infty}\\mbox{ } \\rho_s(-y,\\sigma,t) dy}\n\\label{PSINU}\n\\end{aligned}\n\\end{equation}\nHere $\\theta_{\\nu}$ is the local phase which is a function of the currents and densities which is also present in the conventional bosonization schemes \\cite{giamarchi2004quantum}, ideally suited for homogeneous systems.\n\\small\n\\begin{equation}\n\\begin{aligned}\n\\theta_{\\nu}(x,\\sigma,t) =& \\pi \\int^{x}_{sgn(x)\\infty} dy \\bigg( \\nu \\mbox{ } \\rho_s(y,\\sigma,t)\\\\\n&\\hspace{1 cm} - \\int^{y}_{sgn(y)\\infty} dy^{'} \\mbox{ }\\partial_{v_F t } \\mbox{ }\\rho_s(y^{'},\\sigma,t) \\bigg)\n\\end{aligned}\n\\end{equation}\\normalsize\nThe new addition in equation (\\ref{PSINU}) is the optional term $\\rho_s(-y)$ which ensures the necessary trivial exponents for the single particle Green functions for a system of otherwise free fermions with impurities, which are obtained using standard Fermi algebra and they serve as a basis for comparison for the Green functions obtained using bosonization. The adjustable parameter is the quantity $\\lambda$ which can take values either 0 or 1 as per requirement. Thus NCBT operator reduces to standard bosonization operator used in g-ology methods by setting $\\lambda=0$. The factor $2 \\pi i$ ensures that the fermion commutation rules are obeyed. The quantities $C_{\\lambda ,\\nu,\\gamma}$ are pre-factors and are fixed by comparison with the non-interacting Green functions obtained using Fermi algebra. The suffix $\\nu$ signifies a right mover or a left mover and takes values 1 and -1 respectively. 
The field operator as given in equation (\\ref{PSINU}) is to be treated as a mnemonic to obtain the Green functions and not as an operator identity, which avoids the necessity of the Klein factors that are conventionally used to conserve the number as the correlation functions, unlike the field operators, are number conserving. The field operator (annihilation) is clubbed together with another such field operator (creation) to obtain the non interacting two point functions after fixing the C's and $\\lambda$'s. Finally the densities $\\rho$'s in the RHS of equation (\\ref{PSINU}) are replaced by their interacting versions to obtain the many body Green functions, the details being described in an earlier work \\cite{das2018quantum}. The two point functions obtained using NCBT are given in \\hyperref[AppendixA]{Appendix A}.\n\n\n\n\\section{Conductance }\n\\subsection{Kubo conductance}\nThe general formula for the conductance of a quantum wire (obtained from Kubo's formula that relates it to current-current correlations) without leads but with electrons experiencing forward scattering short-range mutual interactions\nand in the presence of a finite number of barriers and wells clustered around an origin is obtained.\nConsider an electric field $ E(x,t) = \\frac{ V_g }{ L} $ between $ -\\frac{L}{2} < x < \\frac{L}{2} $\nand $ E(x,t) = 0 $ for $ |x| > \\frac{L}{2} $. Here $ V_g $ is the voltage between two extreme points.\nThus a d.c. situation is being considered right from the start. This corresponds to a vector potential,\n\\begin{equation}\n\\begin{aligned}\nA(x,t) = \\left\\{\n \\begin{array}{ll}\n -\\frac{ V_g }{ L} (ct), & \\hbox{ $ -\\frac{L}{2} < x < \\frac{L}{2} $ ;} \\\\\n \\hspace{.3cm}0, & \\hbox{otherwise.}\n \\end{array}\n\\right.\n\\end{aligned}\n\\end{equation}\nHere c is the speed of light. 
This means the average current can be written as,\n\\begin{equation}\n\\begin{aligned}\n<j(x,\\sigma,t)> = &\\frac{ie}{c}\\sum_{ \\sigma^{'} }\n\\int^{L\/2}_{-L\/2} dx^{'}\\mbox{ } \\int_{-\\infty}^{t} dt^{'}\n\\mbox{ }\\frac{ V_g }{ L} (ct') \\\\\n&< [j(x,\\sigma,t),j(x^{'},\\sigma^{'},t^{'})]>_{LL}\n\\end{aligned}\n\\end{equation}\nThe current-current correlation can be evaluated using the Green functions derived in the present work (see \\hyperref[AppendixB]{Appendix B}), yielding the formula for the conductance (in proper units),\n\\begin{equation}\nG = \\frac{ e^2 }{h} \\frac{v_F }{ v_h } \\mbox{ }\\bigg (1- \\frac{v_F }{v_h} \\mbox{ }\\frac{|R|^2}{1-\\frac{(v_h-v_F)}{v_h}|R|^2}\\bigg)\n\\label{kubo}\n\\end{equation}\n\nHere $v_F$ is the Fermi velocity, \\scriptsize $ v_h = \\sqrt{v_F^2+2v_F v_0\/\\pi} $ \\normalsize is the holon velocity and $v_0$ is the strength of interaction between fermions as already described in Section 2. See \\hyperref[AppendixB]{Appendix B} for more details.\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[scale=0.3]{conductanceL}\n \\caption{Conductance as a function of the absolute value of the reflection amplitude as well as the interaction parameter ($ v_F = 1 $)}\\label{Cond3D}\n\\end{figure}\n\\noindent The Kubo conductance formula obtained in equation (\\ref{kubo}) is plotted in fig. \\ref{Cond3D} as a function of the reflection coefficient and interaction strength. It can be seen that when the reflection coefficient becomes unity ($|R|=1$), the conductance vanishes irrespective of the interaction parameter. For any fixed value of $|R|$, the conductance increases as the mutual interaction becomes more and more attractive (negative $v_0$) and decreases as the interaction becomes more and more repulsive (positive $v_0$). Similarly, for a fixed value of the interaction parameter, the conductance decreases with increasing reflection parameter.\n\n\\subsubsection{Limiting cases.}\n{\\bf No interaction}. 
In the absence of interactions $v_0=0$ and hence $v_h=v_F$; thus from equation (\\ref{kubo}),\n\\[\n\\hspace{2cm}G = \\frac{ e^2 }{h} (1 - |R|^2) = \\frac{ e^2 }{h} |T|^2\n\\]\n which is Landauer's formula for the conductance.\n\n{\\bf No impurity}. In this case, there is no reflection and hence $|R|=0$; thus from equation (\\ref{kubo}),\n\\[\n\\hspace{2cm}G = \\frac{ e^2 }{h} \\frac{v_F}{v_h} =\\frac{ e^2 }{h} g\n\\]\nwhich is the renormalized conductance of an infinite Luttinger liquid (with parameter g).\n\n{\\bf Infinite barrier}. In the case of a half line, $|R|=1$ and thus from equation (\\ref{kubo}),\n\\[\n\\hspace{1 in}G=0\n\\]\nirrespective of the value of the holon velocity $v_h$.\\\\ \n\n\n\\subsection{Tunneling conductance} The Kubo conductance is the linear response to external potentials and is therefore related to four-point correlation functions of fermions. Alternatively, conductance may also be thought of as the outcome of a tunneling experiment \\cite{kane1992transport}.\nHere fermions are injected at one end and collected at the other end. In this sense the conductance is related to the two-point function, or the single particle Green function. Thus we expect these two notions to be qualitatively different from each other. From this point of view, the conductance is ($|T| $ is the magnitude of the transmission amplitude for free fermions plus impurity),\n\\begin{equation}\nG = \\frac{ e^2 }{h } |T| \\mbox{ }\n| v_F\\int^{\\infty}_{-\\infty}dt\\mbox{ }<\\{ \\psi_{ R } ( \\frac{L}{2},\\sigma,t) , \\psi^{\\dagger}_{ R } (-\\frac{L}{2},\\sigma,0) \\}>\n |\n \\label{TUNNEL1}\n\\end{equation}\nIn this case the results depend on the length of the wire $ L $ and a cutoff $ L_{\\omega} = \\frac{ v_F }{ k_B T } $ that may be regarded either as an inverse temperature or an inverse frequency (in the case of a.c. conductance). 
The result (derived in \\hyperref[AppendixB]{Appendix B}) is\n\\begin{equation}\nG \\sim \\left( \\frac{ L }{ L_{ \\omega } }\\right)^{-2Q } \\mbox{ } \\left( \\frac{ L }{ L_{ \\omega} }\\right)^{ 4X }\n\\label{GGEN}\n\\end{equation}\nHere Q and X are obtained from equation (\\ref{luttingerexponents}). It is important to stress that the present work has carefully defined the tunneling conductance: it is not simply related to the dynamical density of states of either the bulk or the half line (\\hyperref[AppendixB]{Appendix B}). Of particular interest is the weak link limit where $ |R| \\rightarrow 1 $. The limiting case of the weak link is two semi-infinite wires.\nIn this case,\n\\begin{equation}\nG_{weak-link} \\sim \\left( \\frac{ L }{ L_{ \\omega} }\\right)^{ \\frac{ (v_h + v_F)^2-4v^2_F }{ 4 v_h v_F } }\n\\label{GWEAKLINK}\n\\end{equation}\nHence the d.c. conductance scales as $ G_{weak-link} \\sim (k_B T)^{ \\frac{ (v_h + v_F)^2-4v^2_F }{ 4 v_h v_F } } $. This formula is consistent with the\nassertions of Kane and Fisher \\cite{kane1992transport}, which show that at low temperatures $ k_B T \\rightarrow 0 $\nfor a fixed $ L $, the conductance vanishes as a power law in the temperature if the interaction between the fermions is repulsive ($ v_h > v_F > 0 $) and diverges as a power law if the interactions between the fermions are attractive ($ v_F > v_h > 0 $). Their result is applicable to spinless fermions without leads: $ G_{weak-link-nospin} \\sim (k_B T)^{ \\frac{2}{K} - 2 } $. In order to compare with the result of the present work, this exponent has to be halved: $ G_{weak-link-with-spin} \\sim (k_B T)^{ \\frac{1}{K_{ \\rho } } - 1 } $. This exponent is the same as the exponent of the present work so long as $ |v_h-v_F| \\ll v_F $, i.e. 
$ \\frac{ (v_h + v_F)^2-4v^2_F }{ 4 v_h v_F } \\approx \\frac{1}{K_{ \\rho } } - 1 $ since $ K_{\\rho} = \\frac{ v_F }{v_h} $.\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[scale=0.35]{conductanceT}\n \\caption{Conductance exponent $\\eta$ as a function of the absolute value of the reflection amplitude $|R|$ and the ratio $\\beta=\\frac{v_h}{v_F}$. }\\label{eta}\n\\end{figure}\nIn general, the claim of the present work is that the temperature dependence of the tunneling d.c. conductance of a wire with no leads and in the presence of barriers and wells and mutual interaction between particles (forward scattering, infinite bandwidth ie. $ k_F \\gg \\Lambda_b \\rightarrow \\infty $) is,\n\\begin{equation}\nG \\sim (k_B T)^{ \\eta} ;\\mbox{ }\\mbox{ } \\mbox{ } \\eta = 4X - 2 Q\n\\label{Cond}\n\\end{equation}\nWhen $ \\eta > 0 $ the conductance vanishes at low temperatures as a power law - characteristic of a weak link. However when $ \\eta < 0 $ the conductance diverges at low temperature as a power law - characteristic of a clean quantum wire. Of special interest is the situation $ \\eta = 0 $ where the conductance is independent of temperature. This crossover from a conductance that vanishes as a power law at low temperatures to one that diverges as a power law occurs at reflection coefficient\n $ |R|^2 = |R_{c2}|^2 \\equiv \\frac{v_h (v_h-v_F)}{3 v_F^2+v_h^2} $ which is valid only for repulsive interactions $ v_h > v_F $. For attractive interactions, $ \\eta < 0 $ for any $ |R|^2 $ which means\n the conductance always diverges as a power law at low temperatures. This means attractive interactions heal the chain for all reflection coefficients including in the extreme weak link case.\n On the other hand for repulsive interactions, for $ |R| > |R_{c2}| $, $ \\eta > 0 $ the chain is broken (conductance vanishes) at low temperatures. 
For $ |R| < |R_{c2}| $, $ \\eta < 0 $ and even though the interactions are repulsive the chain is healed (conductance diverges).\n\n\\begin{figure*}[t!]\n\\begin{center}\n\\includegraphics[scale=0.12]{DoubleDelta_PQX}\\hspace{0.5 cm}\n\\includegraphics[scale=0.17]{DoubleDelta_SYZ}\\hspace{0.5 cm}\n\\includegraphics[scale=0.17]{DoubleDelta_ABCD}\n\n\\scriptsize (a) \\hspace{5 cm}(b)\\hspace{5 cm} (c)\n\\end{center}\n\\caption{ Anomalous exponents (L.E) vs impurity strength $V_0$ for symmetric double barrier: (a) Exponents for $\\langle \\psi_R(X_1) \\psi_R^{\\dagger}(X_2)\\rangle$ on the same side (b) Exponents for $\\langle \\psi_R(X_1) \\psi_L^{\\dagger}(X_2)\\rangle$ on the same side (c) Exponents for $\\langle \\psi_R(X_1) \\psi_R^{\\dagger}(X_2)\\rangle$ on opposite sides.}\n\\label{resonance}\n\\end{figure*}\n\n\\subsubsection{ Derivation of RG equation for the tunneling conductance }\n\nIn the well-cited work of Matveev et al \\cite{matveev1993tunneling}, the RG equation for the tunneling conductance is derived which is valid for weak mutual interaction between fermions (they consider both forward scattering as well as backward scattering but in the present work we consider only forward scattering between fermions but of arbitrary strength and sign subject to the limitation that the holon velocity be real). Both in their work and in the present work the transmission amplitude of free fermions can vary continuously between zero and unity i.e. it is not constrained in any way. Note that we have chosen an infinite bandwidth to derive the power-law conductance in equation (\\ref{Cond}). Had we chosen a finite bandwidth while calculating equation (\\ref{TUNNEL1}), the resulting expressions would be considerably more complicated as Matveev et al have also found. We shall postpone a proper discussion of this interesting question to a later publication. 
For now we look at equation (8) of their paper rather than equation (12) since we are interested in the large bandwidth case only for now. Since $ G \\sim {\\mathcal{T}} $ in their notation, we may expand the conductance exponent $ 4X - 2 Q $ in powers of $ v_0 $ the forward scattering mutual interaction between fermions to leading order (in the notation of Matveev et al this is $ V(0) $ and $ V(2k_F) \\equiv 0 $ in the present work),\n\\begin{equation}\n\\frac{ \\delta {\\mathcal{T}} }{ {\\mathcal{T}}_0 } \\approx 4 X \\mbox{ } \\log(\\omega)\\approx {\\mathcal{R}}_0\\frac{ v_0 }{ \\pi v_F} \\mbox{ } \\log(\\omega)\n\\label{Matveev}\n\\end{equation}\nfor $ |v_0| \\ll v_F $.\nwhere $ {\\mathcal{R}}_0 = 1 - {\\mathcal{T}}_0 $ (in the notation of the present work this would be $ |R|^2 = 1- |T|^2 $ and $ \\omega \\rightarrow |k-k_F|d \\sim k_BT $. The equation (\\ref{Matveev}) is precisely equation (8) of Matveev et al. Thus mutually interacting fermions renormalize the impurities but isolated impurities do not renormalize the homogenous Luttinger parameters such as $ K = \\frac{v_F}{v_h} $. Note that our results for the conductance equation (\\ref{Cond}) is the {\\it{ end result }} of properly taking into account the renormalizations to all orders in the infinite-bandwidth-forward-scattering fermion-fermion interactions with no restriction on the bare transmission coefficient of free fermions plus impurity. The final answers of equation (\\ref{Cond}) involve only the bare transmission and reflection coefficients for the same reason why the zero point energy of the harmonic oscillator derived properly using Hermite polynomials (rather than using perturbative RG around free particle, say) involves the bare spring constant (ie. $ \\frac{1}{2} \\hbar \\sqrt{\\frac{ k }{m } } $). Incidentally, even the final answers of Matveev et al. 
such as their equation (13) involve the bare parameters only since this formula is the {\\it{end result}} of taking into account all the renormalization properly.\n\n It is hard to overstate the importance of these results. They show that it is possible to analytically interpolate between the weak barrier and weak link limits without involving RG techniques. It also shows that NCBT is nothing but non-perturbative RG in disguise.\n\n\n\\section{Resonant tunneling across a double barrier}\n\\begin{figure*}[t!]\n\\begin{center}\n\\includegraphics[scale=0.45]{Plots_asymmetric_double_delta_X}\\hspace{1.5cm}\n\\includegraphics[scale=0.45]{Plots_asymmetric_double_delta_A}\\\\\n(a)\\hspace{7cm}(b)\\\\\n\\end{center}\n\\caption{ Anomalous exponents for double barrier: The anomalous exponents (a) X and (b) A as functions of impurity strength $V_1$ and $V_2$ for an asymmetric double delta potential. Near resonance (the point of intersection of the cross lines), the system has the same colour it has when both $V_1$ and $V_2$ are zero.}\n\\label{densityplot}\n\\end{figure*}\n\nResonant tunneling is well-known in elementary quantum mechanics. Typically, this phenomenon is studied in a double-barrier system. When the Fermi wavenumber bears a special relation with the inter-barrier separation and height, the reflection coefficient becomes zero and the Green functions of the system behave as if they are those of a translationally invariant system. Consider a symmetric double delta-function with strength $V_0 $ and separation $ d $. Define, $ \\xi_0 = k_F d $. The resonance condition in this case is well-known to be,\n\n\\begin{equation}\n\\hspace{0.5 in }V_0 \\sin{[\\xi_0]} +v_F\\cos{[\\xi_0]}=0 \\label{eq:cond}\n\\end{equation}\nResonant tunneling is studied for a square double barrier potential in one dimensions by Zhi Xiao et al. \\cite{xiao2012revisiting}. 
After taking the limit of the square barriers tending to delta potentials and imposing the RPA limit, equation (\\ref{eq:cond}) is obtained.\n\nThe anomalous exponents of the correlation functions given in \\hyperref[AppendixA]{Appendix A} are plotted in fig. \\ref{resonance} in the vicinity of resonance to see the signatures of resonant tunneling in the Luttinger liquid Green function.\nIt may be seen that when the system is at resonance (depicted by the vertical line), all the anomalous exponents take exactly the same value that they take when there is no barrier at all.\\\\\n\nFor an asymmetric double delta system, $V(x)=V_1 \\delta(x+a)+ V_2 \\delta(x-a)$, the anomalous exponents can be calculated using NCBT. The form of the exponents is the same as given in \\hyperref[AppendixA]{Appendix A}, but the expression for the reflection amplitude is now different and is given by the following (here $\\xi_0= 2 k_F a$) \\cite{das2018quantum}:\n\\begin{equation}\n\\begin{aligned}\n\\label{asymmetric}\nR=&-\\frac{2 i \\frac{V_1 V_2}{v_F^2} \\sin{[\\xi_0]}+\\frac{2i}{v_F}(\\frac{V_1 e^{i \\xi_0}+V_2e^{-i\\xi_0}}{2})}{\\left(1+i\\frac{V_1+V_2}{v_F}+\\frac{i^2 V_1V_2}{v_F^2}\\right)+\\frac{V_1V_2}{v_F^2}e^{2 i \\xi_0}}\\\\\n\\end{aligned}\n\\end{equation}\n For this case also, resonance is achieved when $V_1$ and $V_2$ become equal ($V_1=V_2=V_0$) and $V_0$ obeys the same condition in equation $(\\ref{eq:cond})$. Two of the anomalous exponents, X and A (expressions given in equations (\\ref{luttingerexponents}) and (\\ref{asymmetric})), for the asymmetric double delta system are plotted in fig. (\\ref{densityplot}). The point of intersection of the cross lines is the condition for resonance and it can easily be seen that the exponent takes the same value (color) at the resonance point as it otherwise takes for the no-impurity system ($V_1=V_2=0$). 
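The chain of results above (scattering amplitudes, resonance condition, Kubo conductance) can be checked numerically. The sketch below transcribes the symmetric double-delta amplitudes of Section 2 and the Kubo formula (equation (\ref{kubo})) into Python; the values $V_0 = 2$ and $v_0 = 0.5$ are assumed purely for illustration. At a $\xi_0$ solving $V_0 \sin\xi_0 + v_F \cos\xi_0 = 0$, the reflection amplitude vanishes and the conductance reduces to the clean-wire value $g\, e^2/h$ with $g = v_F/v_h$.

```python
import cmath
import math

V_F = 1.0  # Fermi velocity set to unity, as in the figures

def T_double_delta(V0, xi0, v_F=V_F):
    # Transmission amplitude for V0*(delta(x+a) + delta(x-a)), xi0 = 2*k_F*a
    z = 1j * V0 / v_F
    return 1.0 / ((1 + z) ** 2 - (z * cmath.exp(1j * xi0)) ** 2)

def R_double_delta(V0, xi0, v_F=V_F):
    # Reflection amplitude from Section 2; shares its denominator with T
    num = 2j * (V0 ** 2 / v_F ** 2) * math.sin(xi0) \
        + 2j * (V0 / v_F) * math.cos(xi0)
    return -num * T_double_delta(V0, xi0, v_F)

def kubo_G(R_abs2, v_h, v_F=V_F):
    # Kubo conductance in units of e^2/h, equation (kubo)
    return (v_F / v_h) * (1 - (v_F / v_h) * R_abs2
                          / (1 - ((v_h - v_F) / v_h) * R_abs2))

V0 = 2.0
xi0_res = math.atan2(-V_F, V0)        # solves V0*sin(xi0) + v_F*cos(xi0) = 0
R_res = R_double_delta(V0, xi0_res)   # vanishes at resonance

v_h = math.sqrt(V_F ** 2 + 2 * V_F * 0.5 / math.pi)  # holon velocity, v_0 = 0.5
G_res = kubo_G(abs(R_res) ** 2, v_h)  # clean-wire value g = v_F/v_h
```

Away from resonance the amplitudes still satisfy $|R|^2 + |T|^2 = 1$, as they must for a real potential.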
\n\n\\section{Conclusion}\nThe correlation functions of an inhomogeneous Luttinger liquid obtained using the Non chiral bosonization are successfully used to calculate the conductance in the Kubo formalism as well as in a tunneling experiment. The formulas are valid for any strength of the impurities as well as that of the inter-particle interactions and various standard results are obtained as limiting cases of these formulas. The condition of resonant tunneling is also obtained and the behavior of the correlation functions near resonance is described. \n\n\n\\section*{APPENDIX A: Two point functions using NCBT}\n\\label{AppendixA}\n\\setcounter{equation}{0}\n\\renewcommand{\\theequation}{A.\\arabic{equation}}\n\nThe full Green function is the sum of all the parts. The notion of weak equality is introduced which is denoted by \\begin{small} $ A[X_1,X_2] \\sim B[X_1,X_2] $ \\end{small}. This really means \\begin{small} $ \\partial_{t_1} Log[ A[X_1,X_2] ] = \\partial_{t_1} Log[ B[X_1,X_2] ] $\\end{small} assuming that A and B do not vanish identically. In addition to this, the finite temperature versions of the formulas below can be obtained by replacing $ Log[Z] $ by $ Log[ \\frac{\\beta v_F }{\\pi}Sinh[ \\frac{\\pi Z}{ \\beta v_F} ] ] $ where $ Z \\sim (\\nu x_1 - \\nu^{'} x_2 ) - v_a (t_1-t_2) $ and singular cutoffs ubiquitous in this subject are suppressed in this notation for brevity - they have to be understood to be present. {\\bf Notation:} $X_i \\equiv (x_i,\\sigma_i,t_i)$ and $\\tau_{12} = t_1 - t_2$. 
\n\\scriptsize\n\n\\begin{equation}\n\\begin{aligned}\n\\Big\\langle T\\mbox{ }\\psi(X_1)\\psi^{\\dagger}(X_2) \\Big\\rangle \n=&\\Big\\langle T\\mbox{ }\\psi_{R}(X_1)\\psi_{R}^{\\dagger}(X_2) \\Big\\rangle +\\Big \\langle T\\mbox{ }\\psi_{L}(X_1)\\psi_{L}^{\\dagger}(X_2)\\Big\\rangle \\\\\n+&\\Big\\langle T\\mbox{ }\\psi_{R}(X_1)\\psi_{L}^{\\dagger}(X_2) \\Big\\rangle + \\Big\\langle T\\mbox{ }\\psi_{L}(X_1)\\psi_{R}^{\\dagger}(X_2)\\Big\\rangle \\\\\n\\label{break}\n\\end{aligned}\n\\end{equation}\n\n\\small\n\\begin{bf} Case I : $x_1$ and $x_2$ on the same side of the origin\\end{bf} \\\\ \\scriptsize\n\n\n\\begin{equation*}\n\\begin{aligned}\n\\Big\\langle T\\mbox{ }\\psi&_{R}(X_1)\\psi_{R}^{\\dagger}(X_2)\\Big\\rangle \\sim \n\\frac{(4x_1x_2)^{\\gamma_1}}{(x_1-x_2 -v_h \\tau_{12})^{P} (-x_1+x_2 -v_h \\tau_{12})^{Q}} \\\\\n\\times&\\frac{1}{ (x_1+x_2 -v_h \\tau_{12})^{X} (-x_1-x_2 -v_h \\tau_{12})^{X} (x_1-x_2 -v_F \\tau_{12})^{0.5}}\\\\\n\\Big\\langle T\\mbox{ }\\psi&_{L}(X_1)\\psi_{L}^{\\dagger}(X_2)\\Big\\rangle \\sim \n\\frac{(4x_1x_2)^{\\gamma_1}}{(x_1-x_2 -v_h \\tau_{12})^{Q} (-x_1+x_2 -v_h \\tau_{12})^{P}} \\\\\n\\times&\\frac{1}{ (x_1+x_2 -v_h \\tau_{12})^{X} (-x_1-x_2 -v_h \\tau_{12})^{X}(-x_1+x_2 -v_F \\tau_{12})^{0.5}}\\\\\n\\Big\\langle T\\mbox{ }\\psi&_{R}(X_1)\\psi_{L}^{\\dagger}(X_2)\\Big\\rangle \\sim \n\\frac{(2x_1)^{\\gamma_1}(2x_2)^{1+\\gamma_2}+(2x_1)^{1+\\gamma_2}(2x_2)^{\\gamma_1}}{2(x_1-x_2 -v_h \\tau_{12})^{S} (-x_1+x_2 -v_h \\tau_{12})^{S}} \\\\\n\\times&\\frac{1}{ (x_1+x_2 -v_h \\tau_{12})^{Y} (-x_1-x_2 -v_h \\tau_{12})^{Z}(x_1+x_2 -v_F \\tau_{12})^{0.5}}\\\\\n\\end{aligned}\n\\end{equation*}\n\n\\begin{equation}\n\\begin{aligned}\n\\Big\\langle T\\mbox{ }\\psi&_{L}(X_1)\\psi_{R}^{\\dagger}(X_2)\\Big\\rangle \\sim \n\\frac{(2x_1)^{\\gamma_1}(2x_2)^{1+\\gamma_2}+(2x_1)^{1+\\gamma_2}(2x_2)^{\\gamma_1}}{2(x_1-x_2 -v_h \\tau_{12})^{S} (-x_1+x_2 -v_h \\tau_{12})^{S}} \\\\\n\\times&\\frac{1}{ (x_1+x_2 -v_h \\tau_{12})^{Z} (-x_1-x_2 -v_h 
\\tau_{12})^{Y}(-x_1-x_2 -v_F \\tau_{12})^{0.5}}\\\\\n\\label{SS}\n\\end{aligned}\n\\end{equation}\n\n\\small\n\\begin{bf}Case II : $x_1$ and $x_2$ on opposite sides of the origin\\end{bf} \\\\ \\scriptsize\n\n\\begin{equation}\n\\begin{aligned}\n\\Big\\langle T\\mbox{ }\\psi&_{R}(X_1)\\psi_{R}^{\\dagger}(X_2)\\Big\\rangle \\sim \n\\frac{(2x_1)^{1+\\gamma_2}(2x_2)^{\\gamma_1} }{2(x_1-x_2 -v_h \\tau_{12})^{A} (-x_1+x_2 -v_h \\tau_{12})^{B}} \\\\\n\\times&\\frac{(x_1+x_2)^{-1}(x_1+x_2 + v_F \\tau_{12})^{0.5}}{ (x_1+x_2 -v_h \\tau_{12})^{C} (-x_1-x_2 -v_h \\tau_{12})^{D} (x_1-x_2 -v_F \\tau_{12})^{0.5}}\\\\\n&\\hspace{2cm}+\\frac{(2x_1)^{\\gamma_1} (2x_2)^{1+\\gamma_2}}{2(x_1-x_2 -v_h \\tau_{12})^{A} (-x_1+x_2 -v_h \\tau_{12})^{B}} \\\\\n\\times&\\frac{(x_1+x_2)^{-1}(x_1+x_2 - v_F \\tau_{12})^{0.5}}{ (x_1+x_2 -v_h \\tau_{12})^{D} (-x_1-x_2 -v_h \\tau_{12})^{C} (x_1-x_2 -v_F \\tau_{12})^{0.5}}\\\\\n\\Big\\langle T\\mbox{ }\\psi&_{L}(X_1)\\psi_{L}^{\\dagger}(X_2)\\Big\\rangle \\sim \n\\frac{(2x_1)^{1+\\gamma_2}(2x_2)^{\\gamma_1} }{2(x_1-x_2 -v_h \\tau_{12})^{B} (-x_1+x_2 -v_h \\tau_{12})^{A}} \\\\\n\\times&\\frac{(x_1+x_2)^{-1}(x_1+x_2 - v_F \\tau_{12})^{0.5}}{ (x_1+x_2 -v_h \\tau_{12})^{D} (-x_1-x_2 -v_h \\tau_{12})^{C} (-x_1+x_2 -v_F \\tau_{12})^{0.5}}\\\\\n&\\hspace{2cm}+\\frac{(2x_1)^{\\gamma_1} (2x_2)^{1+\\gamma_2}}{2(x_1-x_2 -v_h \\tau_{12})^{B} (-x_1+x_2 -v_h \\tau_{12})^{A}} \\\\\n\\times&\\frac{(x_1+x_2)^{-1}(x_1+x_2 + v_F \\tau_{12})^{0.5}}{ (x_1+x_2 -v_h \\tau_{12})^{C} (-x_1-x_2 -v_h \\tau_{12})^{D} (-x_1+x_2 -v_F \\tau_{12})^{0.5}}\\\\\n\\Big\\langle T\\mbox{ }\\psi&_{R}(X_1)\\psi_{L}^{\\dagger}(X_2)\\Big\\rangle \\sim \\mbox{ }0\\\\\n\\Big\\langle T\\mbox{ }\\psi&_{L}(X_1)\\psi_{R}^{\\dagger}(X_2)\\Big\\rangle \\sim \\mbox{ }0\\\\\n\\label{OS}\n\\end{aligned}\n\\end{equation}\n\\normalsize\nwhere\n\\footnotesize\n\\begin{equation}\nQ=\\frac{(v_h-v_F)^2}{8 v_h v_F} \\mbox{ };\\mbox{ } X=\\frac{|R|^2(v_h-v_F)(v_h+v_F)}{8 v_h (v_h-|R|^2 (v_h-v_F))} \\mbox{ 
};\mbox{ }C=\\frac{v_h-v_F}{4v_h}\n\\label{luttingerexponents}\\end{equation}\n\\normalsize\nThe other exponents can be expressed in terms of the above exponents.\n\\footnotesize\n\\begin{equation*}\n\\begin{aligned}\n&P= \\frac{1}{2}+Q \\mbox{ };\\hspace{0.8 cm} S=\\frac{Q}{C}( \\frac{1}{2}-C) \\mbox{ };\\hspace{0.85 cm} Y=\\frac{1}{2}+X-C ; \\\\\n& Z=X-C\\mbox{ };\\hspace{0.8 cm} A=\\frac{1}{2}+Q-X \\mbox{ };\\hspace{0.8 cm} B=Q-X \\mbox{ };\\hspace{1 cm} \\\\\n&D=-\\frac{1}{2}+C \\mbox{ };\\hspace{.6 cm} \\gamma_1=X \\mbox{ };\\hspace{1.65 cm} \\gamma_2=-1+X+2C;\\\\\n\\end{aligned}\n\\end{equation*}\n\\normalsize\n\\section*{APPENDIX B: Conductance of a quantum wire}\n\\label{AppendixB}\n\\setcounter{equation}{0}\n\\renewcommand{\\theequation}{B.\\arabic{equation}}\n\n\nIn this section, the conductance of a quantum wire with no leads is discussed, first using Kubo's formula and next using the idea that it is the outcome of a tunneling experiment.\n\\subsection{Kubo formalism}\nThe electric field is $ E(x,t) = \\frac{ V_g }{ L} $ between $ -\\frac{L}{2} < x < \\frac{L}{2} $ and $ E(x,t) = 0 $ for $ |x| > \\frac{L}{2} $. Here $ V_g $ is the voltage between the two extreme points. Thus a d.c. situation is being considered right from the start. 
This corresponds to a vector potential ($c$ is the velocity of light),\n\[\nA(x,t) = \\left\\{\n \\begin{array}{ll}\n -\\frac{ V_g }{ L} (ct), & \\hbox{ $ -\\frac{L}{2} < x < \\frac{L}{2} $ ;} \\\\\n 0, & \\hbox{otherwise.}\n \\end{array}\n\\right.\n\]\nThis means (since $ j \\approx j_s $, the slow part),\n\\begin{equation}\n\\begin{aligned}\n<j(x,\\sigma,t)> = &\\frac{ie}{c}\\sum_{ \\sigma^{'} }\n\\int^{L\/2}_{-L\/2} dx^{'}\\mbox{ } \\int_{-\\infty}^{t} dt^{'} \\\\\n&\\times\\frac{ V_g }{ L} (ct') < [j(x,\\sigma,t),j(x^{'},\\sigma^{'},t^{'})]>_{LL}\n\\label{gencond}\n\\end{aligned}\n\\end{equation}\n\n\\subsubsection{ Clean wire: $ |R| = 0 $ but $ v_0 \\neq 0 $ }\nUsing the Green function from equation (\\ref{SS}) and setting $|R|=0$, the current-current commutation relation can be calculated as,\n\\footnotesize\n\\begin{equation}\n\\begin{aligned}\n<[j_s&(x,\\sigma,t),j_s(x',\\sigma',t')]> \n=-\\frac{v^2_F }{ 8\\pi^2 } \\mbox{ } \\sum_{ \\nu = \\pm 1 }\n (2 \\pi i) \\\\\n&\\partial_{ v_F t' }\\left( \\delta( x-x' + \\nu v_h(t-t') ) + \\sigma \\sigma' \\mbox{ }\\delta( x-x' + \\nu v_F(t-t') ) \\right)\n\\label{cleancond}\n\\end{aligned}\n\\end{equation}\n\\normalsize\nInserting equation (\\ref{cleancond}) into equation (\\ref{gencond}), the following is obtained.\\footnotesize\n\\begin{equation*}\n\\begin{aligned}\n<j(x,\\sigma,t)> = \\frac{ie}{c}\\sum_{ \\sigma^{'} }\n\\int^{\\frac{L}{2}}_{-\\frac{L}{2}} dx^{'} \\int_{-\\infty}^{t} dt^{'} \\mbox{ }\\frac{ V_g }{ L} (ct') \n\\Big(\\frac{-v^2_F }{ 8\\pi^2 } \\mbox{ } \\sum_{ \\nu = \\pm 1 }\n (2 \\pi i) \\mbox{ }\\\\\n\\times&\\partial_{ v_F t' }\\left( \\delta( x-x' + \\nu v_h(t-t') ) \n\\mbox{ }+ \\sigma \\sigma' \n\\delta( x-x' + \\nu v_F(t-t') ) \\right)\\Big)\n\\end{aligned}\n\\end{equation*}\\normalsize\nFinally,\n\[\n<j(x,\\sigma,t)> = -\n\\mbox{ } V_g \\frac{e }{ (2\\pi) } \\frac{ v_F }{ v_h}\n\]\nor,\n\[\nI = (-e)<j(x,\\sigma,t)> =\n\\mbox{ } V_g \\frac{e^2 }{ (2\\pi) } \\frac{ v_F }{ v_h}\n\]\nThis gives the formula for the conductance (per spin) for a clean 
quantum wire with interactions,\n\[\nG = \\frac{ e^2}{2\\pi } \\frac{ v_F }{ v_h}\n\]\n\\normalsize\nor in proper units,\n\[\n\\begin{boxed}\n{G = \\frac{ e^2}{2\\pi \\hbar}\\mbox{ } \\frac{ v_F }{ v_h} = \\frac{ e^2}{h} \\mbox{ }\\frac{ v_F }{ v_h} }\n\\end{boxed}\n\]\nA comparison of standard g-ology with the presently chosen model gives the following identifications (Eq.(2.105) of Giamarchi \\cite{giamarchi2004quantum}).\n\\begin{equation*}\n\\begin{aligned}\n&g_{1,\\perp} = g_{1,\\parallel} = 0\n\\\\&\ng_{2,\\perp} = g_{2,\\parallel} = g_{4,\\perp} = g_{4,\\parallel} = v_0\n\\\\&\ng_{ \\rho } = g_{1,\\parallel} - g_{2,\\parallel} - g_{ 2, \\perp} = 0-v_0-v_0 = -2v_0\n\\\\&\ng_{ \\sigma }= g_{1,\\parallel} - g_{2,\\parallel} + g_{ 2, \\perp}= 0-v_0+v_0 = 0\n\\\\\n&\ng_{4,\\rho} = g_{4,\\parallel}+ g_{ 4,\\perp} = 2 v_0\n\\\\&\ng_{4,\\sigma} = g_{4,\\parallel} - g_{ 4,\\perp} = 0\n\\\\&\ny_{ \\rho } = g_{ \\rho }\/( \\pi v_F ) = - \\frac{2 v_0 }{ \\pi v_F }\n\\\\&\ny_{ \\sigma } = g_{ \\sigma } \/ ( \\pi v_F ) = 0\n\\\\&\ny_{4,\\rho} = g_{4,\\rho }\/(\\pi v_F) = 2 v_0\/(\\pi v_F)\n\\\\&\ny_{4,\\sigma} = g_{4,\\sigma }\/(\\pi v_F) = 0\n\\end{aligned}\n\\end{equation*}\n\\begin{equation*}\n\\begin{aligned}\nu_{ \\rho } =& v_F \\sqrt{ (1+y_{4,\\rho}\/2)^2 -(y_{\\rho}\/2)^2 }\\\\\n =& v_F \\sqrt{ 1+2v_0\/(\\pi v_F) } \\equiv v_h\n\\end{aligned}\n\\end{equation*}\n\\begin{equation*}\\small\n\\begin{aligned}\nK_{ \\rho } =& \\sqrt{ \\frac{1 + y_{4,\\rho}\/2+y_{\\rho}\/2}{1 + y_{4,\\rho}\/2-y_{\\rho}\/2} }\n = \\sqrt{ \\frac{1 }{1 + 2v_0\/(\\pi v_F)} } = \\frac{ v_F }{ v_h }\n\\end{aligned}\n\\end{equation*}\\normalsize\n\[\nu_{ \\sigma } = v_F \\sqrt{ (1+ y_{4,\\sigma}\/2)^2 - (y_{\\sigma}\/2)^2 } = v_F\n\]\n\[\nK_{\\sigma} = \\sqrt{ \\frac{1 + y_{4,\\sigma}\/2 + y_{\\sigma}\/2 }{1 + y_{4,\\sigma}\/2 - y_{\\sigma}\/2 } } = 1\n\]\n\nThis gives,\n\n\[\n\\begin{boxed}\n{G = \\frac{ e^2}{h} \\mbox{ }\\frac{ v_F }{ v_h} = \\frac{ e^2}{h} 
\\mbox{ } K_{\\rho}}\n\\end{boxed}\n\]\nwhich is the standard result for a clean quantum wire.\n\n\n\n\\subsubsection{ The general case: $ |R| > 0 $ and $ v_0 \\neq 0 $ }\n\nAgain, using the Green function from equation (\\ref{SS}) for a general value of $|R|$, the current-current commutation relation can be calculated as,\n\\begin{equation*}\n\\begin{aligned}\n<[&j_s(x,\\sigma,t),j_s(x',\\sigma',t')]> \\\\\n=& - (2 \\pi i) \\frac{v_F v_h^2 }{ 8\\pi^2 v_h } \\mbox{ }\\partial_{v_h t'} \\sum_{ \\nu = \\pm 1 }\\bigg ( \\delta ( \\nu(x-x') + v_h(t-t') )\n\\\\&\\hspace{1.5cm}- \\frac{v_F }{v_h} \\mbox{ }Z_h\\mbox{ }\n\\delta ( \\nu(|x|+|x' |) + v_h(t-t') )\n\\bigg)\n\\\\\n&\n - (2 \\pi i)\n\\frac{\\sigma\\sigma' v_F^2}{ 8\\pi^2 } \\mbox{ } \\partial_{v_Ft'}\\sum_{ \\nu = \\pm 1 }\\bigg ( \\delta ( \\nu(x-x') + v_F(t-t') )\n\t\\\\&\\hspace{1.5cm}-|R|^2 \\delta ( \\nu(|x|+|x' |) + v_F(t-t') )\n\\bigg)\n\\end{aligned}\n\\end{equation*}\nwhere,\n\[\n\\hspace{1 in} Z_h = \\frac{ |R|^2 }{ \\bigg( 1 - \\frac{(v_h-v_F)}{ v_h }\n |R|^2 \\bigg) }\n\]\nThus,\n\\begin{equation*}\n\\begin{aligned}\n<j_s(x,\\sigma,t)> =& ie \\sum_{ \\sigma^{'} }\n\\int^{L\/2}_{-L\/2} dx^{'}\\mbox{ } \\int_{-\\infty}^{t} dt^{'}\\partial_{v_ht^{'}} \\mbox{ }\\frac{ V_g }{ L}\n (2 \\pi i) \\frac{v_F }{ 8\\pi^2 } \\mbox{ }\\\\\n&\\sum_{ \\nu = \\pm 1 }\\bigg ( \\theta( -\\nu(x-x') - v_h(t-t') )\n\\\\&- \\frac{v_F }{v_h} \\mbox{ }Z_h\\mbox{ }\n\\theta ( -\\nu(|x|+|x' |) - v_h(t-t') )\n\\bigg)\n\\end{aligned}\n\\end{equation*}\ntherefore,\n\[\n<j_s(x,\\sigma,t)> =\n\\frac{2 ie }{v_h}\n V_g\n (2 \\pi i) \\frac{v_F }{ 8\\pi^2 } \\mbox{ }\\bigg (1\n- \\frac{v_F }{v_h} \\mbox{ }Z_h\n\\bigg)\n\]\nThe conductance of a quantum wire without leads but in the presence of barriers and wells is,\n\[\nG = \\frac{ e^2 }{(2\\pi)}\n \\frac{v_F }{ v_h } \\mbox{ }\\bigg (1\n- \\frac{v_F }{v_h} \\mbox{ }Z_h\n\\bigg)\n\]\nHence the general formula for the conductance of a quantum wire without leads but with electrons experiencing forward scattering 
short-range mutual interactions\nand in the presence of a finite number of barriers and wells clustered around an origin is (in proper units),\n\\begin{equation}\n\\begin{boxed}\n{G = \\frac{ e^2 }{h}\n \\frac{v_F }{ v_h } \\mbox{ }\\bigg (1\n- \\frac{v_F }{v_h} \\mbox{ }Z_h\n\\bigg)}\n\\end{boxed}\n\\end{equation}\nThe above general formula agrees with the three well-known limiting cases.\n\\\\ \\mbox{ } \\\\\n(i) when $ v_h = v_F $, Landauer's formula $ G = \\frac{ e^2 }{ h } \\mbox{ }|T|^2 $ is recovered.\n\\\\ \\mbox{ } \\\\\n(ii) when $ |R| = 0 $, the formula $ G = \\frac{ e^2 }{ h } \\mbox{ }K_{\\rho} $ is also recovered.\n\\\\ \\mbox{ } \\\\\n(iii) when $ |R| = 1 $, $ G = 0 $ regardless of what $ v_h $ is.\n\\\\ \\mbox{ } \\\\\n\n\n\\subsection{ Conductance from a tunneling experiment }\n\nIf the conduction process is envisaged as a tunneling phenomenon, as against the usual Kubo-formula-based approach, which involves relating the conductance to the current-current correlation, a qualitatively different formula for the conductance is obtained.\n\n\n\nFirst observe that the quantities $ |T|^2 $ and $ K_{ \\rho } $ both serve as a ``transmission coefficient\" - the former when mutual interactions are absent but barriers and wells are present and the latter vice versa. 
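The boxed conductance formula and its three limiting cases can be checked numerically. The following is a minimal sketch (not part of the paper; the function name and sample velocities are arbitrary), using $Z_h = |R|^2/(1-|R|^2(v_h-v_F)/v_h)$ as defined above:

```python
from math import isclose

def conductance(vh, vF, R2):
    """G per spin in units of e^2/h from the boxed formula; R2 = |R|^2."""
    Zh = R2 / (1 - R2 * (vh - vF) / vh)
    return (vF / vh) * (1 - (vF / vh) * Zh)

# (i) vh = vF: Landauer's formula, G = |T|^2 = 1 - |R|^2.
assert isclose(conductance(1.0, 1.0, 0.25), 0.75)
# (ii) |R| = 0: G = K_rho = vF/vh.
assert isclose(conductance(2.0, 1.0, 0.0), 0.5)
# (iii) |R| = 1: G = 0 regardless of vh.
assert isclose(conductance(2.0, 1.0, 1.0), 0.0, abs_tol=1e-12)
```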
Both of these may be related to the spectral function of the field operator (single particle spectral function) as follows.\n\\begin{equation*}\n\\begin{aligned}\n&v_F\\int^{\\infty}_{-\\infty}dt\\mbox{ }<\\{ \\psi_{ \\nu } (x,\\sigma,t) , \\psi^{\\dagger}_{ \\nu } (x',\\sigma,0) \\}>\n \\\\&= -(2\\pi i)\\sum_{ \\gamma,\\gamma^{'} = \\pm 1 } \\theta( \\gamma x ) \\theta( \\gamma^{'} x^{'}) g_{ \\gamma,\\gamma^{'} }(\\nu,\\nu)\n\\\\&\nv_F\\int^{\\infty}_{-\\infty}dt\\mbox{ }<\\{ \\psi_{ \\nu } (\\nu \\frac{L}{2},\\sigma,t) , \\psi^{\\dagger}_{ \\nu } (-\\nu \\frac{L}{2},\\sigma,0) \\}>\n\\\\&= -(2\\pi i)g_{ \\nu,-\\nu }(\\nu,\\nu)\n\\\\&\nv_F\\int^{\\infty}_{-\\infty}dt\\mbox{ }<\\{ \\psi_{ \\nu } (\\nu \\frac{L}{2},\\sigma,t) , \\psi^{\\dagger}_{ \\nu } (-\\nu \\frac{L}{2},\\sigma,0) \\}>\n = T\n\\end{aligned}\n\\end{equation*}\nwhere the $g_{ \\gamma,\\gamma^{'} }(\\nu,\\nu)$ are functions of the reflection (R) and the transmission (T) amplitudes of the system and are given explicitly as follows.\n\\footnotesize\n\\begin{equation}\n\\begin{aligned}\n\\hspace*{-0.2 cm}\n\\label{gexp}\ng_{\\gamma_1,\\gamma_2} (\\nu_1,\\nu_2)=\\frac{i}{2\\pi}& \\Big[ \\delta_{\\nu_1,\\nu_2} \\delta_{\\gamma_1,\\gamma_2} \\\\\n&+(T \\delta_{\\nu_1,\\nu_2}+R \\delta_{\\nu_1,-\\nu_2})\\delta_{\\gamma_1,\\nu_1}\\delta_{\\gamma_2,-\\nu_2}\\\\\n&+(T^{*} \\delta_{\\nu_1,\\nu_2}+R^{*} \\delta_{\\nu_1,-\\nu_2})\\delta_{\\gamma_1,-\\nu_1}\\delta_{\\gamma_2,\\nu_2}\\Big]\n\\end{aligned}\n\\end{equation}\n\\normalsize\nFrom this point of view, the conductance is related to the magnitude of the above complex number. Choosing it to be proportional to the magnitude of this complex number (rather than the square of the magnitude) allows perfect agreement with the RG equations of Matveev et al. 
\\cite{matveev1993tunneling} as we have seen in the main text ($|T|$ is the magnitude of the transmission amplitude of free fermions plus impurity):\\small\n\\begin{equation}\nG = \\frac{ e^2 }{h} \\mbox{ }|T|\\mbox{ }\n| v_F\\int^{\\infty}_{-\\infty}dt\\mbox{ }<\\{ \\psi_{ R } ( \\frac{L}{2},\\sigma,t) , \\psi^{\\dagger}_{ R } (-\\frac{L}{2},\\sigma,0) \\}>\n |\n \\label{TUNNEL}\n\\end{equation}\\normalsize\nNote that the above formula is {\\bf{not related}} to the square of the dynamical density of states. The dynamical density of states is the\nequal-space, unequal-time Green function. For tunneling, an electron is injected at $ x = - L\/2 $\nand collected at $ x^{'} = + L\/2 $, as is the case here; this involves the unequal-space, unequal-time Green function, i.e. the Green function for the electron traversing the impurity.\nTechnically speaking, the g-ology methods are able to handle only the no-barrier case and the half-line case properly; hence for a weak link they are sometimes forced to surmise that the conductance has something to do with the dynamical density of states of a half line near the weak link. The present approach is not only different but physically more sensible and compelling. 
Using the Green function from equation (\\ref{OS}),\n\\footnotesize\n\\begin{equation*}\n\\begin{aligned}\n\\Big\\langle T\\psi_R(\\frac{L}{2},\\sigma,t)&\\psi_R^{\\dagger}(-\\frac{L}{2},\\sigma,0)\\Big\\rangle\n=\\frac{i}{2\\pi}\\mbox{ }e^{-\\frac{1}{2} \\log{[L-v_Ft]}}\n\\\\&\\times e^{-\\frac{1}{2} \\log{[L-v_ht]}}e^{-\\frac{(v_h-v_F)^2}{8 v_h v_F} \\log{\\vline \\frac{ L^2-(v_ht)^2 }{ L_{ \\omega }^2 }\\vline }}\n\\end{aligned}\n\\end{equation*}\n\\normalsize\nHence,\n\\footnotesize\n\\begin{equation*}\n\\begin{aligned}\n\\Big\\langle \\{\\psi_R(\\frac{L}{2}&,\\sigma,t),\\psi_R^{\\dagger}(-\\frac{L}{2},\\sigma,0)\\}\\Big\\rangle\n=\\frac{i}{2\\pi}\\mbox{ }e^{-\\frac{1}{2} \\log{[L-v_F(t-i\\epsilon)]}}\\\\\n&\\times e^{-\\frac{1}{2} \\log{[L-v_h(t-i\\epsilon)]}}e^{-\\frac{(v_h-v_F)^2}{8 v_h v_F} \\log{\\vline \\frac{ L^2-(v_h(t-i\\epsilon))^2 }{ L_{ \\omega }^2 }\\vline }}\\\\\n&\\hspace{1 in} - \\frac{i}{2\\pi}\\mbox{ }e^{-\\frac{1}{2} \\log{[L-v_F(t+i\\epsilon)]}}\n\\\\&\\times e^{-\\frac{1}{2} \\log{[L-v_h(t+i\\epsilon)]}}e^{-\\frac{(v_h-v_F)^2}{8 v_h v_F} \\log{\\vline \\frac{ L^2-(v_h(t+i\\epsilon))^2 }{ L_{ \\omega }^2 }\\vline }}\n\\end{aligned}\n\\end{equation*}\\normalsize\nwhile integrating over $ t $ the only regions that contribute are $ L-v_F t \\approx 0 $ and $ L - v_h t \\approx 0 $. When $ v_h \\neq v_F $ these two are different regions. Set $ L - v_F t = y $ then $ L - v_h t =\nL - \\frac{v_h}{v_F} (L-y) $ and $ L + v_h t =\nL + \\frac{v_h}{v_F} (L-y) $. The implication is, integration over $ t $ is now integration over $ y $ and this is important only when $ y $ is close to zero. Next set $ L - v_h t = y^{'} $ then $ L + v_h t = 2L -y^{'} $ and\n $ L - v_F t =\nL - \\frac{v_F}{v_h} (L-y^{'}) $ and the integrals are important only when $ y^{'} $ is close to zero. 
This means,\n\\small\n\\begin{equation*}\n\\begin{aligned}\nv_F &\\int^{\\infty}_{-\\infty } dt \\mbox{ }\\Big\\langle \\{\\psi_R(\\frac{L}{2},\\sigma,t),\\psi_R^{\\dagger}(-\\frac{L}{2},\\sigma,0)\\}\\Big\\rangle\n\\\\=&\n \\int^{\\infty}_{-\\infty } dy \\mbox{ }\\frac{i}{2\\pi}\\mbox{ }\n \\left( e^{-\\frac{1}{2} \\log{[y+v_Fi\\epsilon]}} - e^{-\\frac{1}{2} \\log{[y-v_Fi\\epsilon]}} \\right) \\mbox{ }\\\\\n&\\hspace{1.2cm}e^{-\\frac{1}{2} \\log{[L (1- \\frac{v_h}{v_F}) + \\frac{v_h}{v_F} y ]}}\ne^{-\\frac{(v_h-v_F)^2}{8 v_h v_F} \\log{\\vline \\frac{ L^2- \\frac{v^2_h}{v^2_F} (L-y)^2 }{ L_{ \\omega }^2 }\\vline }}\n\\\\\n+& \\frac{v_F}{v_h} \\int^{\\infty}_{-\\infty } dy^{'} \\mbox{ }\\frac{i}{2\\pi}\\mbox{ }\\left( e^{-\\frac{1}{2} \\log{[y^{'} + v_h i\\epsilon ]}} -e^{-\\frac{1}{2} \\log{[y^{'} - v_h i\\epsilon ]}} \\right)\\\\\n&\\hspace{1.2cm}e^{-\\frac{1}{2}\\log{[L (1- \\frac{v_F}{v_h}) + \\frac{v_F}{v_h} y^{'} ]}}\n \\mbox{ } e^{-\\frac{(v_h-v_F)^2}{8 v_h v_F} \\log{\\vline \\frac{ y^{'}(2L-y^{'}) }{ L_{ \\omega }^2 }\\vline }}\n\\end{aligned}\n\\end{equation*}\n\\normalsize\nOnly the dependence on $ L $ is of interest. Write $ y = L \\mbox{ }s $ and $ y^{'} = L \\mbox{ } s^{'}$. 
Hence,\n\\begin{equation*}\n\\begin{aligned}\nv_F \\int^{\\infty}_{-\\infty } dt& \\mbox{ }\\Big\\langle \\{\\psi_R(\\frac{L}{2},\\sigma,t),\\psi_R^{\\dagger}(-\\frac{L}{2},\\sigma,0)\\}\\Big\\rangle\\\\\t\n&\\sim e^{-\\frac{(v_h-v_F)^2}{8 v_h v_F} \\log{\\vline \\frac{ L^2 }{ L_{ \\omega }^2 }\\vline }}\n\\end{aligned}\n\\end{equation*}\nThis means the tunneling conductance of a clean (no barrier) quantum wire scales as,\n\\begin{equation*}\n\\begin{aligned}\nG_{clean} \\sim \\frac{e^2}{h } &\\mbox{ } \\mbox{ }\ne^{-\\frac{(v_h-v_F)^2}{4 v_h v_F} \\log{\\vline \\frac{ L }{ L_{ \\omega } }\\vline }}\n\\sim& \\left( \\frac{ L_{ \\omega } }{ L } \\right)^{ \\frac{1}{4} ( K_{ \\rho } + \\frac{1}{ K_{ \\rho } } - 2 ) }\n\\end{aligned}\n\\end{equation*}\n\\normalsize\nwhere $ L_{ \\omega } = \\frac{ v_F }{ k_B T } $ is the length scale associated with temperature\n(or frequency, since $ k_BT $ is interchangeable with $ \\omega $). It says that at low temperatures, the tunneling d.c. conductance of a clean quantum wire with no leads but\n with interactions ($ v_h \\neq v_F $) diverges as a power law with exponent $ \\frac{1}{4} ( K_{ \\rho } + \\frac{1}{ K_{ \\rho } } - 2 ) > 0 $.\n Fortuitously, the magnitude of this exponent matches the exponent of the dynamical density of states of a clean wire (no impurity). However, when impurities (or a weak link) are present, there is no guarantee that this coincidence will persist.\n For a clean wire there is nothing for an electron to tunnel across, so this exercise is pointless. What should be studied is tunneling across a weak link. The general case involves including a finite number of finite barriers and wells clustered around the origin. This case is solved elegantly here, where a closed formula for the conductance exponents may be obtained,\nunlike in competing approaches found in the literature, where a combination of RG and other approaches is needed that falls well short of providing a closed expression for the exponents. 
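As a quick consistency check (a sketch, not part of the paper), the clean-wire tunneling exponent written in terms of $K_{\rho}$ agrees with $2Q$ from the exponent dictionary of Appendix A:

```python
from math import isclose

vh, vF = 1.9, 1.0                     # arbitrary repulsive values, vh > vF
K_rho = vF / vh                       # K_rho = vF/vh for this model
Q = (vh - vF)**2 / (8 * vh * vF)
# G_clean ~ (L_omega/L)^eta with eta = (K_rho + 1/K_rho - 2)/4 = 2Q > 0
assert isclose((K_rho + 1 / K_rho - 2) / 4, 2 * Q)
```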
{\\it{ More importantly, the present approach is able to provide an analytical interpolation from the weak barrier limit (see above) to the weak link limit to be discussed below - something the competing approaches are incapable of doing without solving complicated RG flow equations, often numerically. }}\n\nIn the general case with the barriers and wells, the Green function for points on opposite sides of the origin has a form that is qualitatively different from the form when the points are on the same side of the origin. This is the really striking prediction of this work.\n\n\n\\subsection{ With the impurities }\n\n\\noindent Consider the general Green function for $ xx^{'} < 0 $ (equation (\\ref{OS})). From that it is possible to conclude\n($ W = g_{1,-1}(1,1)\\theta(x)\\theta(-x')+g_{-1,1}(1,1)\\theta(-x)\\theta(x') $),\n\\begin{equation}\n\\begin{aligned}\n<T\\mbox{ }\\psi_R(\\frac{L}{2},\\sigma,t)\\psi_R^{\\dagger}(-\\frac{L}{2},\\sigma,0)>\n=\\frac{v_F+v_h}{2 \\sqrt{v_F v_h}} \\mbox{ }g_{1,-1}(1,1)\\mbox{ }\\\\\n&e^{(2X+2C)\\log{[L]}}\\mbox{ }e^{-\\frac{1}{2} \\log{[L-v_Ft ]}}e^{- \\frac{1}{2}\\log{[L-v_ht]}}\\\\\n&e^{- (Q-X)\n\\log{[L^2-(v_ht)^2]}}e^{-C\n\\log{[-(v_ht)^2]}}\n\\end{aligned}\n\\end{equation}\nSince $ G \\sim | v_F \\int^{ \\infty }_{-\\infty } dt <\\{\\psi_R(\\frac{L}{2},\\sigma,t),\\psi_R^{\\dagger}(-\\frac{L}{2},\\sigma,0)\\}> | $, it is possible to read off the conductance exponent as follows,\n\\begin{equation}\nG \\sim \\left( \\frac{ L }{ L_{ \\omega } }\\right)^{-2Q } \\mbox{ } \\left( \\frac{ L }{ L_{ \\omega} }\\right)^{ 4X }\n\\label{GGEN}\n\\end{equation}\nwhere $ Q=\\frac{(v_h-v_F)^2}{8 v_h v_F} $ and\n $ X=\\frac{|R|^2(v_h-v_F)(v_h+v_F)}{8 v_h (v_h-|R|^2 (v_h-v_F))} $.\n\\\\\nIt is easy to see that for a vanishing barrier $ |R| \\rightarrow 0 $, the earlier result of the conductance of a clean quantum wire is recovered. The other interesting limit is the weak link limit where $ |R| \\rightarrow 1 $. 
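Both limits of the conductance exponent $4X-2Q$ above can be verified numerically; the sketch below (the helper name `eta` is invented for illustration) checks the vanishing-barrier limit against the clean-wire result and evaluates the $|R|\rightarrow 1$ weak-link limit:

```python
from math import isclose

def eta(vh, vF, R2):
    """Exponent in G ~ (L/L_omega)^eta, i.e. eta = 4X - 2Q; R2 = |R|^2."""
    Q = (vh - vF)**2 / (8 * vh * vF)
    X = R2 * (vh - vF) * (vh + vF) / (8 * vh * (vh - R2 * (vh - vF)))
    return 4 * X - 2 * Q

vh, vF = 1.6, 1.0
# |R| -> 0: the clean-wire exponent -2Q = -(vh-vF)^2/(4 vh vF) is recovered.
assert isclose(eta(vh, vF, 0.0), -(vh - vF)**2 / (4 * vh * vF))
# |R| -> 1 (weak link): eta -> ((vh+vF)^2 - 4 vF^2)/(4 vh vF).
assert isclose(eta(vh, vF, 1.0), ((vh + vF)**2 - 4 * vF**2) / (4 * vh * vF))
```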
The limiting case of the weak link is two semi-infinite wires.\nIn this case,\n\\begin{equation}\nG_{weak-link} \\sim \\left( \\frac{ L }{ L_{ \\omega} }\\right)^{ \\frac{ (v_h + v_F)^2-4v^2_F }{ 4 v_h v_F } }\n\\label{GWEAKLINK}\n\\end{equation}\nHence the d.c. conductance scales as $ G_{weak-link} \\sim (k_B T)^{ \\frac{ (v_h + v_F)^2-4v^2_F }{ 4 v_h v_F } } $. This formula is consistent with the\nassertions of Kane and Fisher ( C. L. Kane and Matthew P. A. Fisher\nPhys. Rev. Lett. {\\bf{68}}, 1220 (1992) \\cite{kane1992transport}), which show that at low temperatures $ k_B T \\rightarrow 0 $\nfor a fixed $ L $, the conductance vanishes as a power law in the temperature if the interaction between the fermions is repulsive ($ v_h > v_F > 0 $) and diverges as a power law if the interactions between the fermions are attractive ($ v_F > v_h > 0 $). Their result is applicable to spinless fermions without leads, $ G_{weak-link-nospin} \\sim (k_B T)^{ \\frac{2}{K} - 2 } $; to compare with the result of the present work, this exponent has to be halved: $ G_{weak-link-with-spin} \\sim (k_B T)^{ \\frac{1}{K_{ \\rho } } - 1 } $.\nThis exponent is the same as what we have derived, since $ \\frac{ (v_h + v_F)^2-4v^2_F }{ 4 v_h v_F } \\approx \\frac{1}{K_{ \\rho } } - 1 $ so long as $ v_h \\approx v_F $ (weak interactions).\n In general, the claim of the present work is that the temperature dependence of the tunneling d.c. conductance of a wire with no leads in the presence of barriers and wells and mutual interaction between particles is,\n\[\nG \\sim (k_B T)^{ \\eta} ;\\mbox{ }\\mbox{ } \\mbox{ } \\eta = 4X - 2 Q\n\]\n\nWhen $ \\eta > 0 $ the conductance vanishes at low temperatures as a power law - characteristic of a weak link. However, when $ \\eta < 0 $ the conductance diverges at low temperature as a power law - characteristic of a clean quantum wire. This result should not be taken too literally since it is based on the general validity of the surmise in Eq.(\\ref{TUNNEL}). 
This divergence should be taken as an indication of a saturation to a non-zero value.\nOf special interest is the situation $ \\eta = 0 $ where the conductance is independent of temperature. This crossover from a conductance that vanishes as a power law at low temperatures to one that diverges as a power law occurs at the reflection coefficient\n $ |R|^2 = |R_c|^2 \\equiv \\frac{v_h (v_h-v_F)}{3 v_F^2+v_h^2} $, which is valid only for repulsive interactions $ v_h > v_F $. For attractive interactions, $ \\eta < 0 $ for any $ |R|^2 $, which means\n the conductance always diverges as a power law at low temperatures. This means that attractive interactions heal the chain for all reflection coefficients, including in the extreme weak link case.\n On the other hand, for repulsive interactions, for $ |R| > |R_c| $, $ \\eta > 0 $ and the chain is broken (conductance vanishes) at low temperatures. For $ |R| < |R_c| $, $ \\eta < 0 $ and even though the interactions are repulsive the chain is healed (conductance diverges).\n\nNote that this section that calculates the conductance is based on a serendipitous surmise, equation (\\ref{TUNNEL}), which equates the tunneling conductance to a certain integral over the one-particle Green function. In hindsight, this surmise works only for temperatures small compared to the bandwidth and for repulsive interactions. Strictly speaking, we have to apply a bias and properly calculate the current flowing in a system with bias, impurity, finite-bandwidth interactions and finite temperature. Not surprisingly, this is an ambitious project that will lead to a proper formula for the current flowing as a function of the bias and all the other parameters. We expect to recover the RG formulas of Matveev, Yue and Glazman in the limit of weak interactions for a general bandwidth and both attractive and repulsive interactions (not infinite bandwidth repulsive interactions as we have done in the present manuscript). 
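The crossover structure just described is easy to confirm numerically (a sketch, not part of the paper; the sample velocities are arbitrary):

```python
def eta(vh, vF, R2):
    """Exponent eta = 4X - 2Q of G ~ (k_B T)^eta, as defined above."""
    Q = (vh - vF)**2 / (8 * vh * vF)
    X = R2 * (vh - vF) * (vh + vF) / (8 * vh * (vh - R2 * (vh - vF)))
    return 4 * X - 2 * Q

# Repulsive case: eta changes sign exactly at |R_c|^2 = vh(vh-vF)/(3 vF^2 + vh^2).
vh, vF = 1.5, 1.0
Rc2 = vh * (vh - vF) / (3 * vF**2 + vh**2)
assert abs(eta(vh, vF, Rc2)) < 1e-12
assert eta(vh, vF, 0.9 * Rc2) < 0 < eta(vh, vF, 1.1 * Rc2)
# Attractive case (vh < vF): eta < 0 for every |R|^2, so the chain is healed.
assert all(eta(0.8, 1.0, r / 10) < 0 for r in range(11))
```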
The main purpose of including this section is just to support the main result namely the Green function of the system. For this the derivation of Eq.(8) of Matveev, Yue and Glazman as we have been successful in doing in the main text is already sufficient.\n\n\n\n\\section*{Funding}\nA part of this work was done with financial support from Department of Science and Technology, Govt. of India DST\/SERC: SR\/S2\/CMP\/46 2009.\\\\\n\n\\bibliographystyle{apsrev4-1}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{\\label{sec_intro} Introduction}\n\\label{sec:introduction}\n\nThe study of time-dependent solutions of the one-dimensional\nSchr\\\"{o}dinger equation is a frequent topic in many\nundergraduate textbooks on quantum mechanics. The problem of a Gaussian\nor minimum-uncertainty wavepacket solution for the case of a free particle\n(defined more specifically below) is the most typical example cited, often \nbeing worked out in detail, or at least explored in problems \\cite{texts}. \nThe emphasis is often on the time-dependent position spread for such \nsolutions, typically written in the forms\n\\begin{equation}\n(\\Delta x_t)^2 = \n(\\Delta x_0)^2\\left(1+\\left(\\frac{t}{t_0}\\right)^2\\right)\n = (\\Delta x_0)^2 + \\frac{(\\Delta p_0)^2 t^2}{m^2}\n\\label{not_general_case}\n\\end{equation}\nwhere the spreading time or coherence time can be defined by $t_0 \n\\equiv m\\Delta x_0\/\\Delta p_0$. 
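The quadratic spreading law in Eqn.~(\ref{not_general_case}) is straightforward to confirm numerically: a free Gaussian packet can be evolved exactly by a single phase multiplication in momentum space. The sketch below (not from the paper; grid sizes and packet parameters are arbitrary choices, with $\hbar=m=1$) also checks Ehrenfest's theorem for the mean position:

```python
import numpy as np

hbar = m = 1.0
N, Lbox = 4096, 200.0
x = np.linspace(-Lbox / 2, Lbox / 2, N, endpoint=False)
p = 2 * np.pi * hbar * np.fft.fftfreq(N, d=Lbox / N)

dx0, p0 = 1.0, 2.0                       # initial width and mean momentum
psi0 = (2 * np.pi * dx0**2)**-0.25 * np.exp(-x**2 / (4 * dx0**2) + 1j * p0 * x / hbar)

def moments(t):
    """Mean and spread of |psi(x,t)|^2 under free evolution."""
    psi = np.fft.ifft(np.fft.fft(psi0) * np.exp(-1j * p**2 * t / (2 * m * hbar)))
    prob = np.abs(psi)**2
    prob /= prob.sum()
    mean = (x * prob).sum()
    return mean, np.sqrt(((x - mean)**2 * prob).sum())

dp0 = hbar / (2 * dx0)                   # minimum-uncertainty packet
for t in (0.0, 1.0, 3.0):
    mean, dx = moments(t)
    assert abs(mean - p0 * t / m) < 1e-6                  # <x>_t = <p>_0 t/m
    assert abs(dx - np.sqrt(dx0**2 + (dp0 * t / m)**2)) < 1e-6
```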
Textbooks rightly point out the essentially\nclassical nature of much of this result, explained by the fact that \nthe higher momentum components of the wave packet outpace the slower ones, \ngiving a position-spread which eventually increases linearly with time as \n$\\Delta x_t \\approx \\Delta v_0 t$, where $\\Delta v_0$ is identified with \n$\\Delta p_0\/m$.\n\nThe form of the expression for $\\Delta x_t$ in Eqn.~(\\ref{not_general_case})\nis a special case of the most general possible form of the time-dependent\nspatial width of a one-dimensional wave packet solution of the \nfree-particle Schr\\\"{o}dinger equation which is well-known in the pedagogical\nliterature \\cite{baird} - \\cite{andrews}, but seemingly found in many fewer\ntextbooks \\cite{merzbacher}. This general case can be written\nin the form\n\\begin{equation}\n(\\Delta x_t)^2 = \n(\\Delta x_0)^2 +\n\\left\\langle \n(\\hat{x}-\\langle \\hat{x} \\rangle_0)\n(\\hat{p}-\\langle \\hat{p} \\rangle_0) \n+ \n(\\hat{p}-\\langle \\hat{p} \\rangle_0)\n(\\hat{x}-\\langle \\hat{x} \\rangle_0)\n\\right\\rangle_0 \\frac{t}{m}\n+ \\frac{(\\Delta p_0^2) t^2}{m^2}\n\\label{general_case}\n\\end{equation}\nwhere the coefficient of the term linear in $t$ measures a non-trivial \ncorrelation between the momentum- and position-dependence of the initial \nwave packet. 
\nWhile such correlations are initially not present in the standard Gaussian\nwave packet example routinely used in textbook analyses, which therefore\ngives rise to the simpler form in Eqn.~(\\ref{not_general_case}), \na non-vanishing $x-p$ correlation does develop for later times as has \nbeen discussed in at least\none well-known text \\cite{bohm} and several pedagogical articles \n\\cite{leblond}.\n\nFor wave packets which are constructed in such a way that large momentum \ncomponents ($p > \\langle \\hat{p} \\rangle_0$) are initially preferentially \nlocated in the `back' of the packet ($x < \\langle \\hat{x}\\rangle_0$), \nthe initial correlation can, in fact, be negative\nleading to time-dependent wave packets which initially shrink in size,\nwhile the long-time behavior of any 1D free particle wave packet is indeed \nalways dominated\nby the quadratic term in Eqn.~(\\ref{general_case}), consistent with standard\nsemi-classical arguments. (We stress that we will consider here only \nlocalized wave packets which are square-integrable, for which the evaluation \nof $\\Delta x_t$ and $\\Delta p_t$ is possible, and not pure plane wave states \nnor the special non-spreading, free-particle solutions discovered by Berry \nand Balazs \\cite{berry}.)\n\nFor the standard Gaussian or minimum uncertainty wave packet used in most \ntextbook examples, and in fact for any initial wave packet of the form \n$\\psi(x,0) = R(x)\\exp(ip_0(x-x_0)\/\\hbar)$ \nwhere $R(x)$ is a real function, this initial \ncorrelation vanishes and the more familiar special case of $\\Delta x_t$\nin Eqn.~(\\ref{not_general_case}) results, leading many students to believe\nthat it is the most general result possible. 
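To see the general form in action, consider a `chirped' Gaussian, $\psi(x,0)\propto e^{-x^2/4\sigma^2}e^{-ibx^2}$ with $b>0$, for which a standard Gaussian-integral computation (stated here without derivation) gives $\langle \hat{x}\hat{p}+\hat{p}\hat{x}\rangle_0 = -4\hbar b\sigma^2 < 0$ and $(\Delta p_0)^2 = \hbar^2/4\sigma^2 + 4\hbar^2b^2\sigma^2$. The numerical sketch below (arbitrary parameters, $\hbar=m=1$; not from the paper) confirms that the width follows the general formula and initially shrinks:

```python
import numpy as np

hbar = m = 1.0
N, Lbox = 8192, 400.0
x = np.linspace(-Lbox / 2, Lbox / 2, N, endpoint=False)
p = 2 * np.pi * hbar * np.fft.fftfreq(N, d=Lbox / N)

sigma, b = 1.0, 0.5                       # width and chirp strength
psi0 = (2 * np.pi * sigma**2)**-0.25 * np.exp(-x**2 / (4 * sigma**2) - 1j * b * x**2)

def width(t):
    """Spread of |psi(x,t)|^2 under exact free evolution in momentum space."""
    psi = np.fft.ifft(np.fft.fft(psi0) * np.exp(-1j * p**2 * t / (2 * m * hbar)))
    prob = np.abs(psi)**2
    prob /= prob.sum()
    mean = (x * prob).sum()
    return np.sqrt(((x - mean)**2 * prob).sum())

corr = -4 * hbar * b * sigma**2           # <xp + px>_0, negative for b > 0
dp2 = hbar**2 / (4 * sigma**2) + 4 * hbar**2 * b**2 * sigma**2
for t in (0.0, 0.4, 0.8, 2.0):
    expected = np.sqrt(sigma**2 + corr * t / m + dp2 * t**2 / m**2)
    assert abs(width(t) - expected) < 1e-6
assert width(0.8) < width(0.0)            # the packet initially shrinks
```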
\nIt is, however, very straightforward to construct initial quantum states \nconsisting of simple Gaussian wave functions, such as squeezed states or \nlinear combination of Gaussians, which have the required initial \nposition-momentum correlations `built in', and which therefore exhibit \nthe general form \nin Eqn.~(\\ref{general_case}), including examples where the position-space\nwave packet can initially shrink in width. Since these examples can be \nanalyzed with little or no more mathematical difficulty than the standard \nminimum-uncertainty cases commonly considered in textbooks \\cite{texts}, \nwe will focus on providing two such examples below. We will, however, also \nemphasize the utility of different ways of visualizing the time-dependent \nposition-momentum correlations suggested by the form in \nEqn.~(\\ref{general_case}). \n\n\nThe derivation of Eqn.~(\\ref{general_case}) has been most often\ndiscussed \\cite{baird}, \\cite{styer} using the evaluation of the \ntime-dependence of expectation values described by \n\\begin{equation}\n\\frac{d}{dt} \\langle \\hat{A} \\rangle\n= \\frac{i}{\\hbar} \\left\\langle [\\hat{H},\\hat{A}] \\right\\rangle\n\\label{time-development}\n\\end{equation}\nusing the free particle Hamiltonian, $\\hat{H} = \\hat{p}^2\/2m$,\nor related matrix methods \\cite{nicola}; since we are interested only\nin expectation values of operators ($\\hat{A} = \\hat{x}$ or $\\hat{p}$) \nwhich are themselves\nindependent of time, there is no additional $\\langle d\\hat{A}\/dt\\rangle$ term\nin Eqn.~(\\ref{time-development}). 
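For completeness, the two commutators needed in this approach follow from $[\hat{x},\hat{p}]=i\hbar$ alone (a short derivation, consistent with the general expressions above):

```latex
\begin{equation*}
\begin{aligned}
[\hat{H},\hat{x}] &= \frac{1}{2m}\left(\hat{p}\,[\hat{p},\hat{x}]+[\hat{p},\hat{x}]\,\hat{p}\right)
= -\frac{i\hbar}{m}\,\hat{p}
&\quad\Longrightarrow\quad
\frac{d}{dt}\langle \hat{x}\rangle_t &= \frac{1}{m}\langle \hat{p}\rangle_t \\
[\hat{H},\hat{x}^2] &= \frac{1}{2m}\left(\hat{p}\,[\hat{p},\hat{x}^2]+[\hat{p},\hat{x}^2]\,\hat{p}\right)
= -\frac{i\hbar}{m}\left(\hat{x}\hat{p}+\hat{p}\hat{x}\right)
&\quad\Longrightarrow\quad
\frac{d}{dt}\langle \hat{x}^2\rangle_t &= \frac{1}{m}\langle \hat{x}\hat{p}+\hat{p}\hat{x}\rangle_t
\end{aligned}
\end{equation*}
```

Differentiating once more (using $[\hat{H},\hat{x}\hat{p}+\hat{p}\hat{x}] = -\frac{2i\hbar}{m}\hat{p}^2$) reproduces the constant $t^2$ coefficient $(\Delta p_0)^2/m^2$.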
In the next section, we\nderive the necessary time-dependent expectation values of powers of\nposition and momentum \nin a complementary way, using very general momentum-space ideas.\n(Identical methods can then also be used to evaluate the general form \nof $\\Delta x_t$ for the related case of uniform acceleration, which we\ndiscuss in Appendix~\\ref{sec:appendix}.)\nThen in Sec.~\\ref{sec:standard} we briefly review the special case of the\nminimum-uncertainty Gaussian wave packet (to establish notation) focusing\non the introduction of useful tools to help visualize possible \ncorrelations between position and momentum in free particle wave\npackets, especially the direct visualization of the real\/imaginary\nparts of $\\psi(x,t)$, the time-dependent spatial distribution of kinetic \nenergy, as well as the Wigner quasi-probability distribution. \nThen, in Sec.~\\ref{sec:correlated}, we exhibit two cases of \ncorrelated wave packets with the general form of $\\Delta x_t$\nin Eqn.~(\\ref{general_case}), which are\nsimple extensions of these standard results. 
A similar\nexample, involving squeezed states, has been discussed in \nRef.~\\cite{ford}, \nbut we will focus here on understanding the detailed\nposition-momentum correlations which give rise to the term linear in \n$t$ in Eqn.~(\\ref{general_case}), especially using the techniques\noutlined in Sec.~\\ref{sec:standard} for their visualization.\nFinally, we make some concluding remarks as well as noting\nin an Appendix that very similar results (both for the general form of \nthe time-dependent $\\Delta x_t$ and for the exemplary cases studied\nhere) can be obtained for the Schr\\\"{o}dinger equation corresponding to\nthe case of constant acceleration.\n\n\n\n\n\\section{Time-dependent $\\Delta x_t$ using momentum-space wavefunctions}\n\\label{sec:momentum_space}\n\nWhile the general result for the free-particle $\\Delta x_t$ is most \noften obtained using formal methods involving the time-dependence of \nexpectation values as in Eqn.~(\\ref{time-development}), \none can also evaluate time-dependent powers of position and momentum \nfor a free particle in terms of the\ninitial wave packet quite generally in terms of the momentum-space\ndescription of the quantum state, namely $\\phi(p,t)$, obtaining the\nsame results, in a manner which is nicely complementary to more standard \nanalyses. Depending on the ordering of topics in a given quantum mechanics \ncourse syllabus, this discussion might well be applicable and understandable \nearlier in the curriculum than the more formal method. 
\n\nIn this approach, the most general momentum-space wave function \nwhich solves the free-particle time-dependent Schr\\\"{o}dinger equation \n\\begin{equation}\n\\frac{p^2}{2m}\\phi(p,t) = \\hat{H} \\phi(p,t) = \\hat{E} \\phi(p,t)\n= i\\hbar \\frac{\\partial}{\\partial t} \\phi(p,t)\n\\, , \n\\end{equation}\ncan be written in the form \n\\begin{equation}\n\\phi(p,t) = \\phi_{0}(p)\\, e^{-ip^2t\/2m\\hbar}\n\\end{equation}\nwith $\\phi(p,0) = \\phi_{0}(p)$ being the initial state wavefunction.\nThe $t$-dependent expectation values for powers of momentum are trivial \nsince\n\\begin{eqnarray}\n\\langle \\hat{p} \\rangle_t & = & \\int_{-\\infty}^{+\\infty}\n\\, p \\, |\\phi_{0}(p)|^2\\,dp \\equiv \\langle \\hat{p} \\rangle_0 \n\\label{p_1} \\\\\n\\langle \\hat{p}^2 \\rangle_t & = & \\int_{-\\infty}^{+\\infty}\n\\, p^2 \\, |\\phi_{0}(p)|^2\\,dp \\equiv \\langle \\hat{p}^2 \\rangle_0 \n\\label{p_2}\n\\end{eqnarray}\nso that \n\\begin{equation}\n(\\Delta p_t)^2 = \\langle \\hat{p}^2\\rangle_t - \\langle \\hat{p}\\rangle_t^2\n= \n\\langle \\hat{p}^2\\rangle_0 - \\langle \\hat{p}\\rangle_0^2\n= \n(\\Delta p_0)^2\n\\end{equation}\nas expected for a free-particle solution for which \n$|\\phi(p,t)|^2 = |\\phi_{0}(p)|^2$ is independent of time. 
\n\nIn this representation, the position operator is given by the \nnon-trivial form $\\hat{x} = i\\hbar (\\partial\/\\partial p)$, and \nthe time-dependent \nexpectation value of position can be written as\n\\begin{eqnarray}\n\\langle \\hat{x} \\rangle_t & = &\n\\int_{-\\infty}^{+\\infty}\n[\\phi(p,t)]^{*}\\, \\hat{x}\\, [\\phi(p,t)]\\,dp \\nonumber \\\\\n& = & \n\\int_{-\\infty}^{+\\infty}\n\\left[\\phi_{0}^{*}(p)\\,e^{+ip^2t\/2m\\hbar}\\right]\n\\,\n\\left(i\\hbar \\frac{\\partial}{\\partial p}\\right)\n\\left[\\phi_{0}(p)\\,e^{-ip^2t\/2m\\hbar}\\right]\\,\ndp \\nonumber \\\\\n& = & \n\\int_{-\\infty}^{+\\infty}\n[\\phi_{0}^{*}(p)] \\left(i\\hbar \\frac{\\partial }{\\partial p}\\right)\n[\\phi_{0}(p)]\\,dp\n+ \\frac{t}{m} \\int_{-\\infty}^{+\\infty} \\, p\\, |\\phi_{0}(p)|^2\\,dp\n\\nonumber \\\\\n& = & \\langle \\hat{x} \\rangle_0 + \\frac{t}{m} \\langle \\hat{p}\\rangle_0\n\\label{x_1}\n\\end{eqnarray}\nwhich is consistent with Ehrenfest's theorem for the essentially\nclassical behavior of $\\langle \\hat{x}\\rangle_t$. 
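The result in Eqn.~(\ref{x_1}) can likewise be checked numerically by applying $\hat{x} = i\hbar\,(\partial/\partial p)$ to $\phi(p,t)$ with a finite-difference derivative. In the minimal Python sketch below, the parameter values and the quadrature/differencing scheme are illustrative assumptions on our part.

```python
import math, cmath

# Illustrative parameters (assumed): hbar = m = alpha = 1, p0 = 0.5, x0 = 2
hbar, m, alpha, p0, x0 = 1.0, 1.0, 1.0, 0.5, 2.0

def phi(p, t):
    # phi0(p) = Gaussian envelope * exp(-i p x0 / hbar), then free-particle evolution
    g = math.sqrt(alpha / math.sqrt(math.pi)) * math.exp(-alpha**2 * (p - p0)**2 / 2)
    return g * cmath.exp(-1j * p * x0 / hbar - 1j * p**2 * t / (2 * m * hbar))

def mean_x(t, pmin=-10.0, pmax=10.0, N=4000, h=1e-5):
    # <x>_t = Re Int phi*(p,t) (i hbar d/dp) phi(p,t) dp, central-difference derivative
    dp = (pmax - pmin) / N
    total = 0.0
    for k in range(N):
        p = pmin + (k + 0.5) * dp
        dphi = (phi(p + h, t) - phi(p - h, t)) / (2 * h)
        total += (phi(p, t).conjugate() * (1j * hbar) * dphi).real * dp
    return total

for t in (0.0, 2.0):
    print(round(mean_x(t), 4))  # x0 + p0*t/m: 2.0, then 3.0
```

The printout tracks the classical trajectory $\langle \hat{x}\rangle_t = x_0 + p_0 t/m$, as Ehrenfest's theorem demands.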
\nThe same formalism can\nbe used to evaluate $\\langle \\hat{x}^2\\rangle_t$ and gives\n\\begin{equation}\n\\langle \\hat{x}^2\\rangle_t\n = \n\\langle \\hat{x}^2 \\rangle_0\n+ \\frac{t}{m} \\langle \\hat{x} \\hat{p} + \\hat{p} \\hat{x}\\rangle_0\n+ \\langle \\hat{p}^2\\rangle_0 \\frac{t^2}{m^2}\n\\label{x_2}\n\\end{equation}\nwhere one can use the general representation-independent commutation\nrelation $[\\hat{x},\\hat{p}] = i\\hbar$ to simplify the answer to this form.\nThe symmetric combination of position\nand momentum operators, written here as $(\\hat{x}\\hat{p}+\\hat{p}\\hat{x})$, \nwhich is obviously Hermitian, guarantees that this expression is manifestly \nreal.\n(Discussions in textbooks on symmetrizing products of non-commuting operators\nabound, but such results are seldom put into the context of being useful\nor natural in specific calculations, as is apparent in their use here.)\n\nCombining Eqns.~(\\ref{x_1}) and (\\ref{x_2}) then gives the most general\nform for the time-dependent spread in position to be\n\\begin{eqnarray}\n(\\Delta x_t)^2 & = & \\langle \\hat{x}^2\\rangle_t \n- \\langle \\hat{x}\\rangle_t^2 \\nonumber \\\\\n& = & \n\\left(\\langle \\hat{x}^2 \\rangle_0\n+ \\frac{t}{m} \\langle \\hat{x} \\hat{p} + \\hat{p} \\hat{x}\\rangle_0\n+ \\langle \\hat{p}^2\\rangle_0 \\frac{t^2}{m^2} \\right)\n- \\left(\\langle \\hat{x} \\rangle_0 + \\frac{t}{m} \\langle \\hat{p}\\rangle_0\\right)^2\n\\nonumber \\\\\n& = & \n(\\Delta x_0)^2 +\n\\left(\n\\langle \\hat{x} \\hat{p} + \\hat{p} \\hat{x} \\rangle_0\n- 2 \n\\langle \\hat{x} \\rangle_0 \\langle \\hat{p} \\rangle_0\n\\right)\n\\frac{t}{m}\n+ \\frac{(\\Delta p_0)^2 t^2}{m^2} \n\\nonumber \\\\\n& = & \n(\\Delta x_0)^2 +\n\\left\\langle \n(\\hat{x} - \\langle \\hat{x} \\rangle_0)\n(\\hat{p} - \\langle \\hat{p} \\rangle_0) \n+ \n(\\hat{p} - \\langle \\hat{p} \\rangle_0)\n(\\hat{x} - \\langle \\hat{x} \\rangle_0)\n\\right\\rangle_0 \n\\frac{t}{m}\n+ \\frac{(\\Delta p_0)^2 t^2}{m^2} \n\\, . 
\n\\end{eqnarray}\nWe have rewritten \nthe term linear in $t$ in a form which stresses that it is a correlation \nbetween $x$ and $p$, similar in form to related classical quantities \nsuch as the covariance in probability and statistics. Recall that for\ntwo classical quantities, $A$ and $B$, described by a joint probability \ndistribution, the covariance is defined as\n\\begin{equation}\ncov(A,B) = \n\\left \\langle\n\\left( A - \\langle A \\rangle \\right)\n\\left( B - \\langle B \\rangle \\right)\n\\right \\rangle\n= \\langle AB \\rangle - \\langle A \\rangle \\langle B \\rangle\n\\,. \n\\end{equation}\nAs we will see in the next section, there is no initial correlation for \nthe familiar minimum-uncertainty Gaussian wave packets. However, for simple \nvariations on the standard example, as in Sec.~\\ref{sec:correlated}, we will\nfind non-vanishing correlations, which we can visualize with the methods in\nSec.~\\ref{sec:standard}.\n\n\n\nWe stress that the notion of a time-dependent correlation between $x$ and \n$p$ at arbitrary times ($t>0$) can be easily generalized from these results, \nand we can define a generalized covariance for these two variables \n\\cite{merzbacher} -- \\cite{leblond} \n(or any two operators, $\\hat{A}, \\hat{B}$) as\n\\begin{equation}\ncov(\\hat{x},\\hat{p};t) \\equiv \\frac{1}{2} \n\\left\\langle \n(\\hat{x} - \\langle \\hat{x} \\rangle_t)\n(\\hat{p} - \\langle \\hat{p} \\rangle_t) \n+ \n(\\hat{p} - \\langle \\hat{p} \\rangle_t)\n(\\hat{x} - \\langle \\hat{x} \\rangle_t)\n\\right\\rangle_t\n\\label{covariance}\n\\end{equation}\nwhere the additional factor of $1\/2$ accounts for the necessarily\nsymmetric combination which appears, compared to the classical\ndefinition. 
One can then speak\nof a time-dependent correlation coefficient defined by\n\\begin{equation}\n\\rho(x,p;t) \\equiv\n\\frac{cov(x,p;t)}{\\Delta x_t\\cdot \\Delta p_t}\n\\label{correlation_coefficient}\n\\end{equation}\nin analogy with related quantities from statistics. This correlation\ncan be shown \\cite{leblond} to satisfy the inequality\n\\begin{equation}\n[\\rho(x,p;t)]^2 \\leq 1 - \\left(\\frac{|\\langle [\\hat{x},\\hat{p}]\\rangle|}{2\\Delta x_t\n\\cdot \\Delta p_t}\\right)^2\n= 1- \\left(\\frac{\\hbar}{2\\Delta x_t\\cdot \\Delta p_t}\\right)^2\n\\end{equation}\nwhich vanishes for the standard minimum-uncertainty Gaussian\nat $t=0$, but which is non-zero for later times, as we will see below.\n\n\n\n\n\n\n\n\n\n\\section{Standard minimum-uncertainty Gaussian wave packets}\n\\label{sec:standard}\n\n\nThe standard initial minimum-uncertainty Gaussian wave packet, which gives \nthe familiar time-dependent spread in Eqn.~(\\ref{not_general_case}), \ncan be written in generality as \n\\begin{equation}\n\\phi_0(p) = \\phi_{(G)}(p,0) = \n\\sqrt{\\frac{\\alpha}{\\sqrt{\\pi}}}\n\\; e^{-\\alpha^2(p-p_0)^2\/2}\n\\; e^{-ipx_0\/\\hbar}\n\\label{initial_gaussian}\n\\end{equation}\nwhere $x_0,p_0$ are used to characterize the arbitrary initial central \nposition and momentum values respectively. 
This form gives \n\\begin{equation}\n\\langle \\hat{p} \\rangle_{t} = p_0\n\\, , \n\\qquad\n\\quad\n\\langle \\hat{p}^2 \\rangle_{t} = p_0^2 + \\frac{1}{2\\alpha^2}\n\\, ,\n\\qquad\n\\mbox{and}\n\\qquad\n\\Delta p_t = \\Delta p_0 = \\frac{1}{\\alpha \\sqrt{2}}\n\\label{momentum_results}\n\\end{equation}\nwhich are, of course, consistent with the general results in \nEqns.~(\\ref{p_1}) and (\\ref{p_2}).\n\n\nThe explicit form of the position-space wave function is given \nby Fourier transform as \n\\begin{equation}\n\\psi_{(G)}(x,t) = \\frac{1}{\\sqrt{2\\pi\\hbar}} \\sqrt{\\frac{\\alpha}{\\sqrt{\\pi}}}\n\\int_{-\\infty}^{+\\infty}\\, e^{ip(x-x_0)\/\\hbar}\\,\ne^{-\\alpha^2 (p-p_0)^2\/2}\\,\ne^{-ip^2t\/2m\\hbar}\\,dp\n\\end{equation}\nwhich can be evaluated in closed form (using the change of variables\n$q \\equiv p-p_0$ and standard integrals) to obtain\n\\begin{equation}\n\\psi_{(G)}(x,t) = \\frac{1}{\\sqrt{\\sqrt{\\pi} \\alpha \\hbar (1+it\/t_0)}}\n\\,\ne^{ip_0(x-x_0)\/\\hbar}\n\\, e^{-ip_0^2t\/2m\\hbar}\n\\,\ne^{-(x-x_0-p_{0}t\/m)^2\/2(\\alpha \\hbar)^2(1+it\/t_0)}\n\\label{free_particle_position_solution}\n\\end{equation}\nwhere $t_0 \\equiv m\\hbar \\alpha^2$ is the spreading time. \nThis then gives \n\\begin{equation}\n|\\psi_{(G)}(x,t)|^2 = \\frac{1}{\\sqrt{\\pi}\\beta_t}\n\\, e^{- [x-\\overline{x}(t)]^2\/\\beta_t^2}\n\\end{equation}\nwhere \n\\begin{equation}\n\\overline{x}(t) \\equiv x_0 + p_0t\/m\n\\qquad\n\\mbox{and}\n\\qquad\n\\beta_t \\equiv \\beta \\sqrt{1+(t\/t_0)^2}\n\\qquad\n\\mbox{with}\n\\qquad\n\\beta \\equiv \\alpha \\hbar\n\\,. 
\n\\end{equation}\nThis gives\n\\begin{equation}\n\\langle \\hat{x} \\rangle_t = \\overline{x}(t)\n\\quad\n\\qquad\n\\mbox{and}\n\\qquad\n\\quad\n\\langle \\hat{x}^2 \\rangle_t = [\\overline{x}(t)]^2 + \\frac{\\beta_t^2}{2},\n\\end{equation}\nso that\n\\begin{equation}\n(\\Delta x_t)^2 = \n\\frac{\\beta_t^2}{2}\n= \n\\frac{\\beta^2}{2}\n\\left(1+\\left(\\frac{t}{t_0}\\right)^2\\right)\n= (\\Delta x_0)^2 + (\\Delta p_0 t\/m)^2\n\\label{gaussian_result}\n\\end{equation}\nwhich is the familiar textbook result, and for $t=0$ has the minimum\nuncertainty product $\\Delta x_0 \\cdot \\Delta p_0 = \\hbar\/2$. \n\nIt is easy to confirm by direct calculation that there is no initial \n($t=0$) $x-p$ correlation ($cov(x,p;0)=0$) for this wavefunction, \nconsistent with\nthe lack of a term linear in $t$ in Eqn.~(\\ref{gaussian_result}). We \nemphasize that such correlations do indeed develop as the wavepacket \nevolves in time, which can be seen by examining the form of either the real or\nimaginary parts of $\\psi_{(G)}(x,t)$ as shown in Fig.~1 (where we specify\nthe model parameters used in that plot in the accompanying figure caption). \nWe note that for times $t> 0$, the `front end' of the wave packet shown \nthere is clearly more `wiggly' than the `back end' (simply count the nodes\non either side of $\\langle x \\rangle_t$.)\nThe time-dependent correlation function or covariance defined \nin Eqn.~(\\ref{covariance}) and correlation coefficient\nfrom Eqn.~(\\ref{correlation_coefficient}) are easily calculated \nfor this specific case to be\n\\begin{equation}\ncov(x,p;t) = \\frac{\\hbar}{2} \\left(\\frac{t}{t_0}\\right)\n\\qquad\n\\quad\n\\mbox{and}\n\\quad\n\\qquad\n\\rho(x,p;t) = \\frac{t\/t_0}{\\sqrt{1+(t\/t_0)^2}}\n\\label{standard_gaussian_correlations}\n\\end{equation}\nwhich clearly expresses the increasingly positive correlation of\nfast (slow) momentum components being preferentially in the leading \n(trailing) edge of the wave packet. 
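The correlations in Eqn.~(\ref{standard_gaussian_correlations}) can also be reproduced numerically from $\phi(p,t)$ alone, using the momentum-space operators of Sec.~\ref{sec:momentum_space} with finite-difference derivatives. The parameter choices in this Python sketch ($\hbar = m = \alpha = 1$, so $t_0 = 1$, with $p_0 = 1/2$, $x_0 = 0$) are illustrative assumptions.

```python
import math, cmath

hbar, m, alpha, p0, x0 = 1.0, 1.0, 1.0, 0.5, 0.0   # assumed values; t0 = m*hbar*alpha^2 = 1
t0 = m * hbar * alpha**2

def phi(p, t):
    # standard Gaussian phi0(p) with free-particle evolution
    return math.sqrt(alpha / math.sqrt(math.pi)) * cmath.exp(
        -alpha**2 * (p - p0)**2 / 2 - 1j * p * x0 / hbar - 1j * p**2 * t / (2 * m * hbar))

def expect(f, t, pmin=-10.0, pmax=11.0, N=8000):
    # Re Int phi*(p,t) f(p,t) dp by the midpoint rule
    dp = (pmax - pmin) / N
    return sum((phi(pmin + (k + 0.5) * dp, t).conjugate()
                * f(pmin + (k + 0.5) * dp, t)).real for k in range(N)) * dp

h = 1e-4   # finite-difference step for x = i hbar d/dp
x_op  = lambda p, t: 1j * hbar * (phi(p + h, t) - phi(p - h, t)) / (2 * h)
xp_op = lambda p, t: 1j * hbar * ((p + h) * phi(p + h, t) - (p - h) * phi(p - h, t)) / (2 * h)
x2_op = lambda p, t: -hbar**2 * (phi(p + h, t) - 2 * phi(p, t) + phi(p - h, t)) / h**2

t = 2.0 * t0
mx = expect(x_op, t)
mp = expect(lambda p, t: p * phi(p, t), t)
dx = math.sqrt(expect(x2_op, t) - mx**2)
dpp = math.sqrt(expect(lambda p, t: p**2 * phi(p, t), t) - mp**2)
cov = expect(xp_op, t) - mx * mp      # symmetrized covariance: Re<x p> - <x><p>
print(round(cov, 3), round(cov / (dx * dpp), 3))  # (hbar/2)(t/t0) = 1.0 and 2/sqrt(5) ~ 0.894
```

Note that $\mathrm{Re}\,\langle \hat{x}\hat{p}\rangle_t$ equals the symmetrized combination of Eqn.~(\ref{covariance}) because $\langle \hat{p}\hat{x}\rangle_t = \langle \hat{x}\hat{p}\rangle_t^{*}$.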
We note that such correlations\nhave been discussed in Refs.~\\cite{bohm} and \\cite{leblond}.\n\n\n\nThis observation can also be described quantitatively by examining the \ndistribution of kinetic energy of such a free-particle Gaussian wavepacket \n\\cite{bassett}. \nIn this approach, the standard expression for the kinetic energy is \nrewritten using integration-by-parts in the form\n\\begin{equation}\n\\langle \\hat{T}\\rangle_{t}\n = \\frac{1}{2m}\\langle \\hat{p}^2\\rangle_{t}\n = -\\frac{\\hbar^2}{2m}\n\\int_{-\\infty}^{+\\infty} dx \\,\\psi^*(x,t) \\frac{\\partial ^2 \\psi(x,t)}{\\partial x^2} \n = \\frac{\\hbar^2}{2m}\\int_{-\\infty}^{+\\infty} dx \n\\left|\\frac{\\partial \\psi(x,t)}{\\partial x}\\right|^2 \n\\end{equation} \nwhich can be used to define a {\\it local kinetic energy density}, \n${\\cal T}(x,t)$, via\n\\begin{equation}\n{\\cal T}(x,t) \\equiv \n\\frac{\\hbar^2}{2m} \n\\left|\\frac{\\partial \\psi(x,t)}{\\partial x} \\right|^2\n\\qquad\n\\quad\n\\mbox{where}\n\\qquad\n\\quad\n\\langle \\hat{T} \\rangle_t = \\int_{-\\infty}^{+\\infty} {\\cal T}(x,t)\\,dx \n\\equiv T(t)\n\\, .\n\\label{kinetic_energy_distribution}\n\\end{equation}\nAs this notion is useful in systems other than for free particle states,\nwe allow for the possibility that the total kinetic energy varies with\ntime. \nSince this local density is clearly real and positive-definite, we can use it\nto visualize the distribution of kinetic energy (or `wiggliness') \nin any time-dependent wavefunction. We can then define similar quantities\nfor the kinetic energy in the `front' and\/or `back' halves of the wave\npacket, using $\\langle x\\rangle_t$ as the measuring point, via\n\\begin{equation}\nT^{(+)}(t) \\equiv \\int_{\\langle x \\rangle_t}^{+\\infty} {\\cal T}(x,t)\\,dx \n\\qquad\n\\quad\n\\mbox{and}\n\\quad\n\\qquad\nT^{(-)}(t) \\equiv \\int^{\\langle x \\rangle_t}_{-\\infty} {\\cal T}(x,t)\\,dx \n\\label{half_kinetic_energies}\n\\, . 
\n\\end{equation}\n\nFor the standard Gaussian wave packet in \nEqn.~(\\ref{free_particle_position_solution}), the local kinetic energy \ndensity is given by\n\\begin{equation}\n{\\cal T}_{(G)}(x,t) = \\frac{1}{2m}\n\\left( p_0^2 + \\left[\\frac{2[x-\\overline{x}(t)] p_0}{\\alpha^2\\hbar}\\right]\n\\left[\\frac{t\/t_0}{(1+t^2\/t_0^2)}\\right]\n+ \\frac{[x-\\overline{x}(t)]^2}{(\\alpha^2 \\hbar)^2 (1+t^2\/t_0^2)}\\right)\n|\\psi_{(G)}(x,t)|^2\n\\, . \n\\label{gaussian_case}\n\\end{equation}\nThe expectation value of the kinetic energy is correctly given by\n\\begin{equation}\nT_{(G)}(t) = \\int_{-\\infty}^{+\\infty}\\, {\\cal T}_{(G)}(x,t)\\,dx = \\frac{1}{2m} \n\\left(p_0^2 + \\frac{1}{2\\alpha^2}\\right)\n\\end{equation}\nand receives non-zero contributions from only the first and last terms in \nbrackets\nin Eqn.~(\\ref{gaussian_case}), since the term linear in \n$[x-\\overline{x}(t)]$ \nvanishes (when integrated over all space) for symmetry reasons. The\nindividual values of $T^{(\\pm)}_{(G)}(t)$ can also be calculated and \nare given by\n\\begin{equation}\nT^{(\\pm)}_{(G)}(t) \n= \\frac{1}{2m}\n\\left(\\frac{1}{2}\\right)\n\\left( \np_0^2 \n\\pm \n\\left(\\frac{2p_0}{\\alpha \\sqrt{\\pi}} \\right) \\frac{t\/t_0}{\\sqrt{1+t^2\/t_0^2}} \n+ \\frac{1}{2\\alpha^2}\n\\right)\n\\label{left_and_right_kinetic_energies}\n\\end{equation}\nwhich are individually positive definite. The time-dependent fractions\nof the total kinetic energy contained in the $(+)\/(-)$ (right\/left) halves \nof this standard wave packet are given by\n\\begin{equation}\nR^{(\\pm)}_{(G)}(t) \\equiv \n\\frac{T^{(\\pm)}_{(G)}(t)}{T^{(+)}_{(G)}(t) + T^{(-)}_{(G)}(t)}\n= \\frac{1}{2} \\pm \n \\left(\\frac{2}{\\sqrt{\\pi}}\\right)\n\\left( \\frac{(p_0\\alpha)}{(2(p_0\\alpha)^2+1)}\\right) \n\\frac{t\/t_0}{\\sqrt{1+t^2\/t_0^2}}\n\\label{define_r_function}\n\\,. 
\n\\end{equation}\nFor the model parameters used in Fig.~1, for $t=2t_0$ this corresponds \nto $R^{(+)}\/R^{(-)} = 56\\%\/44\\%$, consistent with the small, but obvious,\ndifference in the kinetic energy distribution seen by `node counting'.\n\n\n\nFinally, this growing correlation can be exhibited in yet another way, \nnamely through the Wigner quasi-probability distribution, defined by\n\\begin{eqnarray}\nP_{W}(x,p;t)\n & \\equiv &\n\\frac{1}{\\pi \\hbar}\n\\int_{-\\infty}^{+\\infty}\n\\psi^{*}(x+y,t)\\,\\psi(x-y,t)\\,e^{+2ipy\/\\hbar}\\,dy \\\\\n& = & \n\\frac{1}{\\pi \\hbar}\n\\int_{-\\infty}^{+\\infty}\n\\phi^*(p+q,t)\\, \\phi(p-q,t)\\, e^{-2ixq\/\\hbar}\\,dq\n\\label{wigner_function}\n\\, . \n\\end{eqnarray}\nThis distribution, first discussed by Wigner \\cite{wigner}, \nand reviewed extensively in the research \\cite{wigner_research}\nand pedagogical \\cite{wigner_pedagogical} literature (and even in the \ncontext of wave packet spreading \\cite{wigner_lee}),\nis as close as one can come to a quantum phase-space distribution,\nand while not directly measurable, can still be profitably used to \nillustrate any $x-p$ correlations. 
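The defining integral in Eqn.~(\ref{wigner_function}) is also easy to evaluate numerically (the momentum-space form is especially convenient for free-particle states) and to compare with the closed form for the standard Gaussian packet quoted in the next paragraph. All parameter values in this Python sketch are illustrative assumptions.

```python
import math, cmath

hbar, m, alpha, p0, x0 = 1.0, 1.0, 1.0, 0.5, 0.0   # assumed values
beta = alpha * hbar

def phi(p, t):
    # standard Gaussian packet in momentum space with free-particle evolution
    g = math.sqrt(alpha / math.sqrt(math.pi)) * math.exp(-alpha**2 * (p - p0)**2 / 2)
    return g * cmath.exp(-1j * p * x0 / hbar - 1j * p**2 * t / (2 * m * hbar))

def wigner(x, p, t, qmax=8.0, N=4000):
    # midpoint-rule estimate of (1/pi hbar) Int phi*(p+q,t) phi(p-q,t) exp(-2ixq/hbar) dq
    dq = 2 * qmax / N
    s = 0.0
    for k in range(N):
        q = -qmax + (k + 0.5) * dq
        s += (phi(p + q, t).conjugate() * phi(p - q, t) * cmath.exp(-2j * x * q / hbar)).real * dq
    return s / (math.pi * hbar)

def wigner_closed(x, p, t):
    # closed form for the standard Gaussian packet
    return math.exp(-(p - p0)**2 * alpha**2 - (x - x0 - p * t / m)**2 / beta**2) / (math.pi * hbar)

x, p, t = 0.7, 0.9, 2.0   # an arbitrary sample point in phase space
print(round(wigner(x, p, t), 4), round(wigner_closed(x, p, t), 4))  # the two values agree
```

The agreement at arbitrary $(x,p,t)$ sample points confirms the sheared-Gaussian form $P_{W}(x,p;t) = P_{W}(x-pt/m,p;0)$.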
\nFor the standard minimum-uncertainty Gaussian wavefunctions defined by \nEqns.~(\\ref{initial_gaussian}) or (\\ref{free_particle_position_solution}), \none finds that \\cite{kim_noz}\n\\begin{equation}\nP_{W}(x,p;t) = \\frac{1}{\\hbar \\pi}\n\\, e^{-(p-p_0)^2 \\alpha^2}\n\\, e^{-(x-x_0-pt\/m)^2\/\\beta^2}\n= \nP_{W}(x-pt\/m,p;0)\n\\,.\n\\label{explicit_wigner_function}\n\\end{equation}\nContour plots of $P_{W}(x,p;t)$ corresponding to the time-dependent\nstandard Gaussian wave packet for two different times ($t=0$ and\n$t=2t_0$) are also shown at the bottom of Fig.~1, where the \nelliptical contours with principal axes parallel to the $x,p$ \naxes for the $t=0$ case are indicative of the vanishing initial correlation, \nwhile the slanted contours at later times are consistent with the correlations\ndeveloping as described by Eqn.~(\\ref{standard_gaussian_correlations}).\n(We note that Bohm \\cite{bohm} uses a similar illustration, but discusses it \nonly in the context of classical phase space theory and Liouville's theorem.)\nThe visualization tools used in Fig.~1 (explicit plots of \n$Re[\\psi(x,t)]$, and the Wigner function) and the distribution of\nkinetic energy as encoded in Eqns.~(\\ref{left_and_right_kinetic_energies})\nor (\\ref{define_r_function}), \ncan then directly be used to examine the correlated wave packets we \ndiscuss in the next section.\n\nAs a final reminder about the quantum mechanical ``engineering''\nof model one-dimensional wavepackets, we recall that since an initial\n$\\phi_{0}(p)$ is related to the time-dependent $\\psi(x,t)$ for\nfree-particle solutions via\n\\begin{equation}\n\\psi(x,t) = \\frac{1}{\\sqrt{2\\pi\\hbar}}\\,\n\\int_{-\\infty}^{+\\infty}\\,\n\\left[\\phi_{0}(p)\\,e^{-ip^2t\/2m\\hbar}\\right]\\,e^{ipx\/\\hbar}\\,dp\n\\end{equation}\nthen the simple modification \n\\begin{equation}\n\\tilde{\\phi}_{0}(p) = \\phi_{0}(p)\\, e^{-ipa\/\\hbar}\\,e^{ip^2\\tau\/2m\\hbar}\n\\label{change_phi}\n\\end{equation}\nleads to the 
related position-space wavefunction satisfying\n\\begin{equation}\n\\tilde{\\psi}(x,t) = \\frac{1}{\\sqrt{2\\pi\\hbar}}\\,\n\\int_{-\\infty}^{+\\infty}\\,\n\\left[\\left(\\phi_{0}(p)\\, e^{-ipa\/\\hbar}\\,e^{ip^2\\tau\/2m\\hbar}\\right)\\,e^{-ip^2t\/2m\\hbar}\\right]\\,e^{ipx\/\\hbar}\\,dp \n = \\psi(x-a,t-\\tau)\n\\label{change_psi}\n\\end{equation}\nso that simple shifts in coordinate and time labels are possible, \nand squeezed states often make use of similar connections.\n\n\\section{Correlated Gaussian wave packets}\n\\label{sec:correlated}\n\n\\subsection{Squeezed states}\n\\label{subsec:squeezed}\n\nOne of the simplest modifications of a standard minimum-uncertainty\nGaussian initial state which induces non-trivial initial correlations\nbetween position and momentum is given by\n\\begin{equation}\n\\phi_{(S)}(p,0) = \n\\sqrt{\\frac{\\alpha}{\\sqrt{\\pi}}}\n\\; e^{-\\alpha^2(p-p_0)^2(1+iC)\/2}\n\\; e^{-ipx_0\/\\hbar}\n\\label{initial_squeezed}\n\\,. \n\\end{equation}\n(A similar version of a squeezed state, but with $\\psi(x,0)$ modified,\nhas been discussed in Ref.~\\cite{ford}.) 
Because the additional $C$ term\nis a simple phase, the modulus of $\\phi(p,t)$ is unchanged so that\nthe expectation values of momentum, $\\langle \\hat{p}\\rangle_0$ and\n$\\langle \\hat{p}^2 \\rangle_0$, and the momentum-spread, are still given \nby Eqn.~(\\ref{momentum_results}) as for the standard Gaussian example.\nHowever, there is now an obvious coupling between the usual `smooth'\n$\\exp(-\\alpha^2(p-p_0)^2\/2)$ term which describes the peak momentum values\nand the `oscillatory' $\\exp(-ipx_0\/\\hbar)$ term which dictates the\nspatial location and spread, governed by the presence of the new $C$ term, \nwhich leads to a non-zero initial $x-p$ correlation.\n\n\nThe time-dependent position-space wavefunction is obtained via Fourier\ntransform with literally no more work than for the standard Gaussian and\none finds\n\\begin{equation}\n\\psi_{(S)}(x,t) = \\frac{1}{\\sqrt{\\sqrt{\\pi} \\beta (1+i[C+t\/t_0])}}\n\\,\ne^{ip_0(x-x_0)\/\\hbar}\n\\, e^{-ip_0^2t\/2m\\hbar}\n\\,\ne^{-(x-x_0-p_{0}t\/m)^2\/2\\beta^2(1+i[C+t\/t_0])}\n\\label{squeezed_position}\n\\end{equation}\ngiving \n\\begin{equation}\n|\\psi_{(S)}(x,t)|^2\n= \\frac{1}{\\sqrt{\\pi}b(t)}\n\\, e^{-[x-\\overline{x}(t)]^2\/b^2(t)}\n\\qquad\n\\mbox{where}\n\\qquad\nb(t) \\equiv \\beta \\sqrt{1+(C+t\/t_{0})^2}\n\\,. \n\\end{equation}\nThus, the initial state in Eqn.~(\\ref{initial_squeezed}) gives the same \ntime-dependent Gaussian behavior as the standard case, still peaked at \n$x=\\overline{x}(t)$, but with a spatial width shifted in time from \n$t \\rightarrow t + Ct_0$. This can be understood from the results in \nEqns.~(\\ref{change_phi}) and (\\ref{change_psi}) where the new\n$C$-dependent terms in Eqn.~(\\ref{initial_squeezed}) give rise to \neffective $a$ and $\\tau$ shifts given by \n\\begin{equation}\na = -C\\alpha^2 \\hbar p_0\n\\qquad\n\\quad\n\\mbox{and}\n\\qquad\n\\quad\n\\tau = - C\\alpha^2m\\hbar = - Ct_0\n\\, . 
\n\\end{equation}\nThe $\\tau$ shift then affects the time-dependent width, $b(t)$, but the \ncombined $a,\\tau$ shifts undo each other in the argument of the Gaussian\nexponential because they are highly correlated due to the form in\nEqn.~(\\ref{initial_squeezed}).\n\nThe time-dependent position expectation values are then\n\\begin{equation}\n\\langle \\hat{x}\\rangle_t = \\overline{x}(t)\n\\qquad\n\\quad\n\\mbox{and}\n\\qquad\n\\quad\n\\langle \\hat{x}^2 \\rangle_t = [\\overline{x}(t)]^2 + \\frac{[b(t)]^2}{2},\n\\end{equation}\nso that\n\\begin{eqnarray}\n(\\Delta x_t)^2 = \\frac{[b(t)]^2}{2} \n& = & \n\\frac{\\beta^2}{2} \\left(1 + (C+t\/t_0)^2\\right) \\nonumber \\\\\n& =& \n\\frac{\\beta^2}{2}(1+C^2) + C\\beta^2\\frac{t}{t_0} + \\frac{\\beta^2 t^2}{2t_0^2}\n\\nonumber \\\\\n& = & (\\Delta x_0)^2 + At + \\frac{(\\Delta p_0)^2 t^2}{m^2}\n\\label{squeezed_spread}\n\\end{eqnarray}\nwhich has a non-vanishing linear term if $C\\neq 0$. The initial width\nof this packet is larger than for the minimal uncertainty solution\nby a factor of $\\sqrt{1+C^2}$, but has the same quadratic time-dependence\nsince $\\Delta p_0$ is the same.\n\nOne can confirm by direct calculation that $\\phi_{(S)}(p,0)$ and \n$\\psi_{(S)}(x,0)$ both do \n have an initial non-vanishing\ncorrelation leading to this form and this is also clear from plots of the\ninitial wave packet as shown in Fig.~2. We plot there an example with the\nsame model parameters as in Fig.~1, but with $C=-2$ which leads to an\nanti-correlation (since $C<0$) with higher momentum components (more wiggles) \nin the `back edge' of the initial packet. This gives an intuitive\nexpectation for a wave packet which\ninitially shrinks in time, consistent with the result in \nEqn.~(\\ref{squeezed_spread}), and with the plot shown in Fig.~2 for\n$t=2t_0$. 
The parameters were chosen such that for this time the initial\ncorrelation has become `undone', leading to something like the standard\nGaussian initial state, from which point it spreads in a manner which is\nmore familiar. The initial correlation is achieved, however, \nat the cost of increasing the initial uncertainty principle product\nby a factor of $\\sqrt{1+C^2}$. \nThe complete time-dependent correlation coefficient\nfrom Eqn.~(\\ref{correlation_coefficient}) is \n\\begin{equation}\n\\rho(x,p;t) = \\frac{(C+t\/t_0)}{\\sqrt{1+(C+t\/t_0)^2}}\n\\end{equation}\ncorresponding in this case to a roughly $90\\%$ initial correlation. \nThe required initial correlation is also clearly evident\nfrom the Wigner quasi-probability distribution for this case, where we\nfind \n\\begin{equation}\nP_{W}(x,p;t) = \\frac{1}{\\hbar \\pi}\n\\, e^{-(p-p_0)^2 \\alpha^2}\n\\, e^{-(x-x_0-pt\/m - C(p-p_0)t_0\/m)^2\/\\beta^2}\n\\,.\n\\end{equation}\nIn this case, the initial correlation for $C<0$ shown in Fig.~2 is\nconsistent with the desired anti-correlation, since the slope of the\nelliptical contours is negative.\n\n\nIn a very similar manner, the expressions for the kinetic energy\ndensity distribution from Eqn.~(\\ref{define_r_function}) are simply \nshifted to\n\\begin{equation}\nR^{(\\pm)}_{(S)}(t) \\equiv \\frac{T^{(\\pm)}_{(S)}(t)}{T^{(+)}_{(S)}(t) + T^{(-)}_{(S)}(t)}\n= \\frac{1}{2} \\pm \n \\left(\\frac{2}{\\sqrt{\\pi}}\\right)\n\\left( \\frac{(p_0\\alpha)}{(2(p_0\\alpha)^2+1)}\\right) \n\\frac{(C+t\/t_0)}{\\sqrt{1+(C+t\/t_0)^2}}\n\\end{equation}\nso that for $C<0$, there is an initial asymmetry in the front\/back kinetic\nenergy distribution, with more `wiggles' in the trailing half of the\npacket. 
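The roughly $90\%$ initial correlation quoted above can be checked directly from $\phi_{(S)}(p,0)$ in Eqn.~(\ref{initial_squeezed}): combining $\rho(x,p;0) = C/\sqrt{1+C^2}$ with $\Delta x_0$ and $\Delta p_0$ gives $cov(x,p;0) = \hbar C/2$. The Python sketch below (parameter values are illustrative assumptions, with $C = -2$ as in Fig.~2) confirms this in the momentum-space formalism.

```python
import math, cmath

hbar, alpha, p0, x0, C = 1.0, 1.0, 0.5, 1.0, -2.0   # assumed values; C = -2 as in Fig. 2

def phi(p):
    # squeezed initial state, Eqn (initial_squeezed)
    return math.sqrt(alpha / math.sqrt(math.pi)) * cmath.exp(
        -alpha**2 * (p - p0)**2 * (1 + 1j * C) / 2 - 1j * p * x0 / hbar)

def expect(op, pmin=-10.0, pmax=11.0, N=8000):
    # Re Int phi*(p) op(p) dp by the midpoint rule
    dp = (pmax - pmin) / N
    return sum((phi(pmin + (k + 0.5) * dp).conjugate()
                * op(pmin + (k + 0.5) * dp)).real for k in range(N)) * dp

h = 1e-5
xhat = lambda p: 1j * hbar * (phi(p + h) - phi(p - h)) / (2 * h)    # x = i hbar d/dp
xp = lambda p: 1j * hbar * ((p + h) * phi(p + h) - (p - h) * phi(p - h)) / (2 * h)  # x p acting on phi

mean_x = expect(xhat)
mean_p = expect(lambda p: p * phi(p))
cov0 = expect(xp) - mean_x * mean_p     # symmetrized covariance at t = 0
print(round(cov0, 4))  # hbar*C/2 = -1.0
```

The negative value directly exhibits the anti-correlation ($C<0$) responsible for the initial shrinking of the packet.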
For the $C=-2$ case in Fig.~2, the initial ($t=0$)\nfront\/back asymmetry is $R^{(+)}\/R^{(-)} = 44\\%\/56\\%$.\n\nWe note that while a number of quantities (time-dependent spread in position,\ncorrelation coefficient, kinetic energy distribution) are simply obtained \nby the $t \\rightarrow t + Ct_0$ shift, other important metrics, such as the\nautocorrelation function \\cite{bassett_2}, $A(t)$, retain basically \nthe same form.\n\nOne can imagine generating initial Gaussian states with non-zero \ncorrelations of the type in Eqn.~(\\ref{initial_squeezed}), motivated \nby results obtained by the use of modern atom trapping techniques, \nsuch as in Ref.~\\cite{meekhof}. In a number of such experiments, \nharmonically bound ions are cooled to essentially their ground state,\nafter which changes in the external binding potential can generate\nvarious {\\it nonclassical motional states} such as coherent states\n(by sudden shifts in the central location of the binding potential\n\\cite{heinzen}) and squeezed states (by changing the strength of the\nharmonic binding force, i.e., the spring constant). The subsequent \ntime-development of Gaussian packets in such states can then lead to\nthe desired correlated states, at which point the external binding\npotential can be suddenly removed, with free-particle propagation\nthereafter. 
\n\nAs an example, the initial state in a harmonic oscillator potential\nof the form $V(x) = m\\omega^2x^2\/2$ given by\n\\begin{equation}\n\\psi(x,0) = \\frac{1}{\\sqrt{\\beta \\sqrt{\\pi}}}\n\\, e^{ip_0x\/\\hbar}\\,e^{-x^2\/2\\beta^2}\n\\end{equation}\nevolves in time as \\cite{bassett}\n\\begin{equation}\n\\psi(x,t) = \n\\exp\n\\left[\n\\frac{im\\omega x^2 \\cos(\\omega t)}{2\\hbar \\sin(\\omega t)}\n\\right]\n\\frac{1}{\\sqrt{A(t) \\sqrt{\\pi}}}\n\\exp\n\\left[ \n-\\frac{i m \\omega \\beta}{2\\hbar \\sin(\\omega t)}\n\\frac{(x-x_s(t))^2}{A(t)}\n\\right]\n\\label{position_space_sho_solution}\n\\end{equation}\nwhere\n\\begin{equation}\nA(t) \\equiv \\beta \\cos(\\omega t) + i \\left(\\frac{\\hbar}{m \\omega \\beta}\n\\right) \\sin(\\omega t)\n\\qquad\n\\mbox{and}\n\\qquad\nx_s(t) \\equiv \\frac{p_0 \\sin(\\omega t)}{m \\omega}\n\\, .\n\\end{equation}\nThe time-dependent expectation values are then\n\\begin{equation}\n\\langle x\\rangle_t = x_s(t)\n\\, ,\n\\qquad\n\\Delta x_t = \\frac{|A(t)|}{\\sqrt{2}}\n\\, ,\n\\qquad\n\\mbox{and}\n\\qquad\n\\langle p \\rangle_t = p_0\\cos(\\omega t)\n\\end{equation}\nand it is then easy to show that the time-dependent correlation of this\nstate is given by\n\\begin{equation}\ncov(x,p;t) = \\frac{m\\omega \\sin(\\omega t)\\cos(\\omega t)}{2}\n\\left[\n\\left(\n\\frac{\\hbar}{m\\omega \\beta}\\right)^2 - \\beta^2\n\\right]\n\\,.\n\\end{equation}\nFor the special case of coherent states, where $\\beta = \\sqrt{\\hbar\/m\\omega}$,\nthe correlations vanish identically for all times (as does the asymmetry in \nkinetic energy \\cite{bassett}), while for more general solutions, removing \nthe potential at times other than integral multiples of $\\tau\/2$ (where \n$\\tau$ is the classical period) would yield an initially correlated Gaussian.\n\n\n\n\n\n\n\n\\subsection{Linear combinations of Gaussian solutions}\n\\label{subsec:linear_combination}\n\nOne of the simplest examples of correlated position-momentum behavior of\na system, 
leading to an initial shrinking of a spatial width, can be\nclassically modelled by two 1D non-interacting particles, with the faster \nparticle placed initially behind the slower one. A quantum mechanical \nsolution of the free-particle \nSchr\\\"{o}dinger equation involving simple Gaussian forms which mimics this quasi-classical behavior, and for which all expectation values and correlations\ncan be evaluated in simple closed form, consists of a linear combination\nof two minimal-uncertainty Gaussian solutions of the form\n\\begin{equation}\n\\psi_{2}(x,t) = N\\left[\n\\cos(\\theta) \\psi_{(G)}^{(A)}(x,t)\n+ \n\\sin(\\theta) \\psi_{(G)}^{(B)}(x,t)\n\\right]\n\\label{two_gaussians}\n\\end{equation}\nwhere $A,B$ correspond to two different sets of initial position and \nmomentum parameters, namely $(x_A,p_A)$ and $(x_B,p_B)$, $\\theta$ describes\nthe relative weight of each component, and $N$ is an overall normalization;\nwe assume for simplicity that each component Gaussian has the same initial\nwidth, $\\beta$.\nSince each $\\psi_{(G)}(x,t)$ is separately normalized, the value of $N$\ncan be easily evaluated using standard Gaussian integrals with the result\nthat\n\\begin{equation}\nN^{-2} = \n1 \n+\n\\sin(2\\theta)\n\\;\ne^{-(x_A-x_B)^2\/4\\beta^2\n- (p_A-p_B)^2\\beta^2\/4\\hbar^2}\n\\cos[(x_B-x_A)(p_B+p_A)\/2\\hbar]\n\\end{equation}\nso that if the two initial Gaussians are far apart in phase space, namely if \n\\begin{equation}\n\\frac{(x_A-x_B)^2}{4\\beta^2}\n+ \n\\frac{(p_A-p_B)^2\\beta^2}{4\\hbar^2}\n\\gg 1\n\\, , \n\\end{equation}\nthe normalization factor $N$ can be effectively set to unity, and \nall cross-terms in the evaluation of expectation values can also \nbe neglected. 
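The closed form for $N$ can be verified by direct numerical integration of $|\psi_{2}(x,0)|^2$. In the Python sketch below, the specific phase-space points, mixing angle, and units ($\hbar = \beta = 1$) are illustrative assumptions, deliberately chosen at moderate separation so that the cross term is not yet negligible.

```python
import math, cmath

hbar, beta, theta = 1.0, 1.0, math.pi / 3     # assumed values
xA, pA, xB, pB = 0.0, 2.0, 1.5, -1.0          # assumed phase-space points

def psi(x, xc, pc):
    # t = 0 minimum-uncertainty Gaussian centered at (xc, pc), width beta
    return (cmath.exp(1j * pc * (x - xc) / hbar) * math.exp(-(x - xc)**2 / (2 * beta**2))
            / math.sqrt(beta * math.sqrt(math.pi)))

# closed-form normalization from the text
env = math.exp(-(xA - xB)**2 / (4 * beta**2) - (pA - pB)**2 * beta**2 / (4 * hbar**2))
N = (1 + math.sin(2 * theta) * env * math.cos((xB - xA) * (pB + pA) / (2 * hbar)))**-0.5

# numerical check: Int |psi_2(x,0)|^2 dx should come out to 1
dx, total, x = 0.001, 0.0, -12.0
while x < 13.5:
    total += abs(N * (math.cos(theta) * psi(x, xA, pA) + math.sin(theta) * psi(x, xB, pB)))**2 * dx
    x += dx
print(round(total, 5))  # 1.0 if the closed-form N is correct
```

For these moderate separations the cross term shifts $N^{-2}$ by a few percent; in the well-separated limit described above it is exponentially small and $N \to 1$, as stated.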
\n\nIn this limit, the various initial expectation values required for the\nevaluation of the time-dependent spread in Eqn.~(\\ref{general_case}) are \ngiven by \n\\begin{eqnarray}\n\\langle \\hat{x} \\rangle_0 & = & \\cos^2(\\theta) x_A + \\sin^2(\\theta) x_B\n\\\\\n\\langle \\hat{x}^2 \\rangle_0 & = &\n\\cos^2(\\theta) \\left(x_A^2 + \\frac{\\beta^2}{2}\\right)\n+ \n\\sin^2(\\theta) \\left(x_B^2 + \\frac{\\beta^2}{2}\\right)\n\\end{eqnarray}\nso that\n\\begin{equation}\n(\\Delta x_0)^2 = \n[\\sin(2\\theta)]^2 \\left(\\frac{x_A-x_B}{2}\\right)^2\n+ \\frac{\\beta^2}{2}\n\\end{equation}\nwith a similar result for the momentum-spread, namely\n\\begin{equation}\n(\\Delta p_0)^2 = \n[\\sin(2\\theta)]^2 \\left(\\frac{p_A-p_B}{2}\\right)^2\n+ \\frac{\\hbar^2}{2\\beta^2}\\,.\n\\end{equation}\nThe necessary initial correlation is given by\n\\begin{equation}\n\\langle \\hat{x}\\hat{p} + \\hat{p}\\hat{x} \\rangle_0 - \n2\\langle \\hat{x}\\rangle_0 \\langle \\hat{p} \\rangle_0\n= \n2 [\\sin(2\\theta)]^2 \\left[\\frac{(x_A-x_B)(p_A-p_B)}{4}\\right]\n\\end{equation}\nso that the time-dependent spread in position is given by\n\\begin{equation}\n(\\Delta x_t)^2 =\n[\\sin(2\\theta)]^2 \n\\left[\n\\left(\\frac{x_A-x_B}{2}\\right)\n+ \\left(\\frac{p_A-p_B}{2}\\right)\\frac{t}{m}\n\\right]^2\n+\n\\frac{\\beta^2}{2}\n+\n\\frac{\\hbar^2 t^2}{2m^2\\beta^2}\n\\,. \n\\end{equation}\nIn the limit we're considering, namely when $|x_A-x_B| \\gg \\beta$\nand\/or $|p_A-p_B| \\gg \\hbar\/\\beta$, the time-dependent width can be dominated\nby the quasi-classical value dictated by two well-separated `lumps' of\nprobability, and if $(x_A-x_B)$ and $(p_A-p_B)$ have opposite signs, then this\nlarge position spread can initially decrease in time because of the\ninitial correlations. 
This example, while not as `quantum mechanical' as\nthat in Sec.~\\ref{subsec:squeezed}, does clearly and simply exhibit the \nposition-momentum correlations necessary for the presence of the $A$ term \nin Eqn.~(\\ref{squeezed_spread}), with the `fast one in the \nback, and the slow one in the front'.\n\nOne can imagine producing linear combinations of isolated, but highly \ncorrelated, Gaussian wave packets at very different points in phase space, \nby invoking the dynamical time-evolution of bound state wave packets which \nleads to the phenomenon of wave packet revivals, especially\nfractional revivals \\cite{revivals}. For the idealized case of the\ninfinite square well potential \\cite{aronstein}, \nat $t=T_{rev}\/4$ (where $T_{rev}$ is\nthe full revival time), an initially localized wave packet is `split'\ninto two smaller copies of the original packet, located at opposite\nends of phase space \\cite{belloni}, of the form in Eqn.~(\\ref{two_gaussians}).\nIf, in this model system, the infinite wall boundaries are suddenly\nremoved at such a point in time, \nwe then have the case considered in this section.\n\n\n\n\\section{Conclusion and discussion}\n\\label{sec:conclusion}\n\n\nThe study of the time-dependence of the spatial width of wave packets\nin model systems can produce many interesting results, a number of which\nare quasi-classical in origin, while some are explicitly quantum mechanical.\nTime-dependent wave packet solutions of the Schr\\\"{o}dinger equation for\nthe harmonic oscillator are easily shown to exhibit intricate correlated\nexpansion\/contraction of widths in position- and momentum-space \n\\cite{saxon} and modern experiments \\cite{meekhof}, \\cite{heinzen} \ncan probe a wide variety of such states. \nEven the behavior of otherwise free Gaussian wavepackets \ninteracting with (or `bouncing from') an infinite wall \n\\cite{doncheski_1}, \\cite{dodonov}, \\cite{doncheski_2}\ncan lead to wave packets which temporarily shrink in size. 
\n\n\n\nWhile the fact that free-particle wavepackets can also exhibit \ninitial shrinking of their spatial width is well-known in the\nphysics pedagogical literature, it is perhaps not appreciated enough \nin the context of introductory quantum mechanics courses because of the \nseeming lack of simple, mathematically tractable, and intuitively\nvisualizable examples, and we have provided two such simple cases here. \nWe have also emphasized the usefulness of several tools for the detailed \nanalysis of the structure of quantum states as they evolve, namely the direct\nvisualization of the real\/imaginary part of the spatial wavefunction, the\ntime-dependent spatial distribution of the kinetic energy (how the\n`wiggliness' changes in time), and the Wigner quasi-probability\ndistribution all of which provide insight into\nthe correlated $x-p$ structure of quantum states.\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Appendices}\n\\input{supp\/combinedES}\n\\input{supp\/supp_meta}\n\\input{supp\/supp_modeldetail}\n\\input{supp\/supp_plot}\n\\input{supp\/supp_stimuli}\n\\input{supp\/code}\n\n\n\\newpage\n\n\n\\section*{Broader Impact}\nOutputs of neural language models trained on natural language expose their users to stereotypes and biases learned by such models. CEAT is a tool for analysts and researchers to measure social biases in these models, which may help develop bias mitigation methods for neural language models. On the other hand, some users might utilize CEAT to detect certain biases or harmful stereotypes and accordingly target social groups by automatically generating large-scale biased text. Some users might generate and share biased content to shift public opinion as part of information influence operations. By focusing on the attitude bias measured by valence, a malicious actor might figure out ways to automatically generate hate speech while targeting certain social groups. 
\n\nIn addition to the improper use of CEAT, another ethical concern involves IBD and EIBD: \nIBD and EIBD can detect stereotypical associations for an intersectional group, but the detected words may be used in the generation of offensive content that perpetuates or amplifies existing biases.\nUsing the biased outputs of these neural language models leads to a feedback cycle when machine-generated biased text ends up in training data, contributing to perpetuating or amplifying bias.\n\n\\fi\n\\section{Introduction}\n\\label{sec:intro}\n\nState-of-the-art off-the-shelf neural language models, such as the multi-million dollar GPT-3, associate men with competency and with occupations requiring higher levels of education in downstream natural language processing (NLP) tasks such as sequence prediction \\cite{brown2020language}. When GPT-3's user interface for academic access is prompted for language generation with the input ``What is the gender of a doctor,'' the first answer is ``A: Doctor is a masculine noun;'' whereas when prompted with ``What is the gender of a nurse,'' the first answer is ``It's female.'' Propagation of social group bias in NLP applications, such as automated resume screening, that shape the workforce by making consequential decisions about job candidates would not only perpetuate existing biases but potentially exacerbate harmful bias in society, affecting future generations \\cite{de2019bias, raghavanchallenges}. To enhance transparency in NLP, we use the representations of words learned from word co-occurrence statistics to discover social biases.\nOur methods uncover unique intersectional biases associated with individuals who are members of multiple minority groups. After identifying these emergent biases, we use numeric representations of words that vary according to neighboring words to analyze how prominent bias is in different contexts. 
Recent work has shown that human-like biases are embedded in the statistical regularities of language that are learned by word representations, namely word embeddings \\cite{caliskan2017semantics, blodgett2020language}. We build a method on this work to automatically identify intersectional biases, such as the ones associated with African American and Mexican American women from static word embeddings (SWE). Then, we measure how human-like biases manifest themselves in contextualized word embeddings (CWE), which are dynamic word representations generated by neural language models that adapt to their context. \n\n\n\n\\iffalse\nWhat is the gender of a doctor?\nA: Doctor is a masculine noun.\nWhat is the gender of a doctor?\nIs it a man or a woman?\nMost would say a man; a few would say a woman.\n\n\nWhat is the gender of a nurse? \\\\\nIt's female.\nWhat is the gender of an actor?\nIt's male.\nWhat is the gender of a writer?\nIt's female.\nWhat is the gender of a pilot?\nIt's male.\n\\fi\n\n \n\n\n\n Artificial intelligence systems are known not only to perpetuate social biases, but they may also amplify existing cultural assumptions and inequalities \\cite{campolo2017ai}. While most work on biases in word embeddings focuses on a single social category (e.g., gender, race) \\citep{caliskan2017semantics, bolukbasi2016man, garg2018word,zhao2018learning,gonen2019lipstick}, the lack of work on identifying intersectional biases, the bias associated with populations defined by multiple categories \\citep{cabreradiscovery}, leads to an incomplete measurement of social biases \\citep{hancock2007multiplication,hurtado2008more}. For example, \\citet{caliskan2017semantics}'s Word Embedding Association Test (WEAT) quantifies biases documented by the validated psychological methodology of the Implicit Association Test (IAT) \\citep{greenwald1998measuring, greenwald2003understanding}. 
The IAT provides the sets of words to represent social groups and attributes to be used while measuring bias. Consequently, the analysis of bias via WEAT is limited to the types of IATs and their corresponding words contributed by the IAT literature, which happens to include intersectional representation for only African American women. To overcome these constraints of WEATs, we extend WEAT to automatically identify attributes associated with individuals that are members of more than one social group. While this allows us to discover emergent intersectional biases, it is also a promising step towards automatically identifying all biased associations embedded in the regularities of language. To fill the gap in understanding the complex nature of intersectional bias, we develop a method called Intersectional Bias Detection (IBD) to automatically identify intersectional biases without relying on pre-defined attribute sets from the IAT literature.\n\n\n\n\n\nBiases associated with intersectional group members contain emergent elements that do not overlap with the biases of their constituent minority identities \\citep{ghavami2013intersectional,arrington201513}.\n For example, \"hair weaves\" is stereotypically associated with African American females but not with African Americans or females.\nWe extend IBD and introduce a method called Emergent Intersectional Bias Detection (EIBD) to identify the emergent intersectional biases of an intersectional group in SWE. Then, we construct new tests to quantify these intersectional and emergent biases in CWE.\nTo investigate the influence of different contexts, we use a fill-in-the-blank task called masked language modeling. The goal of the task is to generate the most probable substitution for the [MASK] that is surrounded with neighboring context words in a given sentence. 
BERT, a widely used language model trained on this task, substitutes [MASK] in ``Men\/women \\textit{excel} in [MASK].'' with ``science'' and ``sports'', reflecting stereotype-congruent associations. However, when we feed in similar contexts ``The man\/woman is \\textit{known} for his\/her [MASK],'' BERT fills in ``wit'' for both sentences, which indicates that gender bias may not appear in these contexts. Prior methods use templates analogous to masked language modeling to measure bias in CWE \\citep{may2019measuring,tan2019assessing,kurita2019quantifying}. The templates are designed to substitute words from WEAT's sets of target words and attributes in a simple manner such as ``This is [TARGET]'' or ``[TARGET] is a [ATTRIBUTE].''\nIn this work, we propose the Contextualized Embedding Association Test (CEAT), a test eschewing templates and instead generating the distribution of effect magnitudes of biases in different contexts from a control corpus. To comprehensively measure the social and intersectional biases in this distribution, a random-effects model designed to combine effect sizes of similar bias interventions summarizes the overall effect size of bias in the neural language model \\citep{dersimonian2007random}. As a result, instead of focusing on biases in template-based contexts, CEAT measures the distribution of biased associations in a language model.\n\n\n\\noindent \\textbf{Contributions.} In summary, this paper presents three novel contributions along with three complementary methods (CEAT, IBD, and EIBD) to automatically identify intersectional biases as well as emergent intersectional biases in SWE, and then uses these findings to measure all available types of social biases in CWE. We find that ELMo is the most biased, followed by BERT, then GPT, with GPT-2 being the least biased. The overall level of bias correlates with how contextualized the CWE generated by the models are. 
Our results indicate that the strongest biased associations are embedded in the representations of intersectional group members such as African American women. Data, source code, and detailed results are available.\n\n\\noindent \\textbf{Intersectional Bias Detection (IBD).} We develop a novel method for SWE to detect words that represent biases associated with intersectional group members. To our knowledge, IBD is the first algorithmic method to automatically identify individual words that are strongly associated with intersectionality. IBD reaches an accuracy of 81.6\\% and 82.7\\%, respectively, when evaluated on intersectional biases associated with African American females and Mexican American females that are provided in \\citet{ghavami2013intersectional}'s validation dataset. In these machine learning settings, the random chances of correct identification are 14.3\\% and 13.3\\%. Currently, the validation datasets represent gender as a binary label. Consequently, our method uses binary categorization when evaluating for gender related biases. However, we stress that our method generalizes to multiple categories from binary. In future work, we aim to design non-categorical methods that don't represent individuals as members of discrete categories compared to potentially using continuous representations. Accordingly, we also plan to compile validation datasets that won't constrain our evaluation to categorical assumptions about humans.\n \n \\noindent \\textbf{Emergent Intersectional Bias Detection (EIBD).} We contribute a novel method to identify emergent intersectional biases that do not overlap with biases of constituent social groups in SWE. To our knowledge, EIBD is the first algorithmic method to detect the emergent intersectional biases in word embeddings automatically. 
EIBD reaches an accuracy of 84.7\\% and 65.3\\%, respectively, when validating on the emergent intersectional biases of African American females and Mexican American females that are provided in \\citet{ghavami2013intersectional}'s validation dataset. In these machine learning settings, the random chances of correct identification are 9.2\\% and 6.1\\%. \n\n\\noindent \\textbf{Contextualized Embedding Association Test (CEAT).} WEAT measures human-like biases in SWE. We extend WEAT to the dynamic setting of neural language models to quantify the distribution of effect magnitudes of social and intersectional biases in \\textit{contextualized} word embeddings and summarize the combined magnitude of bias by pooling effect sizes with the validated random-effects methodology \\cite{hedges1983random, borenstein2007meta}. We show that the magnitude of bias varies greatly according to the context in which the stimuli of WEAT appear. Overall, the pooled mean effect size is statistically significant in all CEAT tests, including intersectional bias measurements, and all models contain biased representations.\n\n\n \\iffalse\nThe remaining parts of the paper are organized as follows.\nSection~\\ref{sec:related} reviews the related work. \nSection~\\ref{sec:data} provides the details of the datasets used in the approach and evaluation.\nSection~\\ref{sec:approach} introduces the three complementary methods.\nSection~\\ref{sec:experiments} gives the details of experiments and results. Section~\\ref{sec:discussion} discusses our findings and results. 
Section~\\ref{sec:conclusion} concludes the paper.\n\\fi\n\\section{Problem Statement}\n\\label{sec:problem}\nIn this work, we consider an analyst interested in human-like biases in word embeddings.\nDepending on the context, the analyst's goal might be measuring the biases in CWEs with pre-defined target and attribute words or detecting the intersection-related biases in static word embeddings with pre-defined target groups and a set of possible attributes to be detected. \n\n\nIn the first case, for each word of the stimuli, the analyst needs to obtain several sentences containing it and generate corresponding CWEs. \n The analyst proceeds by randomly picking a CWE vector for each word in the stimuli and calculating the effect magnitude of bias with the WEAT each time, subsequently deriving a sampling distribution of the effect magnitudes. This distribution can be used to construct summary statistics and to test hypotheses in order to measure the biases in CWEs.\n \n In the second case, the analyst needs to obtain the static word embeddings of the stimuli. The detection model can be viewed as a two-class classifier with a pre-defined bias-score threshold. The model calculates a bias score for each attribute and classifies the attributes based on that score.\n\n\n\\fi\n\\section{Related Work}\n\\label{sec:related}\nSWE are trained on the word co-occurrence statistics of corpora to generate numeric representations of words so that machines can process language \\citep{mikolov2013distributed,pennington2014glove}. Previous work on bias in SWE has shown that human-like biases documented by the IAT are embedded in the statistical regularities of language \\citep{caliskan2017semantics}. The IAT \\citep{greenwald1998measuring} is a widely used measure of implicit bias in human subjects that quantifies the differential reaction time to pairing two concepts. 
Analogous to the IAT, \\citet{caliskan2017semantics} developed the WEAT to measure the biases in SWE by quantifying the relative associations of two sets of target words (e.g., African American and European American) that represent social groups with two sets of polar attributes (e.g., pleasant and unpleasant). WEAT computes an effect size (Cohen's $d$) that is a standardized bias score and its $p$-value based on a one-sided permutation test. WEAT measures biases pre-defined by the IAT such as racism, sexism, ableism, and attitude towards the elderly, as well as widely shared non-discriminatory non-social group associations. \\citet{swinger2019biases} presented an adaptation of the WEAT to identify biases associated with clusters of names.\n\nRegarding the biases of intersectional groups categorized by multiple social categories, there is prior work in the social sciences focusing on the experiences of African American females \\citep{crenshaw1989demarginalizing,hare1988meaning, kahn1989psychology,thomas1995psychology}. Buolamwini et al. demonstrated intersectional accuracy disparities in commercial gender classification in computer vision \\citep{buolamwini2018gender}. \\citet{may2019measuring} and \\citet{tan2019assessing} used the attributes presented in \\citet{caliskan2017semantics} to measure emergent intersectional biases of African American females in CWE. We develop the first algorithmic method to automatically identify intersectional bias and emergent bias attributes in SWE, which can be measured in both SWE and CWE. Furthermore, we construct new embedding association tests for the intersectional groups. As a result, our work is the first to discuss biases regarding Mexican American females in word embeddings. \\citet{ghavami2013intersectional} used a free-response procedure in human subjects to collect words that represent intersectional biases. They show that emergent intersectional biases exist in several gender-by-race groups in the U.S. 
We use the validation dataset constructed by \\citet{ghavami2013intersectional} to evaluate our methods.\n\n\nRecently, neural language models, which use neural networks to assign probability values to sequences of words, have achieved state-of-the-art results in NLP tasks with their dynamic word representations, CWE \\citep{edunov2018understanding,bohnet2018morphosyntactic,yang2019xlnet}. Neural language models typically consist of an encoder that generates CWE for each word based on its accompanying context in the input sequence. Specifically, the collection of values on a particular layer's hidden units forms the CWE \\citep{tenney2019you}, which has the same vector shape as an SWE. However, unlike SWE that represent each word, including polysemous words, with a fixed vector, CWE of the same word vary according to its context window that is encoded into its representation by the neural language model. \\citet{ethayarajh2019understanding} demonstrate how these limitations of SWE impact measuring gender biases. With the wide adoption of neural language models \\citep{edunov2018understanding,bohnet2018morphosyntactic,yang2019xlnet}, human-like biases were observed in CWE \\citep{kurita2019quantifying,zhao2019gender,may2019measuring,tan2019assessing}.\n To measure human-like biases in CWE, \\citet{may2019measuring} applied the WEAT to contextualized representations in template sentences. \\citet{tan2019assessing} adopted the method of \\citet{may2019measuring} by applying \\citet{caliskan2017semantics}'s WEAT to the CWE of the stimuli tokens in templates such as ``This is a [TARGET]''. 
\\citet{kurita2019quantifying} measured biases in BERT based on the prediction probability of the attribute in a template that contains the target and masks the attribute, e.g., [TARGET] is [MASK].\n \\citet{hutchinson2020social} reveal biases associated with disabilities in CWE and demonstrate undesirable biases towards mentions of disability in applications such as toxicity prediction and sentiment analysis. \n\n\n\n\n\n\\citet{nadeem2020stereoset} present a large-scale natural language dataset in English to measure stereotypical biases in the domains of gender, profession, race, and religion. Their strategy cannot be directly compared to ours since it is not aligned with our intersectional bias detection method, which is complementary to CEAT.\n The majority of prior work measures bias in a limited selection of contexts to report the unweighted mean value of bias magnitudes, which does not reflect the scope of contextualization of biases embedded in a neural language model.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Data}\n\\label{sec:data}\nIdentifying and measuring intersectional and social biases in word embeddings as well as neural language models requires four types of data sources that are detailed in this section. (1) SWE carry the signals for individual words that have statistically significant biased associations with social groups and intersectionality. Application of our methods IBD and EIBD to SWE automatically retrieves biased associations. (2) CWE extracted from sentence encodings of neural language models provide precise word representations that depend on the context of word occurrence. We apply CEAT to summarize magnitude of bias in neural language models. (3) A corpus provides the samples of sentences used in CEAT when measuring the overall bias and analyzing the variance of contexts in CWE of neural language models. 
(4) Stimuli designed by experts in social psychology represent validated concepts in natural language including social group and intersectional targets in addition to their corresponding attributes.\n\n\\subsection{Static Word Embeddings (SWE)}\nWe use GloVe \\cite{pennington2014glove} SWE trained on the word co-occurrence statistics of the Common Crawl corpus to automatically detect words that are highly associated with intersectional group members. The Common Crawl corpus consists of 840 billion tokens and more than 2 million unique vocabulary words collected from a crawl of the world wide web. Consequently, GloVe embeddings capture the language representation of the entire Internet population that contributed to its training corpus. GloVe embeddings learn fine-grained semantic and syntactic regularities \\cite{pennington2014glove}. \\citet{caliskan2017semantics} have shown that social biases are embedded in the linguistic regularities learned by GloVe.\n\n\n\n\n\\subsection{Contextualized Word Embeddings (CWE)}\n\nWe generate the CWE by widely used neural language model implementations of ELMo from \\url{https:\/\/allennlp.org\/elmo}, BERT, GPT and GPT-2 from \\url{https:\/\/huggingface.co\/transformers\/v2.5.0\/model_doc\/} \\cite{peters2018deep,devlin2018BERT,radford2018improving,radford2019language}. Specifically, CWE is formed by the collection of values on a particular layer's hidden units in the neural language model. BERT, GPT and GPT-2 use subword tokenization.\nSince GPT and GPT-2 are unidirectional language models, CWE of the last subtokens contain the information of the entire word \\cite{radford2019language}. We use the CWE of the last subtoken in the word as its representation in GPT and GPT-2. For consistency, we use the CWE of the last subtoken in the word as its representation in BERT.\nBERT and GPT-2 provide several versions. We use BERT-small-cased and GPT-2-117m trained on cased English text. 
The sizes of the training corpora detailed below have been verified from \\citet{assenmacher2020comparability}. We obtained academic access to GPT-3's API which does not provide training data or the CWE. Accordingly, we are not able to systematically study GPT-3.\n\n\n\\textbf{ELMo} is a 2-layer bidirectional long short term memory (Bi-LSTM) \\cite{hochreiter1997long} language model trained on the Billion Word Benchmark dataset \\cite{chelba2013one} that takes up $\\sim$9GB memory. ELMo has 93.6 million parameters. It is different from the three other models since CWE in ELMo integrate the hidden states in all layers instead of using the hidden states of the top layer. \nWe follow standard usage and compute the summation of hidden units over all aggregated layers of the same token as its CWE \\cite{peters2018deep}. CWE of ELMo have 1,024 dimensions. \n\n\n\\textbf{BERT} \\cite{devlin2018BERT} is a bidirectional transformer encoder \\cite{vaswani2017attention} trained on a masked language model and next sentence prediction. BERT is trained on BookCorpus \\cite{zhu2015aligning} and English Wikipedia dumps that take up $\\sim$16GB memory \\cite{bender2021dangers}. We use BERT-small-case with 12 layers that has 110 million parameters. We extract the values of hidden units on the top layer corresponding to the token as its CWE of 768 dimensions.\n\n\\textbf{GPT} \\cite{radford2018improving} is a 12-layer transformer decoder trained on a unidirectional language model on BookCorpus that takes up $\\sim$13GB memory \\cite{zhu2015aligning}. We use the values of hidden units on the top layer corresponding to the token as its CWE. This implementation of GPT has 110 million parameters. The CWE have 768 dimensions.\n\n\n\\textbf{GPT-2} \\cite{radford2019language} is a transformer decoder trained on a unidirectional language model and is a scaled-up version of GPT. 
GPT-2 is trained on WebText that takes up $\\sim$40GB memory \\cite{radford2019language}.\nWe use GPT-2-small, which has 12 layers and 117 million parameters. \nWe use the values of hidden units on the top layer corresponding to the token as its CWE. CWE of GPT-2 have 768 dimensions.\n\nWe provide the source code, detailed information, and documentation in our open source repository at \\url{https:\/\/github.com\/weiguowilliam\/CEAT}.\n\n\\subsection{Corpus}\nWe need a comprehensive representation of all the contexts a word can appear in within naturally occurring sentences in order to investigate how the bias associated with individual words varies across contexts. Identifying the potential contexts in which a word can be observed is not a trivial task. Consequently, we simulate the distribution of contexts a word appears in by randomly sampling, from a large corpus, sentences in which the word occurs.\n\n\n\\citet{voigt2018rtgender} have shown that social biases are projected into Reddit comments.\nConsequently, we use a Reddit corpus to generate the distribution of contexts that words of interest appear in. The corpus consists of 500 million comments made in the period between 1\/1\/2014 and 12\/31\/2014.\nWe take all the stimuli used in \\citet{caliskan2017semantics}'s WEAT, which measures the effect size of bias for social groups and related attributes. For each WEAT type, we retrieve the sentences from the Reddit corpus that contain one of these stimuli. In this way, we collect a great variety of CWE from the Reddit corpus to measure bias comprehensively in a neural language model while simulating the natural distribution of contexts in language. 
We discuss the justification of sampling 10,000 sentences from the Reddit corpus in the upcoming sections.\n\n\\subsection{Stimuli}\n\\label{subsec:stimuli}\n\\citet{caliskan2017semantics}'s WEAT is inspired by the IAT literature \\cite{greenwald1995implicit, greenwald1998measuring, greenwald2003understanding}, which measures implicit associations of concepts by representing them with stimuli. Experts in social psychology and cognitive science select stimuli, which are words typically representative of various concepts. These linguistic or sometimes picture-based stimuli are proxies for overall representations of concepts in cognition. Similarly, in the word embedding space, WEAT uses these unambiguous stimuli as semantic representations to study biased associations related to these concepts. Since the stimuli are chosen by experts to most accurately represent concepts, they are not polysemous or ambiguous words. Each WEAT, designed to measure a certain type of association or social group bias, has at least 32 stimuli. There are at least 8 stimuli for each one of the four concepts. Two of these concepts represent target groups and two of them represent polar attributes. WEAT measures the magnitude of bias by quantifying the standardized differential association of targets with attributes. The larger the set of appropriate stimuli to represent a concept, the more statistically significant and accurate the representation becomes \\cite{caliskan2017semantics}. \n\n\n\\noindent \\textbf{Validation data for intersectional bias.} To investigate intersectional bias with respect to race and gender, we represent members of social groups with target words provided by WEAT and Parada et al. \\citep{caliskan2017semantics,parada2016ethnolinguistic}. WEAT and Parada et al. represent racial categories with frequent given names that signal group membership. WEAT contains a balanced combination of common female and male names of African Americans and European Americans, whereas Parada et al. 
present the Mexican American names for women and men combined. \nThe intersectional bias detection methods identify attributes that are associated with these target group representations. Human subjects provide the validation set of intersectional attributes with ground truth information in prior work \\citep{ghavami2013intersectional}. The evaluation of intersectional bias detection methods uses this validation set. One limitation of these validation sets is the way they represent gender as a binary category. We will address this constraint in future work by constructing our own validation sets that won't have to represent people by discrete categorical labels of race and gender.\n\n\\section{Approach}\n\\label{sec:approach}\nOur approach includes four components. (1) \\cite{caliskan2017semantics}'s WEAT for SWE is the foundation of our approach to summarizing overall bias in CWE generated by neural language models. (2) Random-effects models from the meta-analysis literature summarize the combined effect size for a neural language model's CWE by combining 10,000 WEAT samples, weighting each result with the within-WEAT and between-WEAT variances~\\cite{hedges1983random}. (3) Our novel method IBD automatically detects words associated with intersectional biases. (4) Our novel method EIBD automatically detects words that are uniquely associated with members of multiple minority or disadvantaged groups, but do not overlap with the biases of their constituent minority identities. 
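The random-effects pooling used to summarize per-context WEAT effect sizes can be sketched in a few lines. The following is a minimal, stdlib-only illustration of the DerSimonian--Laird estimator (our own implementation; the input numbers are hypothetical per-sample effect sizes and variances, not values from the paper):

```python
def random_effects_pool(effects, variances):
    """DerSimonian-Laird pooling of per-context effect sizes.

    effects[i]: WEAT effect size from the i-th random draw of CWE;
    variances[i]: its estimated within-sample variance.
    Returns the pooled effect size and the between-sample variance tau^2.
    """
    k = len(effects)
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))   # heterogeneity
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                    # between-sample variance
    w_star = [1.0 / (v + tau2) for v in variances]        # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    return pooled, tau2

# Hypothetical effect sizes from three draws of contextualized embeddings:
pooled, tau2 = random_effects_pool([0.9, 0.1, 0.5], [0.01, 0.01, 0.01])
```

When the per-sample effect sizes disagree more than their within-sample variances explain, $\tau^2 > 0$ and the random-effects weights down-weight each sample relative to a fixed-effect pooling.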
\n\nThe supplementary materials include the details of all the bias types studied in this paper, namely, the WEAT biases introduced by \\citet{caliskan2017semantics} as well as the intersectional biases and their validation set introduced by \\citet{ghavami2013intersectional} and \\citet{parada2016ethnolinguistic}.\n\n\\subsection{Word Embedding Association Test (WEAT)}\nWEAT, designed by \\citet{caliskan2017semantics}, measures the effect size of bias in SWE by quantifying the relative associations of two sets of target words (e.g., career, professional; and family, home) with two sets of polar attributes (e.g., woman, female; and man, male). Two of these WEATs measure baseline associations that are widely accepted, such as the attitude towards flowers vs. insects or the attitude towards musical instruments vs. weapons. Human subjects and word embeddings tend to associate flowers and musical instruments with pleasantness, which corresponds to positive valence. However, human subjects associate insects and weapons with unpleasantness, which corresponds to negative valence. \\citet{greenwald1998measuring} refers to these as universally accepted stereotypes since they are widely shared across human subjects and are not potentially harmful to society. However, the rest of the tests measure the magnitude of social-group associations, such as gender and race stereotypes and attitudes towards the elderly or people with disabilities. Biased social-group associations in word embeddings can potentially be prejudiced and harmful to society, especially if downstream applications of NLP that use static or dynamic word embeddings to make consequential decisions about individuals, such as resume screening for job candidate selection, perpetuate existing biases and eventually exacerbate historical injustices \\cite{de2019bias, raghavanchallenges}. 
The formal definition of \\citet{caliskan2017semantics}'s WEAT, the test statistic, and the statistical significance of biased associations are detailed in the appendices.\n\n\n\n\\iffalse\nWe present a formal definition of \\citet{caliskan2017semantics}'s WEAT. Let $X$ and $Y$ be two sets of target words of equal size, and $A$, $B$ be two sets of attribute words. Let $cos(\\vec{a},\\vec{b})$ stand for the cosine similarity between the embeddings of words $a$ and $b$. Here, the vector $\\vec{a}$ is the embedding for word $a$. The test statistic is \n\\vspace{-1mm}\n\\[ s(X,Y,A,B) = \\sum_{x\\in X}{s(x,A,B)} - \\sum_{y\\in Y}{s(y,A,B)} \\]\n\\vspace{-1mm}\nwhere \n\\vspace{-1mm}\n\\[ s(w,A,B) = mean_{a \\in A}cos(\\vec{w}, \\vec{a})-mean_{b \\in B}cos(\\vec{w}, \\vec{b}) \\]\n\nA permutation test calculates the statistical significance of association $s(X,Y,A,B)$. The one-sided $p-value$ is \n\\[ P = Pr_{i} [s(X_{i},Y_{i},A,B)>s(X,Y,A,B))] \\]\nwhere $\\{(X_i,Y_i)\\}_{i}$ represents all the partitions of $X\\cup Y$ in two sets of equal size. Random permutations of these stimuli sets represent the null hypothesis as if the biased associations did not exist so that we can perform a statistical significance test by measuring the unlikelihood of the null hypothesis, given the effect size of WEAT.\n\nThe effect size of bias is calculated as \n\\[ ES = \\frac{mean_{x \\in X}s(x,A,B)-mean_{y \\in Y}s(y,A,B)}{std\\_dev_{w \\in X\\bigcup Y}s(w,A,B)} \\]\n\n\\fi\n\n\\subsection{Intersectional Bias Detection (IBD) }\nIBD identifies words associated with intersectional group members, defined by two social categories simultaneously. Our method automatically detects the attributes that have high associations with the intersectional group from a set of SWE. 
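Both IBD below and CEAT build on the WEAT score. As a reference point, here is a toy sketch of the WEAT effect size over hand-made 2-D vectors (variable names and data are ours, not the paper's implementation, which operates on real embedding vectors):

```python
import math
import statistics

def cosine(u, v):
    """Cosine similarity of two vectors given as tuples of floats."""
    norm = lambda x: math.sqrt(sum(a * a for a in x))
    return sum(a * b for a, b in zip(u, v)) / (norm(u) * norm(v))

def s_wAB(w, A, B):
    # differential association of one target vector w with attribute sets A, B
    return (statistics.mean(cosine(w, a) for a in A)
            - statistics.mean(cosine(w, b) for b in B))

def weat_effect_size(X, Y, A, B):
    # Cohen's-d-style standardized bias score over targets X, Y and attributes A, B
    return ((statistics.mean(s_wAB(x, A, B) for x in X)
             - statistics.mean(s_wAB(y, A, B) for y in Y))
            / statistics.stdev(s_wAB(w, A, B) for w in X + Y))

# Toy 2-D "embeddings": X aligns with attribute set A, Y aligns with B.
A, B = [(1.0, 0.0)], [(0.0, 1.0)]
X, Y = [(1.0, 0.1), (1.0, -0.1)], [(0.1, 1.0), (-0.1, 1.0)]
assert weat_effect_size(X, Y, A, B) > 1.0   # strong differential association
```

With this standardization the effect size is bounded by 2 in magnitude; the toy data above yields a large positive score because each target set sits close to one attribute pole.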
Analogous to the Word Embedding Factual Association Test (WEFAT) \citep{caliskan2017semantics}, we measure the standardized differential association of a single stimulus $w \in W$ with two social groups $A$ and $B$ using the following statistic.\n\vspace{-2mm}\n\[ s(w, A, B) = \frac{\textrm{mean}_{a \in A} \textrm{cos}(\vec{w}, \vec{a}) - \textrm{mean}_{b \in B} \textrm{cos}(\vec{w}, \vec{b})}{\textrm{std-dev}_{x \in A \cup B}\textrm{cos}(\vec{w}, \vec{x})}\]\n\vspace{-2mm}\n\nWe refer to the above statistic as the \textbf{association score}, which is used by WEFAT to verify that gender statistics are embedded in linguistic regularities. Targets $A$ and $B$ are words that represent males (e.g., he, him) and females (e.g., she, her), and $W$ is a set of occupations. For example, \textit{nurse} has an association score $s(nurse, A, B)$ that measures the effect size of gender associations. WEFAT has been shown to have high predictive validity ($\rho=0.90$) in quantifying facts about the world \citep{caliskan2017semantics}. \n\nWe extend WEFAT's {\em gender} association measurement to quantify the relative association to other social categories (e.g., race) by following an approach similar to lexicon induction, which quantifies certain associations without annotating large-scale ground-truth training data \cite{hatzivassiloglou1997predicting, riloff2003learning, turney2003measuring}. Let $P_i = (A_i,B_i)$ (e.g., African American and European American) be a pair of social groups, and $W$ be a set of attribute words.\nWe calculate the association score $s(w,A_i,B_i)$ for each $w \in W$. If $s(w,A_i,B_i)$ is greater than the positive effect size threshold $t$, $w$ is detected to be associated with group $A_i$.\nLet $W_i = \{w|s(w,A_i,B_i)>t, w \in W\}$ be the associated word list for each pair $P_i$. 
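A minimal sketch of this association score, assuming word embeddings are available as NumPy vectors (the `cosine` helper and the use of the sample standard deviation are our assumptions, not details from the paper):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association_score(w_vec, A_vecs, B_vecs):
    """WEFAT-style standardized differential association s(w, A, B):
    how much more similar w is to group A's stimuli than to group B's,
    standardized by the spread of similarities over A union B."""
    sims_a = [cosine(w_vec, a) for a in A_vecs]
    sims_b = [cosine(w_vec, b) for b in B_vecs]
    spread = np.std(sims_a + sims_b, ddof=1)  # sample std-dev (our assumption)
    return (np.mean(sims_a) - np.mean(sims_b)) / spread
```

A stimulus whose embedding lies much closer to group $A$'s stimuli than to group $B$'s receives a large positive score, and the sign flips when the groups are swapped.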
\n\nWe detect the biased attributes associated with an intersectional group $C_{mn}$ defined by two social categories $C_{1n}, C_{m1}$ with $M$ and $N$ subcategories ($C_{11}, \dots, C_{mn}$) (e.g., African American females by race ($C_{1n}$) and gender ($C_{m1}$)). In our experiments, we assume there are three racial categories ($M=3$) and two gender categories ($N=2$) because of the limited structure of representation for individuals in the validation dataset as well as the stimuli. We plan to extend these methods to non-binary individuals and non-categorical representations. However, precisely validating such an approach would require us to construct the corresponding validation sets, which currently do not exist. \textbf{Generalizing the method to represent humans with continuous values as opposed to categorical group labels is left to future work.} There are in total $M \times N$ combinations of intersectional groups $C_{mn}$. We use all groups $C_{mn}$ to build WEFAT pairs\n$P_{ij} = (C_{11}, C_{ij}), i = 1,...,M, j = 1,...,N$. Then, we detect lists of words associated with each pair $W_{ij}, i = 1,...,M, j = 1,...,N$ based on a threshold $t$ determined by an ROC curve. We detect the attributes highly associated with the intersectional group, for example $C_{11}$, from all $(M \times N)$ WEFAT pairs.\nWe define the words associated with intersectional biases of group $C_{11}$ as $W_{IB}$; these words are identified by\n\vspace{-3mm}\n\n\[W_{IB} = \bigcup_{\substack{1\leq i\leq M\\1\leq j\leq N}}W_{IB_{ij}},\;\n\] \nwhere \n\vspace{-5mm}\n \[ \hspace{12mm} W_{IB_{ij}} = \{w|s(w,C_{11},C_{ij})>t_{mn}, w \in W_{IB_{mn}}\} \] \n\n\noindent and \n\vspace{-3mm}\n\[ W_{IB_{mn}} = \{(\bigcup_{\substack{1\leq i\leq M\\1\leq j\leq N}}W_{ij})\cup W_{random}\} \] \n\n\noindent $W_{11}$ contains validated words associated with $C_{11}$. 
Each W$_{ij}$ contains validated words associated with one intersectional group \cite{ghavami2013intersectional}. W$_{random}$ contains random words, which are stimuli taken from WEAT that are not associated with any C$_{ij}$ and thus represent true negatives. \n\n\nTo identify the thresholds, we treat IBD as a one-vs-all verification classifier in machine learning that determines whether attributes belong to group $C_{11}$. \nWe select the threshold with the highest value of $true\: positive\: rate - false\: positive\: rate$ ($TPR - FPR$). When multiple thresholds have the same value, we select the one with the highest $TP$ to detect more attributes associated with $C_{11}$. Detection accuracy is calculated as $\frac{TP+TN}{TP+TN+FP+FN}$. Attributes that are associated with $C_{11}$ and detected as $C_{11}$ are $TP$; attributes that are not associated with $C_{11}$ and not detected as $C_{11}$ are $TN$; attributes that are associated with $C_{11}$ but not detected as $C_{11}$ are $FN$; and attributes that are not associated with $C_{11}$ but detected as $C_{11}$ are $FP$.\n\n\n\subsection{Emergent Intersectional Bias Detection (EIBD)}\nEIBD identifies words that are uniquely associated with intersectional group members. These emergent biases are associated only with the intersectional group (e.g., African American females $C_{11}$) and not with its constituent categories, such as African Americans $C_{1n}$ or females $C_{m1}$. EIBD is a modified and extended version of IBD. 
The formal definition is in the appendices.\n\nConceptually, to detect words uniquely associated with African American females in a set of attributes $W$, we assume there are two classes (females, males) of gender and two classes (African Americans, European Americans) of race.\nWe measure the relative association of all words in $W$ first with African American females and African American males, second with African American females and European American females, and third with African American females and European American males. (The fourth comparison, of the group with itself, leads to an effect size of $d=0$, which is always below the detection threshold.) The union of the attributes with an association score greater than the selected threshold represents the intersectional biases associated with African American females. \nThen, we calculate the association scores of these IBD attributes first with females and males, and second with African Americans and European Americans. We remove from these IBD attributes those with scores greater than the selected threshold, namely the attributes that are highly associated with a single social category. The union of the remaining attributes constitutes the emergent intersectional biases.\n\n\subsection{Contextualized Embedding Association Test (CEAT)}\nCEAT quantifies social biases in CWE by extending the WEAT methodology that measures human-like biases in SWE \citep{caliskan2017semantics}. \nWEAT's bias metric is effect size (Cohen's $d$). Since embeddings of the same word vary based on context in CWE, applying WEAT to a single, potentially unrepresentative set of CWE will not measure bias comprehensively. To deal with a range of dynamic embeddings representing individual words, CEAT measures the distribution of effect sizes that are embedded in a neural language model. 
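Since CEAT repeatedly evaluates the WEAT effect size on sampled combinations of CWE, the underlying statistic is worth sketching. This is a hypothetical NumPy illustration; the `cosine` helper and the choice of the sample standard deviation are our assumptions:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def s_wAB(w, A, B):
    """Differential association of one target embedding w with attribute sets A, B."""
    return (np.mean([cosine(w, a) for a in A])
            - np.mean([cosine(w, b) for b in B]))

def effect_size(X, Y, A, B):
    """WEAT effect size ES(X, Y, A, B) (Cohen's d): standardized
    differential association of target sets X, Y with attribute sets A, B."""
    s_x = [s_wAB(x, A, B) for x in X]
    s_y = [s_wAB(y, A, B) for y in Y]
    return (np.mean(s_x) - np.mean(s_y)) / np.std(s_x + s_y, ddof=1)
```

Feeding this function different sampled sets of CWE for the same stimuli yields the distribution of effect sizes that CEAT summarizes.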
\n\nIn WEAT's formal definition \citep{caliskan2017semantics}, $X$ and $Y$ are two sets of target words of equal size; $A$ and $B$ are two sets of evaluative polar attribute words of equal size. Each word in these sets is referred to as a stimulus. Let $cos(\vec{a},\vec{b})$ stand for the cosine similarity between vectors $\vec{a}$ and $\vec{b}$. \nWEAT measures the magnitude of bias by computing the effect size ($ES$), which is the standardized differential association of the targets and attributes. The $p$-value ($P_w$) of WEAT measures the probability of observing the effect size under the null hypothesis, in which biased associations do not exist. According to Cohen's effect size metric, $|d| > 0.5$ and $|d| > 0.8$ are considered medium and large effect sizes, respectively \citep{rice2005comparing}.\n\nIn a neural language model, each stimulus $s$ from WEAT contained in $n_s$ input sentences has at most $n_s$ different CWE $\vec{s_1},..., \vec{s_{n_s}}$, depending on the context in which it appears.\nIf we calculate the effect size $ES(X,Y,A,B)$ with all different $\vec{s}$ for a stimulus $s \in X$ and keep the CWE for the other stimuli unchanged, there will be at most $n_s$ different values of the effect size. For example, if we assume each stimulus $s$ occurs in 2 contexts and each set in $X, Y, A, B$ has 5 stimuli, the total number of combinations for all the CWE of stimuli will be $2^{5\times4} = 1,048,576$. The numerous possible values of $ES(X,Y,A,B)$ construct a \textit{distribution} of effect sizes; therefore, we extend WEAT to CEAT.\n\nFor each CEAT, all the sentences in which a CEAT stimulus occurs are retrieved from the Reddit corpus. Then, we generate the corresponding CWE from these sentences with randomly varying contexts. 
In this way, we generate $n_s$ CWE from $n_s$ extracted sentences for each stimulus $s$, where $n_s$ can vary according to the contextual variance of each stimulus.\nWe sample random combinations of CWE for each stimulus $N$ times. In the $i^{th}$ sample out of $N$, for each stimulus that appears in at least $N$ sentences, \nwe randomly sample one of its CWE vectors without replacement. If a stimulus occurs in fewer than $N$ sentences, especially when $N$ is very large, we randomly sample from its CWE vectors with replacement so that they can be reused while preserving their distribution. We provide the analysis and extended results in the appendices for both $N=1,000$ and $N=10,000$, which result in similar bias magnitudes. Based on the sampled CWE, we calculate each sample's effect size $ES_i(X,Y,A,B)$, sample variance $V_i(X,Y,A,B)$, and $p$-value $P_{w_i}(X,Y,A,B)$ in WEAT. Then, we generate $N$ of these samples to approximate the distribution of effect sizes via CEAT. \n\nThe distribution of bias effects in CEAT represents random effects computed by WEAT, where we do not expect to observe the same effect size due to variance in context \cite{hedges1983random}. As a result, in order to provide comprehensive summary statistics, we apply a random-effects model from the validated meta-analysis literature to compute the weighted mean of the effect sizes and their statistical significance \citep{rosenthal2002meta, borenstein2007meta}. The summary of the effect magnitude of a particular bias in a neural language model, namely the combined effect size (CES), is the weighted mean of a distribution of random effects,\n\vspace{-1mm}\n\[CES(X,Y,A,B) = \frac{\sum_{i=1}^{N}v_i ES_i}{\sum_{i=1}^{N}v_i}\]\n\vspace{-2mm}\n\n\noindent where $v_i$ is the inverse of the sum of the in-sample variance $V_i$ and the between-sample variance in the distribution of random effects, $\sigma_{between}^2$. 
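A sketch of this pooling step follows. Because the paper defers the estimator details to its appendices, estimating $\sigma_{between}^2$ with the DerSimonian-Laird method is our assumption:

```python
import numpy as np

def combined_effect_size(es, var):
    """Random-effects weighted mean (CES) of N effect sizes.
    es: per-sample effect sizes ES_i; var: in-sample variances V_i.
    The between-sample variance sigma^2_between is estimated with the
    DerSimonian-Laird method (our assumption, not stated in this section)."""
    es, var = np.asarray(es, float), np.asarray(var, float)
    w = 1.0 / var                           # fixed-effects weights
    fixed_mean = np.sum(w * es) / np.sum(w)
    q = np.sum(w * (es - fixed_mean) ** 2)  # heterogeneity statistic Q
    df = len(es) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)           # estimate of sigma^2_between
    v = 1.0 / (var + tau2)                  # random-effects weights v_i
    return np.sum(v * es) / np.sum(v)
```

When all in-sample variances are equal, the random-effects weights are equal and CES reduces to the simple mean of the sampled effect sizes.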
Methodological details are in the appendices.\n\n\iffalse\nBased on the central limit theorem, the limiting form of the distribution of $\frac{CES}{SE(CES)}$ is the standard normal distribution \citep{montgomery2010applied}.\nThen the statistical significance of CES, two-tailed $p$-value of the hypothesis that there is no difference between all the contextualized variations of the two sets of target words in terms of their relative similarity to two sets of attribute words is given by the following formula, where $\Phi$ is the standard normal cumulative distribution function and $SE$ stands for the standard error. \n\[ P_c(X,Y,A,B) = 2 \times [1 - \Phi ( | \frac{CES}{SE(CES)} | ) ] \]\n\fi\n\n\subsection{Random-Effects Model}\n\label{subsec:random}\n\nMeta-analysis is the statistical procedure for combining data from multiple studies \cite{hedges1998fixed}. Meta-analysis describes the result of each separate study by a numerical index (e.g., effect size) and then summarizes the results into combined statistics. In bias measurement, the index we deal with is the effect size. Depending on whether the effect size is assumed to be fixed, there are two kinds of models: the \textit{fixed-effects} model and the \textit{random-effects} model. \nThe fixed-effects model expects different intervention studies to yield results with the same fixed effect size. In contrast, the random-effects model treats the observed effect sizes as samples from a random distribution of all possible effect sizes \cite{dersimonian1986meta,hedges2014statistical}. Under the random-effects model, the expected results of different intervention studies do not have to match one another. \nIn our case, since the effect sizes calculated with the CWE in different contexts are expected to vary, we cannot assume a fixed-effects model. Instead, we use a random-effects model, which is appropriate for the type of data we are studying. 
\n\nWe apply a random-effects model from the validated meta-analysis literature using the methods of \\citet{hedges1998fixed}. Specifically, we describe the procedures for estimating the comprehensive summary statistic, \\textbf{combined effect size (CES)}, which is the weighted mean of a distribution of random-effect sizes. Each effect size is weighted by the variance in calculating that particular effect size in addition to the overall variance among all the random-effect sizes. \n\nWe combine effect size estimates from $N$ independent WEATs. The details of CES are in the appendices.\n\n\n\n\\subsection{Intersectional and Emergent Intersectional Bias Detection in Static Word Embeddings}\n\n\n\n\n\n \\begin{figure*}[ht!]\n \\centering\n {%\n\\begin{tabular}{cccc}\n \\includegraphics[width=.2\\textwidth]{plot\/roc_supp\/af_inter.pdf} &\n \\includegraphics[width=.2\\textwidth]{plot\/roc_supp\/af_unique.pdf} &\n \\includegraphics[width=.2\\textwidth]{plot\/roc_supp\/lf_inter.pdf} &\n\\includegraphics[width=.2\\textwidth]{plot\/roc_supp\/lf_unique.pdf}\\\\\n \\end{tabular}}\n \\vspace{-2mm} \\caption{\\textbf{ROC curves of IBD and EIBD for African American females (AF) and Mexican American females (MF).} The value that maximizes the $true\\: positive\\: rate\\: -\\: false\\: positive\\: rate$ is selected as the optimal threshold marked with a dot.\n `emerg inter bias' stands for emergent intersectional bias. \n\\vspace{-4mm} }\n \\label{fig:roc}\n\\end{figure*}\n\n\\addtolength{\\textfloatsep}{-0.05in}\n\n\n\n\n\n\n \n\n \n\n \n\n\n\n\n\n\n\n\n\n\n\n\\section{Results and Evaluation}\n\\label{sec:experiments}\n\n\nWe measure ten types of social biases via WEAT (C1-C10) and construct our own intersectional bias tests in ELMo, BERT, GPT, and GPT-2. 
Accordingly, we present four novel intersectional bias tests via IBD and EIBD for studying African American, European American, and Mexican American men and women.\n\nWe use the stimuli introduced in Section~\ref{subsec:stimuli} to represent the target groups. For the intersectional and emergent bias tests, we use the attributes associated with the intersectional minority or disadvantaged group members vs. the majority European American males as the two polar attribute sets. We sample $N=10,000$ combinations of CWE for each CEAT since, according to various evaluation trials, the resulting CES and $p$-value remain consistent under this parameter.\n\n\subsection{Evaluation of IBD and EIBD}\n\label{sec:evaluation}\n\n We use IBD and EIBD to automatically detect and retrieve the intersectional and emergent biases associated with intersectional group members (e.g., African American females, Mexican American females) in GloVe SWE. \nTo evaluate our methods IBD and EIBD, we use validated stimuli provided in prior work that represent each social group with frequent given names, as explained in Section~\ref{sec:data}. \nThe IBD and EIBD experiments use the same test set consisting of 98 attributes associated with 2 groups defined by gender (females, males), 3 groups defined by race (African American, European American, Mexican American), and 6 intersectional groups defined by race and gender, in addition to random words taken from WEAT that are not associated with any group \cite{ghavami2013intersectional}. These random words represent the true negatives for evaluating the identification task.\n\nWe draw the ROC curves of the four bias detection tasks in Figure~\ref{fig:roc}, then select the threshold with the highest value of $TPR - FPR$ for each intersectional group. 
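The threshold selection and accuracy computation described above can be sketched as follows; this is a hypothetical illustration in which `scores` are association scores and `labels` mark ground-truth membership in the validation set:

```python
def select_threshold(scores, labels):
    """Pick the detection threshold maximizing TPR - FPR; ties are broken
    by the higher TP count, following the paper's selection rule."""
    pos = sum(labels)
    neg = len(labels) - pos
    best_t, best_j, best_tp = None, float("-inf"), -1
    for t in sorted(set(scores)):  # candidate thresholds: observed scores
        tp = sum(1 for s, y in zip(scores, labels) if s > t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s > t and y == 0)
        j = tp / pos - fp / neg
        if j > best_j or (j == best_j and tp > best_tp):
            best_t, best_j, best_tp = t, j, tp
    return best_t

def detection_accuracy(scores, labels, t):
    """(TP + TN) / (TP + TN + FP + FN) for detections with score > t."""
    correct = sum(1 for s, y in zip(scores, labels) if (s > t) == bool(y))
    return correct / len(labels)
```

In practice the candidate thresholds correspond to the points of the ROC curves in Figure~\ref{fig:roc}.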
\nIBD achieves an accuracy of 81.6\% and 82.7\%, respectively, when detecting the intersectional biases of African American females and Mexican American females, where the random correct identification rates are 14.3\% and 13.3\%. EIBD reaches an accuracy of 84.7\% and 65.3\%, respectively, when detecting the emergent intersectional biases unique to African American females and Mexican American females, where the probability of random correct attribute detection is 9.2\% and 6.1\%. Intersectional biases have the highest magnitudes compared to other biases across all language models, potentially disadvantaging members that belong to multiple minority groups in downstream applications.\n\nThe current validation set with ground-truth information about each word constrains our evaluation to a closed-world machine learning classification task, where we know the category each stimulus belongs to. On the other hand, evaluating the entire semantic space resembles an open-world machine learning problem, where millions of stimuli in the entire word embedding vocabulary belong to unknown categories and thus require human-subject annotation studies. 
In future work, a human subject study can further evaluate the threshold selection criteria, which would require validating a large set of biases retrieved from the entire vocabulary.\n \n \n \n \n \\begin{table*}[t]\n\n \\begin{minipage}[c]{0.68\\textwidth}\n\\centering\n\n\\vspace{-3mm}\n\\label{table:socialbias-measure}\n \\resizebox{0.99\\textwidth}{!} {%\n\\begin{tabular}{|p{3mm} l | r | cc | cc | cc |cc |}\n\\hline\n\\multicolumn{3}{| c |}{ \\multirow{2}{*}{\\textbf{Test}}} &\n \\multicolumn{2}{c|}{\\textbf{ELMo}} &\n \\multicolumn{2}{c|}{\\textbf{BERT}} &\n \\multicolumn{2}{c|}{\\textbf{GPT}} &\n \\multicolumn{2}{c |}{\\textbf{GPT-2}} \\\\ \\cline{4-11} \n \\multicolumn{3}{|c|}{} & \\textbf{$d$} & \\textbf{$p$} & \\textbf{$d$} & \\textbf{$p$} & \\textbf{$d$} & \\textbf{$p$} & \\textbf{$d$} & \\textbf{$p$} \\\\ \\hline\n \n \n\\multirow{2}{*}{\\shortstack{C1:}} & Flowers\/Insects & random & \\cellcolor{darkgray}1.40 & $<10^{-30}$ & \\cellcolor{darkgray}0.97 & $<10^{-30}$ & \\cellcolor{darkgray}1.04 & $<10^{-30}$ & 0.14 & $<10^{-30}$ \\\\\n\n & Pleasant\/Unpleasant$^{\\ast}$ & fixed & \\cellcolor{darkgray}{1.35} & $<10^{-30}$ & \\cellcolor{mediumgray}{0.64 } & $<10^{-30}$ & \\cellcolor{darkgray}{1.01 } & $<10^{-30}$ & \\cellcolor{lightgray}{0.21 } & $<10^{-30}$ \\\\ \\hline\n\n\n\n\\multirow{2}{*}{{\\shortstack{C2:}}} & Instruments\/Weapons & random & \\cellcolor{darkgray}1.56 & $<10^{-30}$ & \\cellcolor{darkgray}0.94 & $<10^{-30}$ & \\cellcolor{darkgray}1.12 & $<10^{-30}$ & \\cellcolor{lightgray}-0.27 & $<10^{-30}$ \\\\\n & Pleasant\/Unpleasant$^{\\ast}$ & fixed & \\cellcolor{darkgray}{1.59} & $<10^{-30}$ & \\cellcolor{mediumgray}{0.54} & $<10^{-30}$ & \\cellcolor{darkgray}{1.09} & $<10^{-30}$ & \\cellcolor{lightgray}{-0.21 } & $<10^{-30}$ \\\\ \\hline\n \n \\multirow{2}{*}{{\\shortstack{C3:}}} & EA\/AA names & random & \\cellcolor{lightgray}0.49 & $<10^{-30}$ & \\cellcolor{lightgray}0.44 & $<10^{-30}$ & -0.11 & $<10^{-30}$ & -0.19 & $<10^{-30}$ \\\\\n & 
Pleasant\/Unpleasant$^{\\ast}$ & fixed & \\cellcolor{lightgray}{0.47 } & $<10^{-30}$ & \\cellcolor{lightgray}{0.31} & $<10^{-30}$ & -0.10 & $<10^{-30}$ & 0.09 & $<10^{-30}$ \\\\ \\hline\n \n \\multirow{2}{*}{{\\shortstack{C4:}}} & EA\/AA names & random & 0.15 & $<10^{-30}$ & \\cellcolor{lightgray}0.47 & $<10^{-30}$ & 0.01 & $<10^{-2}$ & \\cellcolor{lightgray}-0.23 & $<10^{-30}$ \\\\\n & Pleasant\/Unpleasant$^{\\ast}$ & fixed & \\cellcolor{lightgray}{0.23 } & $<10^{-30}$ & \\cellcolor{lightgray}{0.49 } & $<10^{-30}$ & 0.00 & $0.20$ & -0.13 & $<10^{-30}$ \\\\ \\hline\n \n \\multirow{2}{*}{{\\shortstack{C5:}}} & EA\/AA names & random & 0.11 & $<10^{-30}$ & 0.02 & $<10^{-7}$ & 0.07 & $<10^{-30}$ & \\cellcolor{lightgray}-0.21 & $<10^{-30}$ \\\\\n & Pleasant\/Unpleasant$^{\\ast}$ & fixed & 0.17 & $<10^{-30}$ & 0.07 & $<10^{-30}$ & 0.04 & $<10^{-27}$ & -0.01 & 0.11 \\\\ \\hline \n \n \\multirow{2}{*}{{\\shortstack{C6:}}} & Males\/Female names & random & \\cellcolor{darkgray}1.27 & $<10^{-30}$ & \\cellcolor{darkgray}0.92 & $<10^{-30}$ & 0.19 & $<10^{-30}$ & \\cellcolor{lightgray}0.36 & $<10^{-30}$ \\\\\n & Career\/Family & fixed & \\cellcolor{darkgray}{1.31 } & $<10^{-30}$ & \\cellcolor{lightgray}{0.41} & $<10^{-30}$ & 0.11 & $<10^{-30}$ & \\cellcolor{lightgray}{0.34} & $<10^{-30}$ \\\\ \\hline \n \n \\multirow{2}{*}{{\\shortstack{C7:}}} & Math\/Arts & random & \\cellcolor{mediumgray}0.64 & $<10^{-30}$ & \\cellcolor{lightgray}0.41 & $<10^{-30}$ & \\cellcolor{lightgray}0.24 & $<10^{-30}$ & -0.01 & $<10^{-2}$ \\\\\n & Male\/Female terms & fixed & \\cellcolor{darkgray}{0.71 } & $<10^{-30}$ & \\cellcolor{lightgray}{0.20 } & $<10^{-30}$ & \\cellcolor{lightgray}{0.23} & $<10^{-30}$ & -0.14 & $<10^{-30}$ \\\\ \\hline \n \n \\multirow{2}{*}{{\\shortstack{C8:}}} & Science\/Arts & random & \\cellcolor{lightgray}0.33 & $<10^{-30}$ & -0.07 & $<10^{-30}$ & \\cellcolor{lightgray}0.26 & $<10^{-30}$ & -0.16 & $<10^{-30}$ \\\\\n & Male\/Female terms & fixed & \\cellcolor{mediumgray}{0.51 } 
& $<10^{-30}$ & 0.17 & $<10^{-30}$ & \\cellcolor{lightgray}{0.35} & $<10^{-30}$ & -0.05 & $<10^{-30}$ \\\\ \\hline \n \n \\multirow{2}{*}{{\\shortstack{C9:}}} & Mental\/Physical disease & random & \\cellcolor{darkgray}1.00 & $<10^{-30}$ & \\cellcolor{mediumgray}0.53 & $<10^{-30}$ & 0.08 & $<10^{-29}$ & 0.10 & $<10^{-30}$ \\\\\n & Temporary\/Permanent & fixed & \\cellcolor{darkgray}{1.01} & $<10^{-30}$ & \\cellcolor{lightgray}{0.40} & $<10^{-30}$ & \\cellcolor{lightgray}{-0.23 } & $<10^{-30}$ & \\cellcolor{lightgray}{-0.21 } & $<10^{-30}$ \\\\ \\hline \n \n \\multirow{2}{*}{{\\shortstack{C10:}}} & Young\/Old people's names & random & 0.11 & $<10^{-30}$ & -0.01 & 0.016 & 0.07 & $<10^{-30}$ & -0.16 & $<10^{-30}$ \\\\\n & Pleasant\/Unpleasant$^{\\ast}$ & fixed & \\cellcolor{lightgray}{0.24} & $<10^{-30}$ & 0.07 & $<10^{-30}$ & 0.04 & $<10^{-17}$ & -0.14 & $<10^{-30}$ \\\\ \\hline \n \n \\multirow{2}{*}{{\\shortstack{I1:}}} & AF\/EM names & random & \\cellcolor{darkgray}1.24 & $<10^{-30}$ & \\cellcolor{mediumgray}0.77 & $<10^{-30}$ & 0.07 & $<10^{-30}$ & 0.02 & $<10^{-2}$ \\\\\n & AF\/EM intersectional & fixed & \\cellcolor{darkgray}{1.25} & $<10^{-30}$ & \\cellcolor{darkgray}{0.98 } & $<10^{-30}$ & \\cellcolor{lightgray}{0.23 } & $<10^{-30}$ & -0.19 & $<10^{-30}$ \\\\ \\hline \n \n \\multirow{2}{*}{{\\shortstack{I2:}}} & AF\/EM names & random & \\cellcolor{darkgray}1.25 & $<10^{-30}$ & \\cellcolor{mediumgray}0.67 & $<10^{-30}$ & -0.09 & $<10^{-30}$ & 0.02 & $<10^{-2}$ \\\\\n & {\\small AF emergent\/EM intersectional} & fixed & \\cellcolor{darkgray}{1.27} & $<10^{-30}$ & \\cellcolor{darkgray}{1.00} & $<10^{-30}$ & \\cellcolor{lightgray}{0.23 } & $<10^{-30}$ & -0.14 & $<10^{-30}$ \\\\ \\hline \n \n \\multirow{2}{*}{{\\shortstack{I3:}}} & MF\/EM names & random & \\cellcolor{darkgray}1.31 & $<10^{-30}$ & \\cellcolor{mediumgray}0.68 & $<10^{-30}$ & -0.06 & $<10^{-30}$ & \\cellcolor{lightgray}0.38 & $<10^{-30}$ \\\\\n & MF\/EM intersectional & fixed & 
\\cellcolor{darkgray}{1.29}& $<10^{-30}$ & \\cellcolor{mediumgray}{0.51} & $<10^{-30}$ & 0.00 & 0.81 & \\cellcolor{lightgray}{0.32 } & $<10^{-30}$ \\\\ \\hline \n \n \\multirow{2}{*}{{\\shortstack{I4:}}} & MF\/EM names & random & \\cellcolor{darkgray} 1.51 & $<10^{-30}$ &\\cellcolor{darkgray} 0.86 & $<10^{-30}$ & 0.16 & $<10^{-30}$ & \\cellcolor{lightgray}-0.32 & $<10^{-30}$ \\\\\n & {\\small MF emergent\/EM intersectional} & fixed & \\cellcolor{darkgray}{1.43} & \n $<10^{-30}$ & \\cellcolor{mediumgray}{0.58} & $<10^{-30}$ & \\cellcolor{lightgray}{0.20} & $<10^{-30}$ & \\cellcolor{lightgray}{-0.25} & $<10^{-30}$ \\\\ \\hline \n\\multicolumn{11}{c}{{\\small $^{\\ast}$Pleasant and unpleasant attributes used to measure valence and attitudes towards targets from \\citet{greenwald1998measuring}.}}\\\\\n\n\\end{tabular}\n}\n \\end{minipage}\\hfill\n \\begin{minipage}[c]{0.32\\textwidth}\n\\vspace{-1mm} \\caption{\n\\textbf{CEAT measures of social and intersectional biases in language models.} We report the overall magnitude of bias in language models with CES ($d$, rounded down) and statistical significance with combined $p$-values ($p$, rounded up). CES pools $N = 10,000$ samples from a random-effects model. The first row for each bias test uses completely random samples, whereas the second row for the bias test uses the same sentences to generate CWE across all neural language models.\n $Ci$ stands for the $i^{th}$ WEAT in \\citet{caliskan2017semantics}'s Table 1. $Ii$ stands for our tests constructed for measuring intersectional biases. $A\\_$ stands for African Americans, $E\\_$ for European Americans, $M\\_$ for Mexican Americans, $\\_F$ for females, and $\\_M$ for males. Light, medium, and dark gray shading of combined $d$ values (CES) indicates small, medium, and large effect size, respectively. 
\n } \\label{table:socialbias-measure}\n\n \\end{minipage}\n\\vspace{-3mm} \n \\end{table*}\n\n \n \\iffalse\n\n\\begin{table*}[t]\n\\centering\n\\caption{\n\\textbf{CEAT for social and intersectional biases.} We report the overall magnitude of bias in a language model with CES ($d$, rounded down) and its statistical significance with combined $p$-values ($p$, rounded up). CES pools $N = 10,000$ samples from a random-effects model. The first row for each bias test uses completely random samples, whereas the second row for the bias test uses the same sentences to generate CWE across all neural language models.\n $Ci$ stands for the $i^{th}$ WEAT test in \\citet{caliskan2017semantics}'s Table 1. $Ii$ stands for the novel tests constructed for intersectional biases. $A\\_$ stands for African Americans. $E\\_$ stands for European Americans. $M\\_$ stands for Mexican Americans. $\\_F$ stands for females. $\\_M$ stands for males. Light, medium, and dark gray shading of combined $d$ values (CES) indicates small, medium, and large effect size respectively. 
\n}\n\n\\vspace{-3mm}\n\\label{table:socialbias-measure}\n \\resizebox{0.63\\textwidth}{!} {%\n\\begin{tabular}{|p{7mm} l | r | cc | cc | cc |cc |}\n\\hline\n\\multicolumn{3}{| c |}{ \\multirow{2}{*}{\\textbf{Test}}} &\n \\multicolumn{2}{c|}{\\textbf{ELMo}} &\n \\multicolumn{2}{c|}{\\textbf{BERT}} &\n \\multicolumn{2}{c|}{\\textbf{GPT}} &\n \\multicolumn{2}{c |}{\\textbf{GPT-2}} \\\\ \\cline{4-11} \n \\multicolumn{3}{|c|}{} & \\textbf{$d$} & \\textbf{$p$} & \\textbf{$d$} & \\textbf{$p$} & \\textbf{$d$} & \\textbf{$p$} & \\textbf{$d$} & \\textbf{$p$} \\\\ \\hline\n \n \n\\multirow{2}{*}{\\shortstack{C1:}} & Flowers\/Insects & random & \\cellcolor{darkgray}1.40 & $<10^{-30}$ & \\cellcolor{darkgray}0.97 & $<10^{-30}$ & \\cellcolor{darkgray}1.04 & $<10^{-30}$ & 0.14 & $<10^{-30}$ \\\\\n\n & Pleasant\/Unpleasant$^{\\ast}$ & fixed & \\cellcolor{darkgray}{1.35} & $<10^{-30}$ & \\cellcolor{mediumgray}{0.64 } & $<10^{-30}$ & \\cellcolor{darkgray}{1.01 } & $<10^{-30}$ & \\cellcolor{lightgray}{0.21 } & $<10^{-30}$ \\\\ \\hline\n\n\n\n\\multirow{2}{*}{{\\shortstack{C2:}}} & Instruments\/Weapons & random & \\cellcolor{darkgray}1.56 & $<10^{-30}$ & \\cellcolor{darkgray}0.94 & $<10^{-30}$ & \\cellcolor{darkgray}1.12 & $<10^{-30}$ & \\cellcolor{lightgray}-0.27 & $<10^{-30}$ \\\\\n & Pleasant\/Unpleasant$^{\\ast}$ & fixed & \\cellcolor{darkgray}{1.59} & $<10^{-30}$ & \\cellcolor{mediumgray}{0.54} & $<10^{-30}$ & \\cellcolor{darkgray}{1.09} & $<10^{-30}$ & \\cellcolor{lightgray}{-0.21 } & $<10^{-30}$ \\\\ \\hline\n \n \\multirow{2}{*}{{\\shortstack{C3:}}} & EA\/AA names & random & \\cellcolor{lightgray}0.49 & $<10^{-30}$ & \\cellcolor{lightgray}0.44 & $<10^{-30}$ & -0.11 & $<10^{-30}$ & -0.19 & $<10^{-30}$ \\\\\n & Pleasant\/Unpleasant$^{\\ast}$ & fixed & \\cellcolor{lightgray}{0.47 } & $<10^{-30}$ & \\cellcolor{lightgray}{0.31} & $<10^{-30}$ & -0.10 & $<10^{-30}$ & 0.09 & $<10^{-30}$ \\\\ \\hline\n \n \\multirow{2}{*}{{\\shortstack{C4:}}} & EA\/AA names & random & 0.15 & 
$<10^{-30}$ & \\cellcolor{lightgray}0.47 & $<10^{-30}$ & 0.01 & $<10^{-2}$ & \\cellcolor{lightgray}-0.23 & $<10^{-30}$ \\\\\n & Pleasant\/Unpleasant$^{\\ast}$ & fixed & \\cellcolor{lightgray}{0.23 } & $<10^{-30}$ & \\cellcolor{lightgray}{0.49 } & $<10^{-30}$ & 0.00 & $0.20$ & -0.13 & $<10^{-30}$ \\\\ \\hline\n \n \\multirow{2}{*}{{\\shortstack{C5:}}} & EA\/AA names & random & 0.11 & $<10^{-30}$ & 0.02 & $<10^{-7}$ & 0.07 & $<10^{-30}$ & \\cellcolor{lightgray}-0.21 & $<10^{-30}$ \\\\\n & Pleasant\/Unpleasant$^{\\ast}$ & fixed & 0.17 & $<10^{-30}$ & 0.07 & $<10^{-30}$ & 0.04 & $<10^{-27}$ & -0.01 & 0.11 \\\\ \\hline \n \n \\multirow{2}{*}{{\\shortstack{C6:}}} & Males\/Female names & random & \\cellcolor{darkgray}1.27 & $<10^{-30}$ & \\cellcolor{darkgray}0.92 & $<10^{-30}$ & 0.19 & $<10^{-30}$ & \\cellcolor{lightgray}0.36 & $<10^{-30}$ \\\\\n & Career\/Family & fixed & \\cellcolor{darkgray}{1.31 } & $<10^{-30}$ & \\cellcolor{lightgray}{0.41} & $<10^{-30}$ & 0.11 & $<10^{-30}$ & \\cellcolor{lightgray}{0.34} & $<10^{-30}$ \\\\ \\hline \n \n \\multirow{2}{*}{{\\shortstack{C7:}}} & Math\/Arts & random & \\cellcolor{mediumgray}0.64 & $<10^{-30}$ & \\cellcolor{lightgray}0.41 & $<10^{-30}$ & \\cellcolor{lightgray}0.24 & $<10^{-30}$ & -0.01 & $<10^{-2}$ \\\\\n & Male\/Female terms & fixed & \\cellcolor{darkgray}{0.71 } & $<10^{-30}$ & \\cellcolor{lightgray}{0.20 } & $<10^{-30}$ & \\cellcolor{lightgray}{0.23} & $<10^{-30}$ & -0.14 & $<10^{-30}$ \\\\ \\hline \n \n \\multirow{2}{*}{{\\shortstack{C8:}}} & Science\/Arts & random & \\cellcolor{lightgray}0.33 & $<10^{-30}$ & -0.07 & $<10^{-30}$ & \\cellcolor{lightgray}0.26 & $<10^{-30}$ & -0.16 & $<10^{-30}$ \\\\\n & Male\/Female terms & fixed & \\cellcolor{mediumgray}{0.51 } & $<10^{-30}$ & 0.17 & $<10^{-30}$ & \\cellcolor{lightgray}{0.35} & $<10^{-30}$ & -0.05 & $<10^{-30}$ \\\\ \\hline \n \n \\multirow{2}{*}{{\\shortstack{C9:}}} & Mental\/Physical disease & random & \\cellcolor{darkgray}1.00 & $<10^{-30}$ & 
\\cellcolor{mediumgray}0.53 & $<10^{-30}$ & 0.08 & $<10^{-29}$ & 0.10 & $<10^{-30}$ \\\\\n & Temporary\/Permanent & fixed & \\cellcolor{darkgray}{1.01} & $<10^{-30}$ & \\cellcolor{lightgray}{0.40} & $<10^{-30}$ & \\cellcolor{lightgray}{-0.23 } & $<10^{-30}$ & \\cellcolor{lightgray}{-0.21 } & $<10^{-30}$ \\\\ \\hline \n \n \\multirow{2}{*}{{\\shortstack{C10:}}} & Young\/Old people's names & random & 0.11 & $<10^{-30}$ & -0.01 & 0.016 & 0.07 & $<10^{-30}$ & -0.16 & $<10^{-30}$ \\\\\n & Pleasant\/Unpleasant$^{\\ast}$ & fixed & \\cellcolor{lightgray}{0.24} & $<10^{-30}$ & 0.07 & $<10^{-30}$ & 0.04 & $<10^{-17}$ & -0.14 & $<10^{-30}$ \\\\ \\hline \n \n \\multirow{2}{*}{{\\shortstack{I1:}}} & AF\/EM names & random & \\cellcolor{darkgray}1.24 & $<10^{-30}$ & \\cellcolor{mediumgray}0.77 & $<10^{-30}$ & 0.07 & $<10^{-30}$ & 0.02 & $<10^{-2}$ \\\\\n & AF\/EM intersectional & fixed & \\cellcolor{darkgray}{1.25} & $<10^{-30}$ & \\cellcolor{darkgray}{0.98 } & $<10^{-30}$ & \\cellcolor{lightgray}{0.23 } & $<10^{-30}$ & -0.19 & $<10^{-30}$ \\\\ \\hline \n \n \\multirow{2}{*}{{\\shortstack{I2:}}} & AF\/EM names & random & \\cellcolor{darkgray}1.25 & $<10^{-30}$ & \\cellcolor{mediumgray}0.67 & $<10^{-30}$ & -0.09 & $<10^{-30}$ & 0.02 & $<10^{-2}$ \\\\\n & AF emergent\/EM intersectional & fixed & \\cellcolor{darkgray}{1.27} & $<10^{-30}$ & \\cellcolor{darkgray}{1.00} & $<10^{-30}$ & \\cellcolor{lightgray}{0.23 } & $<10^{-30}$ & -0.14 & $<10^{-30}$ \\\\ \\hline \n \n \\multirow{2}{*}{{\\shortstack{I3:}}} & MF\/EM names & random & \\cellcolor{darkgray}1.31 & $<10^{-30}$ & \\cellcolor{mediumgray}0.68 & $<10^{-30}$ & -0.06 & $<10^{-30}$ & \\cellcolor{lightgray}0.38 & $<10^{-30}$ \\\\\n & MF\/EM intersectional & fixed & \\cellcolor{darkgray}{1.29}& $<10^{-30}$ & \\cellcolor{mediumgray}{0.51} & $<10^{-30}$ & 0.00 & 0.81 & \\cellcolor{lightgray}{0.32 } & $<10^{-30}$ \\\\ \\hline \n \n \\multirow{2}{*}{{\\shortstack{I4:}}} & MF\/EM names & random & \\cellcolor{darkgray} 1.51 & $<10^{-30}$ 
&\\cellcolor{darkgray} 0.86 & $<10^{-30}$ & 0.16 & $<10^{-30}$ & \\cellcolor{lightgray}-0.32 & $<10^{-30}$ \\\\\n & MF emergent\/EM intersectional & fixed & \\cellcolor{darkgray}{1.43} & \n $<10^{-30}$ & \\cellcolor{mediumgray}{0.58} & $<10^{-30}$ & \\cellcolor{lightgray}{0.20} & $<10^{-30}$ & \\cellcolor{lightgray}{-0.25} & $<10^{-30}$ \\\\ \\hline \n \\multicolumn{9}{c}{\\hspace{0mm} $^{\\ast}$\\footnotesize{(Un)pleasant attributes used to measure valence and attitudes towards targets from \\citet{greenwald1998measuring}.}}\n\n\\end{tabular}\n}\n\\end{table*}\n\n\\fi\n\\subsection{Evaluation of CEAT} Congruent with \\citet{caliskan2017semantics}'s WEAT findings, Table~\\ref{table:socialbias-measure} presents significant effect sizes for all previously documented and validated biases. GPT-2 exhibited less bias than other neural language models. \nOur method CEAT, designed for CWEs, computes the combined bias score of a distribution of effect sizes present in neural language models. We find that the effect magnitudes of biases reported by Tan and Celis \\citep{tan2019assessing} are individual samples in the distributions generated by CEAT. We can view their method as a special case of CEAT that calculates the individual bias scores of a few pre-selected samples. In order to comprehensively measure the overall bias score in a neural language model, we apply a random-effects model from the meta-analysis literature that computes combined effect size and combined statistical significance from a distribution of bias measurements. As a result, when CEAT reports significant results, some of the corresponding bias scores in prior work are not statistically significant. Furthermore, our results indicate statistically significant bias in the opposite direction in some cases. 
These negative results suggest that some WEAT stimuli tend to occur in stereotype-incongruent contexts more frequently.\n\nWe sampled combinations of CWEs $10,000$ times for each CEAT test; nonetheless, we observed varying intensities of the same social bias in different contexts. Using a completely random set versus a fixed set of contexts derived from $10,000$ sentences leads to low variance in the corresponding bias scores. Using a fixed set of contexts for each model makes it possible to evaluate the magnitude of bias across models for the same variables. Experiments conducted with $1,000$, $5,000$, and $10,000$ samples of CWEs lead to similar bias scores with low variance. As a result, the number of samples can be adjusted according to available computational resources. However, future work on evaluating the lower bound of the sampling size with respect to model and corpus characteristics would optimize the sampling process and thereby make the computation of overall bias in a language model more efficient.\n\n\n\\subsection{IBD, EIBD, and CEAT Results} We report the overall magnitude of bias (CES) and $p$-value in Table~\\ref{table:socialbias-measure}. We pick an example from Table~\\ref{table:socialbias-measure} that reflects the great disparity in bias magnitudes between two models: Figure~\\ref{fig:weat} presents the distribution histograms of effect sizes for the emergent intersectional biases associated with Mexican American females (see row I4 in Table~\\ref{table:socialbias-measure}) in ELMo and GPT-2.\nThe distribution plots for the other bias tests are provided in our project repository.\n\n\n\nWe find that CEAT uncovers more evidence of intersectional bias than of gender or racial biases.
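The random-effects aggregation that CEAT applies to the distribution of effect sizes can be sketched as follows. This is a simplified illustration using the DerSimonian--Laird estimator from the meta-analysis literature, operating on hypothetical per-sample effect sizes and variances; it is not our exact released implementation.

```python
import math

def random_effects_ces(effect_sizes, variances):
    """Combine per-sample effect sizes into a combined effect size (CES)
    with a DerSimonian-Laird random-effects model (simplified sketch)."""
    k = len(effect_sizes)
    w = [1.0 / v for v in variances]  # fixed-effect (inverse-variance) weights
    d_fixed = sum(wi * di for wi, di in zip(w, effect_sizes)) / sum(w)
    # Cochran's Q measures heterogeneity across the sampled contexts.
    q = sum(wi * (di - d_fixed) ** 2 for wi, di in zip(w, effect_sizes))
    # Between-sample variance tau^2, truncated at zero.
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    # Random-effects weights add tau^2 to each within-sample variance.
    w_star = [1.0 / (v + tau2) for v in variances]
    ces = sum(wi * di for wi, di in zip(w_star, effect_sizes)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    z = ces / se
    # Two-tailed p-value from the standard normal distribution.
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return ces, p
```

With identical variances and tightly clustered effect sizes, the CES reduces to the weighted mean of the samples with a small combined $p$-value; heterogeneous samples inflate $\tau^2$ and widen the combined confidence accordingly.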
These findings suggest that members of multiple minority or disadvantaged groups are associated with the strongest levels of bias in neural language representations. To quantify the intersectional biases in CWEs, we construct tests I1-I4. Tests with Mexican American females tend to exhibit stronger bias, with a higher CES, than those with African American females. \nSpecifically, 13 of 16 instances in intersection-related tests (I1-I4) have significant stereotype-congruent CES; 9 of 12 instances in gender-related tests (C6-C8) have significant stereotype-congruent CES; 8 of 12 instances in race-related tests (C3-C5) have significant stereotype-congruent CES. In gender bias tests, the gender associations with career and family are stronger than other biased gender associations. In all models, the significantly biased intersectional associations have larger effect sizes than racial biases. \n\n\n\n\nAccording to the CEAT results in Table~\\ref{table:socialbias-measure}, ELMo is the most biased whereas GPT-2 is the least biased with respect to the types of biases CEAT measures. We notice that significant negative CES exist in BERT, GPT, and GPT-2, which implies that stereotype-incongruent biases with small effect sizes exist. \n \n\\section{Discussion}\n \\label{sec:discussion}\n\n\n\n\n\nAccording to our findings, GPT-2 has the highest variance in bias magnitudes, followed by GPT, BERT, and ELMo (see an example in Figure~\\ref{fig:weat}). The overall magnitude of bias decreases in the reverse order for the types of biases we measured.
Neither the similar numbers of parameters in these models nor the sizes of their training corpora explain the distribution of bias that we observe with respect to variance and overall magnitude. However, \\citet{ethayarajh2019contextual} note the same descending pattern when measuring words' self-similarity, after adjusting for anisotropy (non-uniform directionality), across the CWEs of GPT-2, BERT, and ELMo. (ELMo is compared in three layers due to its architecture.) \\citet{ethayarajh2019contextual} also find that upper layers of contextualizing models produce more context-specific representations. Quantifying how contextualized these dynamic embeddings are supports our finding that high variance in bias magnitude, low overall bias, and low self-similarity correlate. This correlation may explain the results that we are observing. As more recent models learn highly contextualized CWEs in upper layers, the representations in these layers almost overfit to their contexts. Since words appear in numerous contexts, the more contextualized and diverse a word's representation becomes, the less overall bias and the fewer general stereotypical associations it retains.\n\n\n\n\n\n\nWe present and validate a bias detection method generalizable to identifying biases associated with any social group or intersectional group member. We detect and measure biases associated with Mexican American and African American females in SWE and CWE.\nOur emergent intersectional bias measurement results for African American females are in line with previous findings \\citep{may2019measuring,tan2019assessing}.\nIBD and EIBD can detect intersectional biases from SWE with high accuracy in an unsupervised manner by following a lexicon induction strategy \\cite{hatzivassiloglou1997predicting}.
This approach can be complementary to the stimuli lists predefined by social psychologists.\nOur current intersectional bias detection validation approach can be used to identify association thresholds when generalizing this work to the entire word embedding dictionary. Exploring all the potential biases associated with targets is left to future work since it requires extensive human subject validation studies in collaboration with social psychologists. We list all the stimuli representing biased associations in the supplementary materials. For example, the superset of intersectional biases associated with African American females is: aggressive, assertive, athletic, bigbutt, confident, darkskinned, fried-chicken, ghetto, loud, overweight, promiscuous, unfeminine, unintelligent, unrefined. The emergent intersectional biases associated with African American females are: aggressive, assertive, bigbutt, confident, darkskinned, fried-chicken, overweight, promiscuous, unfeminine. The superset of intersectional biases associated with Mexican American females is: attractive, cook, curvy, darkskinned, feisty, hardworker, loud, maids, promiscuous, sexy, short, uneducated, unintelligent. The emergent intersectional biases associated with Mexican American females are: cook, curvy, feisty, maids, promiscuous, sexy.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nWe follow the conventional method of using the most frequent given names in a social group that signal group membership in order to accurately represent targets \\citep{caliskan2017semantics,greenwald1998measuring}.\nOur results indicate that the conventional method, which relies on stimuli selected by experts in social psychology, works accurately. Prior work on lexicon induction methods compensates for the lack of existing annotated data on valence \\cite{hatzivassiloglou1997predicting, riloff2003learning, turney2003measuring}.
Nevertheless, principled and robust lexicon induction methods that can be validated in this domain are needed when measuring the representation accuracy of target group lexica or any semantic concept. Developing these principled methods is left to future work. \n\nThe semantics of languages can be represented by the distributional statistics of word co-occurrences \\cite{firth1957synopsis, harris1954distributional}. Consequently, our methods are language agnostic and can be applied to neural language models as well as word embeddings in any language, as long as the stimuli for accurately representing the semantics of concepts are available. Project Implicit (\\url{https:\/\/implicit.harvard.edu\/implicit}) has been hosting IATs for human subjects all over the world in numerous languages for two decades. As a result, their IATs, which inspired WEATs, provide stimuli for targets and attributes in numerous languages. We leave generalizing our methods to other languages to future work since state-of-the-art neural language models are not widely or freely available for languages other than English as of 2021.\n\n\n\n\n\n When simulating contexts for WEAT, we assume that the Reddit corpus represents naturally occurring sentences. Nevertheless, we acknowledge that the Reddit corpus also reflects the biases of the underlying population that contributed to it. Studying the accuracy of simulating the most common distribution of contexts and co-occurring stimuli is left to future work since we don't have validated ground truth data for evaluating the distribution parameters of contexts in large-scale corpora.
Instead, for evaluation, validation, and comparison, we rely on validated ground truth information about biases documented by \\citet{caliskan2017semantics} in word embeddings, as well as on biases documented by millions of people over decades via the implicit association literature \\cite{nosek2002harvesting} and \\citet{ghavami2013intersectional}'s intersectional biases.\n \n \n\nGiven the energy and funding considerations, we are not able to train these language models on the same large-scale corpora to compare how a neural language model's architecture learns biases, because the training processes for these models are computationally and financially expensive \\cite{bender2021dangers}. The sizes of state-of-the-art models increase by at least a factor of 10 every year: BERT-Large from 2018 has 355 million parameters, GPT-2 from early 2019 reaches 1.5 billion parameters, and GPT-3 from mid-2020 reaches 175 billion parameters. The GPT-2 model used 256 Google Cloud TPU v3 cores for training, which cost 256 US dollars per hour. GPT-2 requires approximately 168 hours, or 1 week, of training on 32 TPU v3 chips \\cite{strubell2019energy}. GPT-3 is estimated to cost $\\sim$12 million US dollars \\cite{floridi2020gpt}, and we are not able to get access to its embeddings or training corpora. Regardless, by measuring the scope of biases with validated bias quantification and meta-analysis methods, we are able to compare the biased associations learned by widely used neural language models. Being able to study neural language models comprehensively is critical since they are replacing SWE in many NLP applications due to their high accuracy in various machine learning tasks.\n \n \n\n\nWe would like to conclude the discussion with our ethical concerns regarding the dual use of IBD and EIBD, which can detect stereotypical associations for an intersectional group or for disadvantaged individuals.
Words retrieved by our methods may be used in the generation of offensive or stereotypical content that perpetuates or amplifies existing biases. For example, information influence operations in the 1970s used \\citet{osgood1964semantic}'s semantic differential technique among human subjects to retrieve the words that would most effectively induce a negative attitude in a South American population towards their administration \\cite{landis1982cia}. Similarly, biased neural language models may be exploited to automate large-scale information influence operations that intend to sow discord among social groups \\citep{toney2020pro, toney2020valnorm}. The biased outputs of these language models, which get recycled into future model generations' training corpora, may lead to an AI bias feedback cycle. \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Conclusion}\n\\label{sec:conclusion}\n\n\nWe introduce methods called IBD and EIBD to identify biases associated with members of multiple minority groups. These methods automatically detect the intersectional biases and emergent intersectional biases captured by word embeddings. Intersectional biases associated with African American and Mexican American females have the highest effect sizes compared to other social biases. Complementing the pre-defined sets of attributes used to measure widely known biases, our methods automatically discover biases.\nIBD reaches detection accuracies of 81.6\\% and 82.7\\%, respectively, when validated on the intersectional biases of African American females and Mexican American females.\nEIBD reaches detection accuracies of 84.7\\% and 65.3\\%, respectively, when validated on the emergent intersectional biases of African American females and Mexican American females.\n\n\nWe present CEAT to measure biases identified by IBD and EIBD in language models.
CEAT uses a random-effects model to comprehensively measure social biases embedded in neural language models that contain a distribution of context-dependent biases. CEAT simulates this distribution by sampling ($N=10,000$) combinations of CWEs without replacement from a large-scale natural language corpus. \nUnlike prior work that focuses on a limited number of contexts defined by templates to measure the magnitude of particular biases, CEAT provides a comprehensive measurement of overall bias in contextualizing language models. Our results indicate that ELMo is the most biased, followed by BERT and GPT; GPT-2 is the least biased language model with respect to the social biases we investigate. The overall magnitude of bias negatively correlates with the level of contextualization in the language model. Understanding how the architecture of a language model contributes to biased and contextualized word representations can help mitigate the harmful effects on society in downstream applications.\n\n\n\n\n\n\\section{Plots}\n\n\n\n\n\n\\section{Stimuli}\nThe stimuli used to represent targets and attributes in CEAT (C1-C10) are taken from Caliskan et al.~\\cite{caliskan2017semantics}.\nWe construct four intersection-related CEATs for African American females and Mexican American females. \n\n\nWhen conducting the intersection-related CEATs,\nwe use the names from Caliskan et al.~\\cite{caliskan2017semantics} and Parada et al.~\\cite{parada2016ethnolinguistic} to represent the target intersectional groups. Caliskan et al.'s WEAT provides the female and male names of African Americans and European Americans from the first Implicit Association Test in 1998 \\cite{greenwald1998measuring}. Parada et al. provide the female and male names of Mexican Americans \\cite{parada2016ethnolinguistic}. To determine and verify the gender of names, we use three gender checkers \\cite{huang2019gender}.
We use a name as a target word in our experiments only if it is categorized as belonging to the same gender by all three checkers. Human subjects provide the validation set of intersectional attributes with ground truth information \\cite{ghavami2013intersectional}. We use this validation set for evaluating the intersection-related CEAT, IBD, and EIBD experiments.\nTo follow the order of stereotype-congruity, we use European American males as the second target group and use the attributes associated with their intersectional biases as the second attribute set in the intersection-related CEATs. There are only three emergent intersectional biases associated with European American males in the validation set, which doesn't provide a sufficient number of stimuli. A small set of stimuli does not satisfy the requirements for generating statistically significant concept representations and WEATs. Related stimuli details are discussed in the dataset and stimuli sections of the main paper. In addition, if the size of the first attribute set is smaller than that of the attributes of European American males, we randomly select an equal number of attributes associated with the intersectional biases of European American males.
WEAT requires equal-sized sets of attributes.\n\n\n\\subsection{CEAT I1}\nWe use the frequent given names of African American females and European American males as two target social groups and use the attributes associated with the intersectional biases of African American females and attributes associated with the intersectional biases of European American males as the two attribute groups.\n\nSince `assertive' is associated with both African American females and European American males, we do not include it in this test.\n\n\\begin{itemize}\n \\item \\textbf{African American females}: Aisha, Keisha, Lakisha, Latisha, Latoya, Malika, Nichelle, Shereen, Tamika, Tanisha, Yolanda, Yvette\n \\item \\textbf{European American males}: Andrew, Brad, Frank, Geoffrey, Jack, Jonathan, Josh, Matthew, Neil, Peter, Roger, Stephen\n \\item \\textbf{Intersectional biases of African American females}: aggressive, athletic, bigbutt, confident, darkskinned, fried-chicken, ghetto, loud, overweight, promiscuous, unfeminine, unintelligent, unrefined\n \\item \\textbf{Intersectional biases of European American males}: all-American, arrogant, attractive, blond, high-status, intelligent, leader, privileged, racist, rich, sexist, successful, tall\n\\end{itemize}\n\n\\subsection{CEAT I2}\nWe use the frequent given names of African American females and European American males as two target groups. 
We use attributes associated with emergent intersectional biases of African American females and attributes associated with intersectional biases of European American males as two attribute groups.\n\n\nSince `assertive' is associated with emergent intersectional bias of African American females and intersectional bias of European American males, we do not include it in this test.\n\n\\begin{itemize}\n \\item \\textbf{African American females}: Aisha, Keisha, Lakisha, Latisha, Latoya, Malika, Nichelle, Shereen, Tamika, Tanisha, Yolanda, Yvette\n \\item \\textbf{European American males}: Andrew, Brad, Frank, Geoffrey, Jack, Jonathan, Josh, Matthew, Neil, Peter, Roger, Stephen\n \\item \\textbf{Emergent intersectional biases of African American females}: aggressive, bigbutt, confident, darkskinned, fried-chicken, overweight, promiscuous, unfeminine\n \\item \\textbf{Intersectional biases of European American males}: arrogant, blond, high-status, intelligent, racist, rich, successful, tall\n\\end{itemize}\n\n\\subsection{CEAT I3}\nWe use the frequent given names of Mexican American females and European American males as the target groups and the words associated with their intersectional biases as the attribute groups.\n\nSince `attractive' is associated with intersectional biases of both Mexican American females and European American males, we do not include it in this test.\n\n\\begin{itemize}\n \\item \\textbf{Mexican American females}: Adriana, Alejandra, Alma, Brenda, Carolina, Iliana, Karina, Liset, Maria, Mayra, Sonia, Yesenia\n \\item \\textbf{European American males}: Andrew, Brad, Frank, Geoffrey, Jack, Jonathan, Josh, Matthew, Neil, Peter, Roger, Stephen\n \\item \\textbf{Intersectional biases of Mexican American females}: cook, curvy, darkskinned, feisty, hardworker, loud, maids, promiscuous, sexy, short, uneducated, unintelligent\n \\item \\textbf{Intersectional biases of European American males}: all-American, arrogant, blond, high-status, intelligent, 
leader, privileged, racist, rich, sexist, successful, tall\n\\end{itemize}\n\n\\subsection{CEAT I4}\nWe use the frequent given names of Mexican American females and European American males as target groups. We use words associated with the emergent intersectional biases of Mexican American females and words associated with the intersectional biases of European American males as the two attribute groups.\n\n\\begin{itemize}\n \\item \\textbf{Mexican American females}: Adriana, Alejandra, Alma, Brenda, Carolina, Iliana, Karina, Liset, Maria, Mayra, Sonia, Yesenia\n \\item \\textbf{European American males}: Andrew, Brad, Frank, Geoffrey, Jack, Jonathan, Josh, Matthew, Neil, Peter, Roger, Stephen\n \\item \\textbf{Emergent intersectional biases of Mexican American females}: cook, curvy, feisty, maids, promiscuous, sexy\n \\item \\textbf{Intersectional biases of European American males}: arrogant, assertive, intelligent, rich, successful, tall\n\\end{itemize}\n\n\\subsection{IBD and EIBD}\nWe detect the attributes associated with the intersectional biases and emergent intersectional biases of African American females and Mexican American females in GloVe SWE. We assume that there are three subcategories under the race category (African American, Mexican American, European American) and two subcategories under the gender category (female, male). We use the frequent given names to represent each intersectional group. Again, we note that, in future work we'd generalize this work to $n$ subcategories under each category. 
Further, in future work, instead of categorizing people into social groups, we'd like to explore representing individuals in social data with continuous real-valued variables as opposed to associating them with category labels.\n\n\\begin{itemize}\n \\item \\textbf{African American females}: Aisha, Keisha, Lakisha, Latisha, Latoya, Malika, Nichelle, Shereen, Tamika, Tanisha, Yolanda, Yvette\n \\item \\textbf{African American males}: Alonzo, Alphonse, Hakim, Jamal, Jamel, Jerome, Leroy, Lionel, Marcellus, Terrence, Tyrone, Wardell\n \\item \\textbf{European American females}: Carrie, Colleen, Ellen, Emily, Heather, Katie, Megan, Melanie, Nancy, Rachel, Sarah,\\\\Stephanie\n \\item \\textbf{European American males}: Andrew, Brad, Frank, Geoffrey, Jack, Jonathan, Josh, Matthew, Neil, Peter, Roger, Stephen\n \\item \\textbf{Mexican American females}: Adriana, Alejandra, Alma, Brenda, Carolina, Iliana, Karina, Liset, Maria, Mayra, Sonia, Yesenia\n \\item \\textbf{Mexican American males}: Alberto, Alejandro, Alfredo, Antonio, C\u00e9sar, Jes\u00fas, Jos\u00e9, Juan, Miguel, Pedro, Rigoberto, Rogelio\n\n\\end{itemize}\n\n\nWe list all the attributes used in validation experiments. These are stimuli associated with different social groups and a set of random words that are not expected to be associated with social groups. These random attributes are borrowed from the insect target category of the `universally accepted stereotypes' IAT, which is a baseline WEAT. We use 98 words in total in the validation experiments. \n\n\nWe also list the probability of random chance of correct classification in parentheses next to each social group. The goal here is to present the success rate of the machine learning detection method in a meaningful way. 
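As an illustration, the random-chance percentages listed in this section are consistent with dividing each group's attribute count by the 98 validation stimuli in total. The helper below is our reconstruction of that calculation (the name `random_chance_percent` is ours, not from the released code):

```python
# Illustrative reconstruction (our naming, not the released implementation):
# the chance that a uniformly random guess assigns an attribute to a given
# social group in a one-vs-all verification over all 98 validation stimuli.
TOTAL_STIMULI = 98

def random_chance_percent(group_size, total=TOTAL_STIMULI):
    """Random chance of correct classification, in percent (one decimal)."""
    return round(100.0 * group_size / total, 1)
```

For instance, a 13-attribute group yields 13.3% and the 25 insect words yield 25.5%, matching the values reported in this section.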
To calculate the probability of random chance of correct classification, we treat the detection task as if it were a one-vs-all verification task, which is one type of classification task that aims to verify the membership of a data point in the main class. All the other data points from the open world, which don't belong to the main class we are testing for, can belong to any other class. However, in reality, our detection method is a semi-supervised non-parametric prediction method that relies on the threshold $t$.\nSome of the attributes of some social groups overlap. For example, `assertive' is associated with the emergent intersectional biases of African American females and also with the emergent intersectional biases of European American males. As a result, the sum of all random chances of different groups is not 100\\%.\n\\begin{itemize}\n \\item \\textbf{Biases of Females}: attractive, caring, dependent, emotional, feminine, jealous, manipulative, materialistic, motherly, petite, soft, submissive, talkative (random chance of correct classification: 13.3\\%)\n \\item \\textbf{Biases of Males}: aggressive, ambitious, arrogant, fixer-upper, high-status, intelligent, leader, messy, provider, respected, sexist, tall, unfaithful (random chance of correct classification: 13.3\\%)\n \n \\item \\textbf{Biases of African Americans}: athletic, criminals, dangerous, gangsters, ghetto, lazy, loud, poor, tall, uneducated, unrefined, violent (random chance of correct classification: 12.2\\%)\n \\item \\textbf{Emergent Intersectional Biases of African American Females}: aggressive, assertive, bigbutt, confident, \\\\darkskinned, fried-chicken, overweight, promiscuous, unfeminine (random chance of correct classification: 9.2\\%)\n \\item \\textbf{Intersectional Biases of African American Females}: aggressive, assertive, athletic, bigbutt, confident, darkskinned, fried-chicken, ghetto, loud, overweight, promiscuous, unfeminine, unintelligent, unrefined (random
chance of correct classification: 14.3\\%)\n \\item \\textbf{Emergent Intersectional Biases of African American Males}: darkskinned, hypersexual, rapper (random chance of correct classification: 3.1\\%)\n \\item \\textbf{Intersectional Biases of African American Males}: athletic, criminals, dangerous, darkskinned, gangsters, hypersexual, lazy, loud, poor, rapper, tall, unintelligent, violent (random chance of correct classification: 13.3\\%)\n \n \n \\item \\textbf{Biases of European Americans}: all-American, arrogant, attractive, blond, blue-eyes, high-status, ignorant, intelligent, overweight, patronizing, privileged, racist, red-neck, rich, tall (random chance of correct classification: 15.3\\%)\n \\item \\textbf{Emergent Intersectional Biases of European American Females}: ditsy (random chance of correct classification: 1.0\\%)\n \\item \\textbf{Intersectional Biases of European American Females}: arrogant, attractive, blond, ditsy, emotional, feminine, high-status, intelligent, materialistic, petite, racist, rich, submissive, tall (random chance of correct classification: 14.3\\%)\n \\item \\textbf{Emergent Intersectional Biases of European American Males}: assertive, educated, successful (random chance of correct classification: 3.1\\%)\n \\item \\textbf{Intersectional Biases of European American Males}: all-American, arrogant, assertive, attractive, blond, educated, high-status, intelligent, leader, privileged, racist, rich, sexist, successful, tall (random chance of correct classification: 15.3\\%)\n \n\n\n \\item \\textbf{Biases of Mexican Americans}: darkskinned, day-laborer, family-oriented, gangster, hardworker, illegal-immigrant, lazy, loud, macho, overweight, poor, short, uneducated, unintelligent (random chance of correct classification: 14.3\\%)\n \\item \\textbf{Emergent Intersectional Biases of Mexican American Females}: cook, curvy, feisty, maids, promiscuous, sexy (random chance of correct classification: 6.1\\%)\n \\item \\textbf{Intersectional 
Biases of Mexican American Females}: attractive, cook, curvy, darkskinned, feisty, hardworker, loud, maids, promiscuous, sexy, short, uneducated, unintelligent (random chance of correct classification: 13.3\\%)\n \\item \\textbf{Emergent Intersectional Biases of Mexican American Males}: drunks, jealous, promiscuous, violent (random chance of correct classification: 4.1\\%)\n \\item \\textbf{Intersectional Biases of Mexican American Males}: aggressive, arrogant, darkskinned, day-laborer, drunks, hardworker, illegal-immigrant, jealous, macho, poor, promiscuous, short, uneducated, unintelligent, violent (random chance of correct classification: 15.3\\%)\n \n \\item \\textbf{Random (Insects)}: ant, bedbug, bee, beetle, blackfly, caterpillar, centipede, cockroach, cricket, dragonfly, flea, fly, gnat, hornet, horsefly, locust, maggot, mosquito, moth, roach, spider, tarantula, termite, wasp, weevil (random chance of correct classification: 25.5\\%)\n\\end{itemize}\n\n\n\n\\section{Open Source Code, Data, and Documentation}\n\\url{https:\/\/github.com\/weiguowilliam\/CEAT} is the link to our open source git repository. Code and links to datasets are available in the project repository. In addition, answers to frequently asked questions about the details of extracting the contextualized word embeddings are documented. The extracted embeddings for the stimuli take up approximately 50\\,GB of memory. \n\n\n\n\\subsection{Meta-Analysis Details for CEAT}\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n \n \nIn this section, we first construct all CEATs in the main paper (C1-C10, I1-I4) with sample size $N=1,000$ to provide a comparison of results with different sample sizes. We report the CES $d$ and combined $p$-value $p$ in Table~\\ref{table:supp-main}. We replicate these results with $N=1,000$ instead of using the original $N=10,000$ to show that even with $N=1,000$, we get valid results.
Accordingly, we proceed to calculate all types of biases associated with intersectional groups based on the attributes used in the original WEAT. \nWe notice that there are five tests that are significant with sample size $N=10,000$ but insignificant with sample size $N=1,000$: C10 with BERT, C4 with GPT, C7 with GPT-2, I3 with GPT-2, and I4 with GPT-2. We also notice that the CES of the same test can differ with different sample sizes, but all differences are smaller than $0.1$.\n\n\n\\begin{table*}[t]\n\\caption{\\textbf{CEAT from the main paper (C1-C10, I1-I4) with sample size $N=1,000$ as opposed to the $N=10,000$ hyper-parameter in the main paper.} We report the CES ($d$) and combined $p$-values ($p$) of all CEATs in the main paper with sample size $N=1,000$. We observe that all of the results are consistent with the CES and $p$-values reported in Table 1 of the main paper. Light, medium, and dark gray shading of combined $d$ values (CES) indicates small, medium, and large effect size, respectively. There are five tests that are significant with sample size $N=10,000$ but not significant with sample size $N=1,000$. However, these have small effect sizes, and as a result we don't expect statistical significance. According to our experiments, the Spearman correlation between WEAT's effect size and $p$-value is $\\rho=0.99$. Smaller effect sizes are expected to have insignificant $p$-values. Accordingly, all of the results under $N=1,000$ are consistent with the main findings. The notable yet consistent differences are C10 with BERT, C4 with GPT, C7 with GPT-2, I3 with GPT-2, and I4 with GPT-2. The CES varies minimally with different sample sizes ($N$), but the differences in the results are smaller than $0.1$, suggesting the degree of effect size remains consistent. In edge cases, where statistical significance or effect size is close to a significance threshold, gradually increasing $N$ in increments of $500$ would provide more reliable results. 
$A\\_$ stands for African Americans. $E\\_$ stands for European Americans. $M\\_$ stands for Mexican Americans. $\\_F$ stands for females. $\\_M$ stands for males.\\\\}\n\\label{table:supp-main}\n \\resizebox{\\textwidth}{!}{%\n\\begin{tabular}{@{}lcccccccc@{}}\n\\toprule\n\\textbf{Test} &\n \\multicolumn{2}{c}{\\textbf{ELMo}} &\n \\multicolumn{2}{c}{\\textbf{BERT}} &\n \\multicolumn{2}{c}{\\textbf{GPT}} &\n \\multicolumn{2}{c}{\\textbf{GPT-2}} \\\\ \\cmidrule(l){2-9} \n & $d$ & $p$ & $d$ & $p$ & $d$ & $p$ & $d$ & $p$ \\\\ \\midrule\nC1: Flowers\/Insects, P\/U$^{\\ast}$ - Attitude & \\cellcolor{darkgray}1.39 & $<10^{-30}$ & \\cellcolor{darkgray}0.96 & $<10^{-30}$ & \\cellcolor{darkgray}1.05 & $<10^{-30}$ & 0.13 & $<10^{-30}$ \\\\\nC2: Instruments\/Weapons, P\/U$^{\\ast}$ - Attitude & \\cellcolor{darkgray}1.56 & $<10^{-30}$ & \\cellcolor{darkgray}0.93 & $<10^{-30}$ & \\cellcolor{darkgray}1.13 & $<10^{-30}$ & \\cellcolor{lightgray}-0.28 & $<10^{-30}$ \\\\\nC3: EA\/AA names, P\/U$^{\\ast}$ - Attitude & \\cellcolor{lightgray}0.48 & $<10^{-30}$ &\\cellcolor{lightgray} 0.45 & $<10^{-30}$ & -0.11 & $<10^{-30}$ & \\cellcolor{lightgray}-0.20 & $<10^{-30}$ \\\\\nC4: EA\/AA names, P\/U$^{\\ast}$ - Attitude & 0.16 & $<10^{-30}$ & \\cellcolor{lightgray}0.49 & $<10^{-30}$ & 0.00 & 0.70 & \\cellcolor{lightgray}-0.23 & $<10^{-30}$ \\\\\nC5: EA\/AA names, P\/U$^{\\ast}$ - Attitude & 0.12 & $<10^{-30}$ & 0.04 & $<10^{-2}$ & 0.05 & $<10^{-4}$ & -0.17 & $<10^{-30}$ \\\\\nC6: Males\/Female names, Career\/Family & \\cellcolor{darkgray}1.28 & $<10^{-30}$ & \\cellcolor{darkgray}0.91 & $<10^{-30}$ & \\cellcolor{lightgray}0.21 & $<10^{-30}$ & \\cellcolor{lightgray}0.34 & $<10^{-30}$ \\\\\nC7: Math\/Arts, Male\/Female terms & \\cellcolor{mediumgray}0.65 & $<10^{-30}$ & \\cellcolor{lightgray}0.42 & $<10^{-30}$ & \\cellcolor{lightgray}0.23 & $<10^{-30}$ & 0.00 & 0.81 \\\\\nC8: Science\/Arts, Male\/Female terms & \\cellcolor{lightgray}0.32 & $<10^{-30}$ & -0.07 & $<10^{-4}$ & 
\\cellcolor{lightgray}0.26 & $<10^{-30}$ & -0.16 & $<10^{-30}$ \\\\\nC9: Mental\/Physical disease, Temporary\/Permanent & \\cellcolor{darkgray}0.99 & $<10^{-30}$ & \\cellcolor{mediumgray}0.55 & $<10^{-30}$ & 0.07 & $<10^{-2}$ & 0.04 & 0.04 \\\\\nC10: Young\/Old people's names, P\/U$^{\\ast}$ - Attitude & 0.11 & $<10^{-19}$ & 0.00 & 0.90 & 0.04 & $<10^{-2}$ & -0.17 & $<10^{-30}$ \\\\\nI1: AF\/EM, AF\/EM intersectional & \\cellcolor{darkgray}1.24 & $<10^{-30}$ & \\cellcolor{mediumgray}0.76 & $<10^{-30}$ & 0.05 & $<10^{-3}$ & 0.05 & 0.06 \\\\\nI2: AF\/EM, AF emergent\/EM intersectional & \\cellcolor{darkgray}1.24 & $<10^{-30}$ & \\cellcolor{mediumgray}0.70 & $<10^{-30}$ & -0.12 & $<10^{-30}$ & 0.03 & 0.26 \\\\\nI3: MF\/EM, MF\/EM intersectional & \\cellcolor{darkgray}1.30 & $<10^{-30}$ & \\cellcolor{mediumgray}0.69 & $<10^{-30}$ & -0.08 & $<10^{-30}$ & \\cellcolor{lightgray}0.36 & $<10^{-30}$ \\\\\nI4: MF\/EM, MF emergent\/EM intersectional &\n \\cellcolor{darkgray}1.52 &\n $<10^{-30}$ &\n \\cellcolor{darkgray}0.87 &\n $<10^{-30}$ &\n 0.14 &\n $<10^{-27}$ &\n \\cellcolor{lightgray}-0.26 &\n $<10^{-30}$ \\\\ \\bottomrule\n \\multicolumn{9}{c}{$^{\\ast}$Unpleasant and pleasant attributes used to measure valence and attitudes towards targets \\cite{greenwald1998measuring}.}\n\\end{tabular}}\n\\end{table*}\n\n\nWe also construct four types of supplementary CEAT for all pairwise combinations of six intersectional groups: African American females (AF), African American males (AM), Mexican American females (MF), Mexican American males (MM), European American females (EF), European American males (EM). We use two intersectional groups as two target social groups. 
For each pairwise combination, we build four CEAT : first, measure attitudes with words representing pleasantness and unpleasantness as two attribute groups (as in C1); second, measure career and family associations that are particularly important in gender stereotypes with the corresponding two attribute groups (as in C6); third, similar to the career-family stereotypes for gender, measure math and arts associations that are particularly important in gender stereotypes with the corresponding two attribute groups (as in C7); fourth, similar to the math-arts stereotypes for gender, measure science (STEM) and arts associations that are particularly important in gender stereotypes with the corresponding two attribute groups (as in C8). We report the CES ($d$) and combined $p-values$ ($p$) in Table 2 with sample size $N=1,000$. All of these attributes are from the C1, C6, C7 and C8 WEAT of Caliskan et al. \\cite{caliskan2017semantics}.\n\n\\input{supp\/longtable}\n\\input{supp\/longtable_2}\n\n\n\n\\subsection{Formal Definition of WEAT}\nWe present a formal definition of \\citet{caliskan2017semantics}'s WEAT. Let $X$ and $Y$ be two sets of target words of equal size, and $A$, $B$ be two sets of attribute words. Let $cos(\\vec{a},\\vec{b})$ stand for the cosine similarity between the embeddings of words $a$ and $b$. Here, the vector $\\vec{a}$ is the embedding for word $a$. The test statistic is \n\\[ s(X,Y,A,B) = \\sum_{x\\in X}{s(x,A,B)} - \\sum_{y\\in Y}{s(y,A,B)} \\]\nwhere \n\\[ s(w,A,B) = mean_{a \\in A}cos(\\vec{w}, \\vec{a})-mean_{b \\in B}cos(\\vec{w}, \\vec{b}) \\]\n\nA permutation test calculates the statistical significance of association $s(X,Y,A,B)$. The one-sided $p-value$ is \n\\[ P = Pr_{i} [s(X_{i},Y_{i},A,B)>s(X,Y,A,B))] \\]\nwhere $\\{(X_i,Y_i)\\}_{i}$ represents all the partitions of $X\\cup Y$ in two sets of equal size. 
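The test statistic and its exact permutation $p$-value defined above (together with the standardized effect size given at the end of this subsection) can be sketched in a few lines of code; the toy embedding table `emb` in the usage example below is an illustrative assumption, not the stimuli or embeddings used in the paper.

```python
import numpy as np
from itertools import combinations

def cos_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def s_word(w, A, B, emb):
    # s(w, A, B): mean cosine similarity to attributes A minus mean to attributes B
    return (np.mean([cos_sim(emb[w], emb[a]) for a in A])
            - np.mean([cos_sim(emb[w], emb[b]) for b in B]))

def weat(X, Y, A, B, emb):
    # test statistic s(X, Y, A, B)
    stat = (sum(s_word(x, A, B, emb) for x in X)
            - sum(s_word(y, A, B, emb) for y in Y))
    # exact one-sided permutation test over all equal-size partitions of X U Y
    pool = X + Y
    null = []
    for Xi in combinations(pool, len(X)):
        Yi = [w for w in pool if w not in Xi]
        null.append(sum(s_word(w, A, B, emb) for w in Xi)
                    - sum(s_word(w, A, B, emb) for w in Yi))
    p = float(np.mean([v > stat for v in null]))
    # standardized effect size (Cohen's-d-like)
    all_s = [s_word(w, A, B, emb) for w in pool]
    es = ((np.mean([s_word(x, A, B, emb) for x in X])
           - np.mean([s_word(y, A, B, emb) for y in Y]))
          / np.std(all_s, ddof=1))
    return stat, es, p
```

With target words embedded near the first attribute direction and the second, respectively, the statistic and effect size come out positive and the permutation $p$-value small, as expected.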
Random permutations of these stimuli sets represent the null hypothesis as if the biased associations did not exist so that we can perform a statistical significance test by measuring the unlikelihood of the null hypothesis, given the effect size of WEAT.\n\nThe effect size of bias is calculated as \n\\[ ES = \\frac{mean_{x \\in X}s(x,A,B)-mean_{y \\in Y}s(y,A,B)}{std\\_dev_{w \\in X\\bigcup Y}s(w,A,B)} \\]\n\n\\subsection{Formal Definition of EIBD}\nWe first detect $C_{11}$'s intersectional biases $W_{IB}$ with IBD.\nThen, we detect the biased attributes associated with only one constituent category of the intersectional group $C_{11}$ (e.g., associated only with race $S_{1n}$ - or only with gender $S_{m1}$). Each intersectional category $C_{1n}$ has M constituent subcategories $S_{in},i=1,...M$ and category $C_{m1}$ has N constituent subcategories $S_{mj},j=1,...,N$.\n$S_{1n}$ and $S_{m1}$ are the constituent subcategories of intersectional group $C_{11}$.\n\nThere are in total $M+N$ groups defined by all the single constituent subcategories. We use all $M+N$ groups to build WEFAT pairs $P_i = (S_{1n},S_{in}),i=1,...,M$ and $P_j=(S_{m1},S_{mj}),j=1,...N$. Then, we detect lists of words associated with each pair $W_i,i=1,...M$ and $W_j,j=1,...,N$ based on the same positive threshold $t_{mn}$ used in IBD. We detect the attributes highly associated with the constituent subcategories $S_{1n}$ and $S_{m1}$ of the target intersectional group $C_{11}$ from all $(M+N)$ WEFAT pairs. 
We define the words associated with emergent intersectional biases of group $C_{11}$ as $W_{EIB}$ and these words are identified by the formula\n\\vspace{-3mm}\n\\[ W_{EIB} = (\\bigcup_{i=1}^{M} (W_{IB}-W_{i}))\n\\bigcup (\\bigcup_{j=1}^{N} (W_{IB}-W_{j})) \\]\n\\noindent where \n\\vspace{-6mm}\n\\[ W_i = \\{w|s(w,S_{1n},S_{in})>t_{mn}, w \\in W_{IB}\\}\\] \n\n\\noindent and \n\\vspace{-6mm}\n\\[ W_j= \\{w|s(w,S_{m1},S_{mj})>t_{mn}, w \\in W_{IB}\\}\\]\n\n\n\\subsection{Random-Effects Model Details}\nEach effect size is calculated by \n\\[ ES_{i} = \\frac{mean_{x \\in X}s(x,A,B)-mean_{y \\in Y}s(y,A,B)}{std\\_dev_{w \\in X\\bigcup Y}s(w,A,B)} \\]\n\n The estimation of in-sample variance is $V_{i}$, which is the square of $std\\_dev_{w \\in X\\bigcup Y}s(w,A,B)$. \n We use the same principle as the estimation of variance components in ANOVA to measure the between-sample variance $\\sigma^{2}_{between}$, which is calculated as:\n\\[\\sigma^{2}_{between}=\\left\\{\n\\begin{aligned}\n &\\frac{Q-(N-1)}{c} & \\mathrm{if}\\hspace{2mm} Q \\geq N-1\\\\\n &0 & \\mathrm{if}\\hspace{2mm} Q < N-1\n\\end{aligned}\n\\right.\n\\]\nwhere $W_{i}$ here denotes the inverse-variance weight (not one of the word sets above), \n\\vspace{-3mm}\n\\[\nW_{i} = \\frac{1}{V_{i}}\n\\]\n\n\\vspace{-3mm}\n\n\\[c = \\sum W_{i} - \\frac{\\sum W_{i}^{2}}{\\sum W_{i}} \\hspace{2mm} \\& \\hspace{2mm} Q = \\sum W_{i} ES_{i}^{2} - \\frac{(\\sum W_{i}ES_{i})^2}{\\sum W_{i}} \\]\n\n\nThe weight $v_{i}$ assigned to each WEAT is the inverse of the sum of the estimated in-sample variance $V_{i}$ and the estimated between-sample variance in the distribution of random effects $\\sigma^{2}_{between}$.\n\\[\nv_{i} = \\frac{1}{V_{i} + \\sigma^{2}_{between}}\n\\]\n\nCES, which is the sum of the weighted effect sizes divided by the sum of all weights, is then computed as\n\\[\nCES = \\frac{\\sum_{i=1}^{N}v_{i}ES_{i}}{\\sum_{i=1}^{N}v_{i}}\n\\]\n\nTo derive the hypothesis test, we calculate the standard error (SE) of CES as the square root of the inverse of the sum of the weights.\n\\[\nSE(CES) = 
\\sqrt{\\frac{1}{\\sum_{i=1}^{N}v_{i}}}\n\\]\nBased on the central limit theorem, the limiting form of the distribution of $\\frac{CES}{SE(CES)}$ is the standard normal distribution \\cite{montgomery2010applied}.\nSince we notice that some CES are negative, we use a two-tailed $p-value$ which can test the significance of biased associations in two directions.\nThe two-tailed $p-value$ of the hypothesis that there is no difference between all the contextualized variations of the two sets of target words in terms of their relative similarity to two sets of attribute words is given by the following formula,\n where $\\Phi$ is the standard normal cumulative distribution function and $SE$ stands for the standard error.\n \\[ P_{combined}(X,Y,A,B) = 2 \\times [1 - \\Phi ( | \\frac{CES}{SE(CES)} | ) ] \\]\n\n\\section{Data}\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzaojp b/data_all_eng_slimpj/shuffled/split2/finalzzaojp new file mode 100644 index 0000000000000000000000000000000000000000..d308798deaad54bd0f4984f54c6bfbab9f39f19d --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzaojp @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nDynamical chiral symmetry breaking and its partial \nrestoration in finite density systems is one of the important subjects of \nhadron physics. Recently, spectroscopy of \ndeeply bound pionic atom of Sn~\\cite{Suzuki:2002ae} and \nlow-energy pion-nucleus scattering~\\cite{Friedman:2004jh}, \nwith helps of theoretical analyses~\\cite{Kolomeitsev:2002gc}, \nhave suggested that the partial restoration does take place in nuclei \nwith order of 30\\% reduction of the quark condensate. 
\nThe reduction of the quark condensate in the nuclear medium also leads to\nvarious phenomena, for instance, \nan attractive enhancement of the scalar-isoscalar $\\pi\\pi$ \ncorrelation in nuclei\nand\nthe suppression of the mass difference between chiral partners.\nA mass reduction of the $\\eta^{\\prime}$ meson\nis also induced by partial restoration of chiral symmetry~\\cite{Jido:2011pq}.\nExperimental observations of these phenomena, such as\nthe reduction of the $N$-$N(1535)$\nmass difference in $\\eta$ mesonic \nnuclei formation~\\cite{Jido:2002yb},\ncan provide further confirmation \nof partial restoration of chiral symmetry in\nnuclei.\n\n\n\\section{$\\eta^{\\prime}$ mass under chiral symmetry restoration}\n\nExperimentally, a strong mass reduction of the $\\eta'$ ($\\gtrsim 200$ MeV)\nhas been reported in Ref.~\\cite{Csorgo:2009pa} at RHIC. On the other\nhand, a small scattering length ($\\sim 0.1$ fm) has been suggested in\nRef.~\\cite{Moskal:2000pu}, which indicates a small mass reduction of around 10\nMeV at normal saturation density in the linear density approximation. \nThe transparency ratio of the $\\eta^{\\prime}$ meson in nuclei\nhas suggested that the absorption \nwidth of the $\\eta^{\\prime}$ \nmeson in nuclei is around 30 MeV~\\cite{NanovaTalk}. \nTheoretically, NJL model calculations suggested a mass reduction of around 200 MeV \nat the saturation \ndensity~\\cite{Costa:2002gk,Nagahiro:2006dr}. In the instanton \npicture, the rapid decrease of instanton effects in finite-energy-density \nhadronic matter induces a reduction of the $\\eta^{\\prime}$\nmass~\\cite{Kapusta:1995ww}.\nAn effective model which is consistent with the $\\eta' p$ scattering\nlength data~\\cite{Moskal:2000pu} was also proposed\nrecently~\\cite{Oset:2010ub}. 
\n\nThe basic idea of the present work is that, if density dependence of\nthe U(1)$_{A}$ anomaly is moderate, a relatively\nlarge mass reduction of the $\\eta^{\\prime}$ meson is expected \nat nuclear density due to the partial restoration of chiral symmetry~\\cite{Jido:2011pq}.\nThis is based on the following symmetry argument. \nBoth the flavor single and octet pseudoscalar mesons composed of \na $\\bar q$-$q$ pair belong to the same \n$(\\bf{3},\\bf{\\bar 3})\\oplus (\\bf{\\bar 3},\\bf{3})$ \nchiral multiplet of the SU(3)$_{L}\\otimes$SU(3)$_{R}$ group. Therefore,\nwhen the SU(3)$_{L}\\otimes$SU(3)$_{R}$ chiral symmetry is manifest,\nthe flavor singlet and octet mesons should degenerate,\nno matter how the U(1)$_{A}$ anomaly effect depends on the density.\nIn other words, the chiral singlet gluonic current, which makes the \n$\\eta^{\\prime}$ mass lift up, cannot couple to the chiral pseudoscalar state \nwithout breaking chiral symmetry.\nHence, the $\\eta$ and $\\eta^{\\prime}$ mass splitting can take place \nonly with (dynamical and\/or explicit) chiral symmetry breaking, meaning that \nthe U(1)$_{A}$ anomaly effect does push the $\\eta^{\\prime}$ mass up \nbut necessarily with the chiral symmetry breaking.\nIn this way the mass splitting of the $\\eta$-$\\eta^{\\prime}$ mesons is a \nconsequence of the interplay of the U(1)$_{A}$ anomaly effect and the \nchiral symmetry breaking. \nAssuming 30\\% reduction of the quark condensate in nuclear medium, for instance,\nand that the mass difference of $\\eta$ and $\\eta^{\\prime}$ comes \nfrom the quark condensate linearly, one could expect an order of 100 MeV \nattraction for the $\\eta^{\\prime}$ meson coming from partial restoration \nof chiral symmetry in nuclear medium. \n\nThe present mechanism of the $\\eta^{\\prime}$ mass reduction in finite \ndensity has a unique feature. 
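As a quick numerical check of the estimate above (a sketch assuming the standard vacuum masses $m_\eta \approx 548$ MeV and $m_{\eta'} \approx 958$ MeV, and the linear scaling of the $\eta$-$\eta'$ splitting with the condensate stated in the text):

```python
# Vacuum eta and eta' masses in MeV (standard PDG values); the linear scaling
# of the eta-eta' splitting with the quark condensate is the assumption above.
m_eta, m_eta_prime = 547.9, 957.8
condensate_reduction = 0.30  # assumed 30% in-medium reduction

attraction = condensate_reduction * (m_eta_prime - m_eta)
print(f"expected eta' attraction: ~{attraction:.0f} MeV")  # ~123 MeV, i.e. order 100 MeV
```

This reproduces the order-of-100-MeV attraction quoted in the text.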
\nAlthough some many-body effects introduce an absorptive potential\nfor the $\\eta^{\\prime}$ meson in medium, \nthe mass reduction mechanism does not involve hadronic intermediate \nstates and, thus, the attraction does not come with an additional imaginary part. \nFurthermore, in the present case, since the suppression of the U(1)$_{A}$ \nanomaly effect in the nuclear medium induces the attractive interaction, \nthe influence acts selectively on the $\\eta^{\\prime}$ meson and, thus, \nit does not induce inelastic transitions of the $\\eta^{\\prime}$ meson into \nlighter mesons in the nuclear medium. \nConsequently, \nthe $\\eta^{\\prime}$ meson bound state may have a width smaller\nthan its binding energy. \n\n\n\\section{Formation spectrum of the $\\eta^{\\prime}$ mesonic nuclei}\n\nNow we discuss the $\\eta^{\\prime}$ bound states in a nucleus \nbased on the above observation and show the expected spectra\nof the $\\eta^{\\prime}$ mesonic nucleus formation in a \n$^{12}$C($\\pi^{+},p)^{11}$C$\\otimes\\eta^{\\prime}$ \nreaction~\\cite{Jido:2011pq,Nagahiro:2010zz}. \nWe perform a simple estimation of the $\\eta^{\\prime}$ \nbound states and, thus, assume a phenomenological optical potential of \nthe $\\eta^{\\prime}$ meson in nuclei as\n$\n V_{\\eta^{\\prime}}(r) = V_{0} \\rho(r)\/\\rho_{0}, \n$ \nwith the Woods-Saxon type density distribution $\\rho(r)$ for the nucleus and \nthe saturation density $\\rho_0=0.17$ fm$^{-3}$. \nThe depth of the attractive potential is of the order of 100 MeV at the normal nuclear \ndensity, as discussed above, and the absorption width is\nexpected to be less than 40 MeV~\\cite{NanovaTalk}, which\ncorresponds to a 20 MeV imaginary part of the optical potential. 
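A minimal sketch of this phenomenological optical potential follows. The Woods-Saxon radius $R = r_0 A^{1/3}$ with $r_0 = 1.2$ fm and diffuseness $a = 0.5$ fm are typical assumed values not specified in the text, and the central density is simply normalized to $\rho_0$:

```python
import numpy as np

RHO0 = 0.17            # saturation density [fm^-3]
R0, DIFF = 1.2, 0.5    # assumed Woods-Saxon radius parameter and diffuseness [fm]

def rho(r, A):
    """Woods-Saxon density profile, central density normalized to RHO0 [fm^-3]."""
    R = R0 * A ** (1.0 / 3.0)
    return RHO0 / (1.0 + np.exp((r - R) / DIFF))

def v_opt(r, A, v0=-(100.0 + 20.0j)):
    """Optical potential V(r) = V0 * rho(r)/rho0 [MeV], with V0 = -(100 + 20i) MeV."""
    return v0 * rho(r, A) / RHO0
```

For the $^{11}$C core ($A=11$) this gives $V(0) \approx -(100+20i)$ MeV at the center, falling off over the nuclear surface.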
\nThe formation spectrum is calculated in the approach developed \nin Refs.~\\cite{Jido:2002yb,Nagahiro:2005gf}\nusing the impulse approximation and the Green's function method.\n\n\\begin{figure}\n \\includegraphics[width=0.95\\linewidth]{green_fig.eps}\n\\caption{{Calculated spectra of the\n $^{12}$C($\\pi^+,p)^{11}$C$\\otimes\\eta'$ reaction at $p_\\pi=1.8$ GeV as functions\n of the excitation energy $E_{\\rm ex}$ with (a) $V_0=-(0+20i)$ MeV, (b)\n $V_0=-(100+20i)$ MeV, and (c) $V_0=-(150+20i)$ MeV. The thick solid lines\n show the total spectra, and the dominant subcomponents are labeled\n by the neutron-hole state $(n\\ell_j)_n^{-1}$ and the $\\eta'$ state $\\ell_{\\eta'}$.\n}}\n\\label{fig:spec}\n\\end{figure}\n\n\nIn Fig.~\\ref{fig:spec}, we show the\ncalculated $^{12}$C$(\\pi^+,p)^{11}$C$\\otimes\\eta'$ cross\nsections for three different potential parameters. \nIn the figure, the vertical line \nindicates \nthe $\\eta^{\\prime}$ production threshold in vacuum. \nIn the case of no attractive potential, there is no structure in the \n$\\eta^{\\prime}$-binding region, only a bump in the quasi-free region. \nSince the peaks in the $\\eta^{\\prime}$-binding region\nare prominent enough to be observed in future experiments, we conclude that \nwith a mass reduction of the order of 100 MeV and a 40 MeV absorption width \nat the saturation density we have a chance to observe \nthe $\\eta^{\\prime}$-nucleus bound states in the $^{12}$C$(\\pi^{+},p)$ reaction.\nWe also see clear peaks around the $\\eta^{\\prime}$ production threshold,\nfor instance $(0p_{3\/2})_{n}^{-1}\\otimes d_{\\eta^{\\prime}}$ in plot (b)\nand $(0p_{3\/2})_{n}^{-1}\\otimes f_{\\eta^{\\prime}}$ in plot (c). These are \nnot signals of bound states; \nrather, they are \nremnants of the bound states that would be formed if the attraction \nwere stronger. Therefore, such peak structures can also be \nsignals of the strong attractive potential. 
\n\n\\section{Conclusion}\nWe point out that partial restoration of chiral symmetry in a nuclear medium \ninduces a suppression of the U(1)$_{A}$ anomaly effect on the $\\eta^{\\prime}$ mass.\nConsequently, we expect a large mass reduction of the $\\eta^{\\prime}$ meson \nin nuclear matter with a relatively small absorption width. The mass reduction \ncould be observed as $\\eta^{\\prime}$-nucleus bound states in the formation reactions. \nThe interplay between the chiral symmetry restoration \nand the U(1)$_{A}$ anomaly effect can be a clue \nto understanding the $\\eta^{\\prime}$ mass generation mechanism. Therefore,\nexperimental observation of deeply bound $\\eta^{\\prime}$-nucleus states, or \neven confirmation of the nonexistence of such deeply bound states,\nis important to solve the U(1)$_{A}$ problem.\n\n\n\\acknowledgements{%\nThis work was partially supported by the Grants-in-Aid for Scientific Research (No. 22740161, No. 20540273, and No. 22105510). This work was done in part under the Yukawa International Program for Quark-Hadron Sciences (YIPQS).\n}\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\n\nThe separation and control of the electron spins in the two-dimensional electron gas (2DEG) have been a subject of intense investigation in the field of spintronics \\cite{Wolf2001}. \nIn an external magnetic field the focusing \nof the cyclotron trajectories can be detected in a set-up with quantum point contact (QPC) source and drain terminals \\cite{Sharvin1964, Tsoi1974, Houten1989,Hanson2003,Aidala2007,Dedigama2006, Lo2017, Yan2017,Rokhinson2004, Chesi2011, Rokhinson2006}.\nIn this work we consider the spin-dependent trajectories that could be resolved in the magnetic focusing experiment\n\\cite{Sharvin1964, Tsoi1974, Houten1989,Hanson2003,Aidala2007,\nDedigama2006, Lo2017, Yan2017,Rokhinson2004, Chesi2011, Rokhinson2006}\nby scanning gate microscopy \\cite{Sellier2011}. 
\nThe focusing of electron trajectories for carriers injected across the QPC with spins \nseparated by spin-orbit interaction (SOI)\nwas considered theoretically \\cite{Usaj2004, Zulicke2007, Reynoso2007, Schliemann2008, Reynoso2008b, Kormanyos2010, Bladwell2015} and \nstudied experimentally \\cite{Rokhinson2004, Dedigama2006, Chesi2011, Lo2017, Yan2017, Yan2018}.\nThe spin separation by the strong spin-orbit interaction is achieved by splitting the magnetic focusing peaks with the orthogonal spin polarization for electrons that pass across the quantum point contacts.\nThe spin-orbit coupling alone in the absence of the external magnetic field has also been proposed\nfor the spin-separation in InGaAs QPCs \\cite{Kohda2012} and \nin U- \\cite{Zeng2012} or Y-shaped \\cite{Gupta2014} junctions of topological insulators.\n However, for strong spin-orbit coupling the electron spin precesses in the \neffective momentum-dependent spin-orbit magnetic field \\cite{Meier2007,Reynoso2008} that is oriented within the plane of confinement of the carrier gas.\nIn this work we indicate a possibility of imaging the spin-resolved electron trajectories\nfor which the electron spin is fixed and the spin-precession in the spin-orbit field is frozen\nby strong Zeeman effect due to an in-plane magnetic field. \n For that purpose instead of the spin-orbit coupling \\cite{Usaj2004, Schliemann2008, Kormanyos2010,Rokhinson2004, Dedigama2006, Chesi2011, Lo2017, Yan2017} we use an in-plane magnetic field \\cite{Watson2003,Li2012,Yan2018a} component\nthat introduces the spin-dependence of the cyclotron trajectories by the Zeeman splitting. \nWe demonstrate that for the indium antimonide -- a large Land\\'e factor material --the spin-dependent electron trajectories can be clearly resolved by the scanning gate microscopy technique. \n\nIn the focusing experiments with the 2DEG the electrons are injected and gathered by QPCs \\cite{Wharam1988, Wees1988, Wees1991}. 
The constrictions formed in 2DEG by electrostatic gates depleting the electron gas lead to the formation of transverse quantized modes. By applying a sufficiently high potential to the gates, only one or a few modes can adiabatically pass through the QPC. The quantized plateaus of conductance of such constrictions have been recently reported in InSb \\cite{Qu2016}.\n\nScanning gate microscopy (SGM) is an experimental technique in which the charged tip of an atomic force microscope is raster-scanned over a sample while measuring the conductance \\cite{Sellier2011}. The tip acts as a movable gate that can locally deplete the 2DEG, with a possible effect on the conductance. \nThe SGM technique has been used in 2DEGs confined in III-V nanostructures, for example, to image the branching of the current trajectories in systems with a QPC and the interference of electrons backscattered between the tip and the QPC \\cite{LeRoy2005, Jura2009, Paradiso2010, Brun2014}, the scarred wave functions in quantum billiards \\cite{Crook2003, Burke2010}, \nand electron cyclotron trajectories \\cite{Aidala2007, Crook2010}. It has also been used to image the cyclotron motion in two-dimensional materials such as graphene \\cite{Morikawa2015, Bhandari2016}. \n\n\n\n\n\\section{Model and theory}\n \nWe consider quantum transport at the Fermi level in 2DEG confined within an InSb quantum well.\nThe model system depicted in Fig.~\\ref{system} contains two QPCs on the left-hand side, and is open on the right-hand side. The two electrostatically defined quantum point contacts are separated by a distance $L$. The terminals are numbered as indicated in Fig.~\\ref{system}. The electrons entering from lead 1 are injected through the first (lower) QPC into the system in a narrow beam that is steered by the transverse magnetic field. Whenever the cyclotron diameter (or its integer multiple) fits the separation $L$, electrons can enter the second QPC, which serves as a collector. 
\nElectrons that do not get to the collector exit the system through\nlead 3, which acts as an open boundary. Hard wall boundary conditions are introduced on the perpendicular edges of the computational box. The size of the computational box (width $W=2400$ nm and length 1800 nm) \nis large enough to make the effects of the scattering by the hard wall boundaries negligible for the drain (lead 2) currents. \n\n \n\\begin{figure}[tb!]\n \\includegraphics[width=\\columnwidth]{scheme_nr.pdf}\n \\caption{The scheme of the focusing system.\nThe dark blue shaded area is the gate-induced potential defining the two QPCs, separated by the distance $L$. Spin up (spin down) is parallel (antiparallel) to the total magnetic field. Due to the in-plane magnetic field (and hence Zeeman splitting) the spin-up and spin-down electrons have different momenta and get spatially separated due to the difference in their cyclotron radii. The red and blue arrows correspond to spin-up and spin-down electron trajectories, respectively. The gray rectangles indicate the open boundary conditions. The terminals are numbered by integers from 1 to 3. Terminal 1 (2) is the source (drain) of the currents. Terminal 3 plays the role of an open boundary. \n } \\label{system}\n\\end{figure}\n\nFor the transport modeling, we\nassume that the vertical confinement in the InSb quantum well\nis strong enough to justify the two-dimensional approximation for\nthe electron motion. 
The 2D effective mass Hamiltonian reads\n\\begin{eqnarray}\nH=& \\left[\\frac{\\hbar^2}{2m_{eff}}\\mathbf{k}^2 + eV(\\mathbf{r}) \\right]\\mathbf{1} +\\frac{1}{2}\\mu_B \\boldsymbol{B}^T \\boldsymbol{g}^* \\boldsymbol{\\sigma} +H_{SO}, \n\\label{eq:dh}\n\\end{eqnarray}\nwhere $\\mathbf{k}=-i\\boldsymbol{\\nabla}-e\\mathbf{A}$, with $\\mathbf{A}$ being the vector potential, $\\mathbf{B}=(B_x,B_y,B_z)$, $\\boldsymbol{\\sigma}$ is the vector of Pauli matrices, $\\mu_B$ is the Bohr magneton, $\\mathbf{g}^*$ is the diagonal Land\\'e tensor, and $m_{eff}$ is the electron effective mass in InSb. \n\n\n \n\\begin{figure}[tb!]\n \\includegraphics[width=0.6\\columnwidth]{gates.pdf}\n \\caption{The scheme of the gates inducing the potential of the two QPCs. \nThe figure is not to scale. The values of the geometrical parameters are: $l=300$ nm, $r=500$ nm, $b_1=-600$ nm, $t_1=547$ nm, $t_2=652$ nm, $b_2=1747$ nm, $b_3=1852$ nm, and $t_3=3000$ nm.}.\n \\label{gates}\n\\end{figure}\n\n\nThe external potential as seen by the Fermi level electrons is a superposition of the QPC and the potential induced by the charged SGM tip \n\\begin{equation}\nV(\\mathbf{r}) = V_{QPC}(\\mathbf{r})+V_{tip}(\\mathbf{r}) , \n\\label{eq:Vext}\n\\end{equation}\nwhere we model the QPC using the analytical formulas developed in \\cite{Davies1995} with electrostatic potential of a finite rectangular gate given by \n\\begin{eqnarray}\n\\begin{aligned}\n V_{r}(\\mathbf{r};l,r,b,t)=\\\\\n V_g &\\left[g( x-l,y-b ) + g( x-l,t-y ) \\right. \\\\\n +& \\left. g( r-x,y-b ) +g( r-x,t-y) \\right],\n \\end{aligned}\n\\end{eqnarray}\nwhere $g(u,v) = \\tfrac{1}{2\\pi} \\arctan\\left( \\tfrac{uv}{u^2+v^2+d^2} \\right)$ with $d=50$ nm, and $V_g$ is the potential applied to the gates. 
The QPC potential is a superposition of potentials of three such gates\n\\begin{equation}\n V_{QPC} = V_{r}(\\mathbf{r};l,r,b_1,t_1) + V_{r}(\\mathbf{r};l,r,b_2,t_2) + V_{r}(\\mathbf{r};l,r,b_3,t_3).\n \\label{eq:qpc_gates}\n\\end{equation}\n The gates and their labeling used in Eq.~(\\ref{eq:qpc_gates}) are schematically shown in Fig.~\\ref{gates}. \nThe splitting of the gates is $d_{QPC}=105$ nm defining the QPC width. The QPCs are separated by $L=1200$ nm. \n\nFor modeling the tip potential we use a Gaussian profile\n\\begin{equation}\n V_{tip}(\\mathbf{r})= V_t \\exp \\left[ -\\frac{(x-x_{tip})^2+(y-y_{tip})^2}{d_{tip}^2} \\right],\n\\end{equation}\nwith $V_{t}$ being the maximum tip potential, $d_{tip}$ its width, and $x_{tip}$, $y_{tip}$ the coordinates of the tip. \n\n\nThe spin-orbit interactions in InSb are strong, so we include them in the calculations.\nThe two last terms in (\\ref{eq:dh}) account for the SOI with $H_{SO} =H_{R} + H_{D}$, where \n\\begin{equation}\nH_{R} = \\alpha( - k_x\\sigma_y + k_y\\sigma_x ) \n\\label{eq:HR}\n\\end{equation}\ndescribes the Rashba interaction, and \n\\begin{equation}\nH_{D} = \\beta( k_x\\sigma_x - k_y\\sigma_y ) \n\\label{eq:HD}\n\\end{equation}\nthe Dresselhaus interaction. \nFor the Hamiltonian (\\ref{eq:dh}) we use the parameters for InSb quantum well, $\\alpha=-0.051$ eV\\AA, $\\beta=0.032$ eV\\AA, $g^*_{zz}=-51$ \\cite{Gilbertson2008}, $g^*_{xx}=\\tfrac{1}{2}g^*_{zz}$ \\cite{Qu2016}, $m_{eff}=0.018m_0$ \\cite{Qu2016}.\n\n\nWe perform the transport calculations in the finite difference formalism. For evaluation of the transmission probability, we use the wave function matching (WFM) technique \\cite{Kolacha}. 
The transmission probability from the input lead to mode $m$ with spin $\\sigma$ in the output lead is\n\\begin{equation}\nT^m_\\sigma = \\sum_{ n,\\sigma'} |t^{mn}_{\\sigma\\sigma'}|^2,\n\\label{eq:transprob}\n\\end{equation}\nwhere $t^{mn}_{\\sigma\\sigma'}$ is the probability amplitude for the transmission from the mode $n$ with spin $\\sigma'$ in the input lead to mode $m$ with spin $\\sigma$ in the output lead. \nWe evaluate the conductance as $G={G_0}\\sum_{m, \\sigma} T^{m}_\\sigma$, with $G_0={e^2}\/{h}$.\n\n\nThe considered system presented in Fig.~\\ref{system} has the width $W=2400$ nm, and the narrow leads numbered 1 and 2 have equal width $W'=1146$ nm. The spacing between the centers of the QPCs is $L=1200$ nm. We take the gate potential $V_g=62$ meV, for which at $E_F=26$ meV\nin the absence of the external magnetic field the QPC conductance is close to $2\\tfrac{e^2}{h}$. For the SGM we use the tip parameters $V_t=260$ meV, and $d_{tip}=60$ nm.\n\n\\section{Results}\n\n\\subsection{No in-plane magnetic field}\n\n\\begin{figure}[tb!]\n \\includegraphics[width=\\columnwidth]{crossBx0.pdf}\n \\caption{The conductance from the left bottom to the left top lead $G$ as a function of magnetic field and the lower QPC conductance $G_{QPC}$. The inset shows semi-classical trajectories of the electrons for $B_z<0$, and at the three focusing peaks $B_{z}^{(i)}$ with $i$=1,2,3.\n } \\label{fig:onlyBz}\n\\end{figure}\n\nLet us first consider the transport in the system with the out-of-plane magnetic field only (i.e. $B_x=0$, $B_y=0$, $B_z \\ne 0$). 
\nIn Fig.~\\ref{fig:onlyBz} we present the conductance $G=G_{21}$ from lead 1 to lead 2 as a function of the applied transverse magnetic field, and the summed conductance from lead 1 to leads 2 and 3, which is essentially the conductance of the lower QPC $G_{QPC}=G_{21}+G_{31}$.\nFor $B_z<0$ no focusing peaks occur because the electrons are deflected away from the collector, propagate along the bottom edge of the system and finally exit through the right lead. For $B_z>0$, conductance peaks that are almost equidistant in magnetic field appear. The first three maxima occur at $B_z^{(1)}=0.124$ T, $B_z^{(2)}=0.26$ T, $B_z^{(3)}=0.408$ T. Neglecting the SOI terms and the Zeeman term in (\\ref{eq:dh}), one obtains $|k_F|=\\sqrt{2m_{eff}E_F}\/\\hbar=0.1108 \\tfrac{1}{\\mathrm{nm}}$. For the cyclotron diameter equal to \n\\begin{equation}\nD_c=\\frac{2\\hbar |k_F|}{|e| B_z}, \n\\label{eq:Dc}\n\\end{equation}\none obtains for the first three peaks $D_c^{(1)}=1176$ nm, $D_c^{(2)}=561$ nm, $D_c^{(3)}=358$ nm, respectively. This is close to the distance between the centers of the QPCs, $L=$1200 nm, its half, $L\/2=$600 nm, and one third, $L\/3=$400 nm, respectively. \nDespite the strong spin-orbit interaction in the InSb quantum well, no spin splitting of the focusing peaks occurs. Let us denote the Fermi wave number of the subband of spin $\\sigma$ by $k_F^{\\sigma}$. For the adopted values of the SO parameters, the difference in momenta for both spins is small [see Fig.~\\ref{fig:disp2dBx}(a)]. For example, for $k_{F,y}^{\\uparrow},k_{F,y}^{\\downarrow}=0$ and $E_F=26$ meV, the $x$ components extracted from the dispersion relation in Fig.~\\ref{fig:disp2dBx}(a) are $|k_{F,x}^\\uparrow|=0.11445$ nm$^{-1}$, and $|k_{F,x}^\\downarrow|=0.11142$ nm$^{-1}$, which for $D_c=1200$ nm yield transverse magnetic fields $B_z^{(1)}=0.125$ T and $B_z^{(1)}=0.122$ T, respectively. \nThis difference is clearly too small to produce a visible double peak. 
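The consistency of these numbers can be checked directly from the cyclotron-diameter formula $D_c = 2\hbar|k_F|/(|e|B_z)$; the wave numbers and fields below are the spin-resolved values quoted above, and the physical constants are the CODATA values:

```python
HBAR = 1.054571817e-34  # reduced Planck constant [J s]
E_CH = 1.602176634e-19  # elementary charge [C]

def cyclotron_diameter_nm(kf_per_nm, bz_tesla):
    """D_c = 2 hbar k_F / (|e| B_z); k_F given in nm^-1, result in nm."""
    return 2.0 * HBAR * (kf_per_nm * 1e9) / (E_CH * bz_tesla) * 1e9

# spin-resolved first focusing peaks quoted in the text
d_up = cyclotron_diameter_nm(0.11445, 0.125)    # spin-up branch
d_down = cyclotron_diameter_nm(0.11142, 0.122)  # spin-down branch
print(round(d_up), round(d_down))  # both close to L = 1200 nm
```

Both spin branches reproduce a diameter close to the QPC separation $L = 1200$ nm, confirming that the first focusing peak is not visibly spin-split without an in-plane field.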
\n\n\n\\subsection{Enhancement of the Zeeman splitting with in-plane magnetic field}\n\n\\begin{figure}[tb!]\n \\includegraphics[width=0.49\\columnwidth]{Ek0.pdf}\n \\includegraphics[width=0.49\\columnwidth]{Ek4.pdf}\n \\caption{Dispersion relation of the 2DEG with (a) $B_x=0$ and (b) $B_x=8$ T. The color map shows the dispersion relation of the spin-down band, and the contours show the isoenergetic lines for $E_F=26$ meV for spin-up (black line) and spin-down (red line) electrons.\n } \\label{fig:disp2dBx}\n\\end{figure}\n\nIn the next step we apply an additional in-plane magnetic field. This increases the Zeeman energy splitting and hence the momentum difference between the two spin subbands. Fig.~\\ref{fig:disp2dBx} shows the momenta for both spins for $B_x=0$ and 8 T. Without an in-plane magnetic field, the spin subbands are nearly degenerate. With $B_x$ of the order of a few tesla the difference in the momenta becomes significant. This induces a change of the cyclotron radii of the electrons with opposite spins. \n\nThe spins are oriented along the total magnetic field, $\\mathbf{B}+\\mathbf{B}_{SO}$, where $\\mathbf{B}_{SO}$ is the effective SO field. For $B_x$ of the order of a few tesla, the out-of-plane magnetic field component and the SO effective field are small compared to the in-plane component. The spin is then oriented nearly along the $x$ or $-x$ direction. We refer to these states as spin-up and spin-down.\n\n\\begin{figure}[tb!]\n \\includegraphics[width=\\columnwidth]{FocusBx.png}\n \\caption{Transmission as a function of $B_x$ and $B_z$. The solid (dashed) lines are the analytically calculated positions of the transmission peak maxima for spin-up (spin-down) electrons.\n } \\label{fig:transpBx}\n\\end{figure}\n\nFig.~\\ref{fig:transpBx} shows the conductance $G$ from lead 1 to lead 2 as a function of the in-plane (here $B_x$) and the transverse magnetic fields. 
For sufficiently high in-plane magnetic field the peaks split, with the splitting growing with increasing $B_x$. The lines plotted along the $n$-th pair of split peaks are calculated from the condition $B_{z,\\sigma}^{(n)}\\left(B_x\\right)=\\tfrac{2\\hbar |k_F^\\sigma|}{|e| D_c^{(n)}}$, with $|k_F^\\sigma|$ obtained from \n\\begin{equation}\n E_F = \\tfrac{(\\hbar k_F^\\sigma)^2}{ 2m_{eff} } \\pm \\tfrac{1}{2}g^*_{xx}\\mu_B B_x,\n\\end{equation}\nwhere $\\sigma=\\uparrow,\\downarrow$, the $\\pm$ sign corresponds to spin down and up, respectively, and $D_c^{(n)}$ are extracted from Fig.~\\ref{fig:onlyBz}, using Eq.~(\\ref{eq:Dc}). Although the analytical lines are obtained neglecting the SOI and the Zeeman energy contribution from the transverse magnetic field, there is a good agreement between the obtained transport results and this simplified model. \n\n\\begin{figure}[tb!]\n \\includegraphics[width=0.99\\columnwidth]{crossBx4.pdf}\n \\includegraphics[width=0.99\\columnwidth]{crossBx4b.pdf}\n \\caption{(a) The cross section of the conductance and the lower QPC conductance for $B_x=8$ T. (b) The spin resolved conductance.\n The first peak is split into two smaller peaks with $B_{z,\\downarrow}^{(1)}=0.11$ T for spin down electrons and $B_{z,\\uparrow}^{(1)}=0.137$ T for spin up electrons. \n The inset in (a) shows semi-classical trajectories of the spin-up (red semi-circles) and spin-down (blue semi-circles) electron at the first focusing peak.\n } \\label{fig:crossBx4}\n\\end{figure}\n\n\\begin{figure}[tb!]\n \\includegraphics[width=0.334\\columnwidth]{dysp0.pdf}\n \\includegraphics[width=0.283\\columnwidth]{dysp4.pdf}\n \\includegraphics[width=0.358\\columnwidth]{dysp6.pdf}\n \\caption{Dispersion relation of an infinite channel with the lateral potential taken\nat the QPC constriction with $V_r=62$ meV, and (a) $B_x=0$ T (b) $B_x=8$ T and (c) 12 T. The color map shows the mean $x$ spin component of the subband. 
The spin-down subband shifts up in energy upon increasing $B_x$ and finally is raised above the Fermi energy. The opposite occurs for the spin-up electrons -- for increasing $B_x$ more and more subbands are available at the Fermi level.\n } \\label{fig:dispQpcBx}\n\\end{figure}\n\n The cross section of the summed conductance and the spin-resolved conductance for $B_x=8$ T is shown in Fig.~\\ref{fig:crossBx4}. \nIn the pairs of focusing peaks, the spin-down (spin-up) conductance dominates\nfor the peak at lower (higher) magnetic field [see Fig.~\\ref{fig:crossBx4}(b)]. \nInterestingly, in each pair of the peaks in Fig.~\\ref{fig:transpBx}, the lower one has smaller transmission than the upper one, and at $B_x\\approx 10$ T vanishes, while the transmission of the upper one slowly increases. The reason for this behavior is the strong Zeeman splitting due to the in-plane magnetic field and the spin-dependent conductance of the QPCs \\cite{Potok2002,Hanson2003}. \nFig.~\\ref{fig:dispQpcBx} shows the dispersion relation of an infinite channel with the lateral potential taken\nat the QPC constriction with applied $B_x=0$, 8 T and 12 T. For $B_x=8$ T at the Fermi level for spin up 3 transverse subbands are available, while for spin down only one. For higher $B_x=12$ T the spin-down subband is raised above the Fermi level, and only spin-up electrons can pass through the QPC. On the other hand, for growing $B_x$, the number of spin-up subbands increases. Thus in the focusing spectrum, the upper peak -- the spin-up peak -- becomes more pronounced, while the lower one -- the spin-down peak -- has lower value of transmission and finally disappears.\n\n\n\\begin{figure}[tb!]\n \\includegraphics[width=0.71\\columnwidth]{Dens1InSb.pdf}\n \\includegraphics[width=0.273\\columnwidth]{sx_upInSb.pdf}\n \\caption{The density and average spins maps for the low-field peak in Fig.~\\ref{fig:crossBx4}. 
In (b) the average spin $x$ projection for a spin-down mode is shown, and in (c) for a spin-up mode. The spin in the $y$ and $z$ directions is negligibly small (not shown), and the average spin in the $x$ direction is preserved (cf.~Fig.~13 for the spin precession effects in the case where SOI dominates over the Zeeman interaction).\n } \\label{fig:densInSb}\n\\end{figure}\n\n\n\nConcluding this section, we find that the in-plane magnetic field allows for a controllable separation of the electrons with opposite spins.\nIt is worth noting that in systems with strong SOI and no in-plane magnetic field, only the odd focusing peaks get split \\cite{Usaj2004, Reynoso2008, Lo2017}, whereas with an in-plane magnetic field all of the peaks are split. The former behavior is caused by the spin precession due to SOI in those systems. In our case the spin is determined by the effective magnetic field, which is almost parallel to the $x$ direction. Thus the spin component in the $x$ direction dominates and the fluctuations due to SOI are negligible. This is shown for a representative case, the density and average spins at the low-field focusing peak at $B_{z,\\downarrow}^{(1)}=0.11$ T, in Fig.~\\ref{fig:densInSb}. The electron spins are nearly unchanged along the entire path. The $\\left\\langle s_y \\right\\rangle$ and $\\left\\langle s_z \\right\\rangle$ components are negligibly small compared to $\\left\\langle s_x \\right\\rangle$. \n\n\n\\subsection{Scanning gate microscopy of the trajectories}\n\n\\begin{figure}[tb!]\n\\begin{center}\n \\includegraphics[width=0.9\\columnwidth]{Pik1.pdf}\n \\includegraphics[width=0.9\\columnwidth]{Pik1up.pdf}\n \\includegraphics[width=0.9\\columnwidth]{Pik1down.pdf}\n\\end{center}\n \\caption{The conductance maps for the spin-down (left column) and spin-up (right column) focusing peak in Fig.~\\ref{fig:crossBx4} at $B_{z,\\downarrow}^{(1)}=0.11$ T and $B_{z,\\uparrow}^{(1)}=0.137$ T, respectively. 
(a,b) the conductance summed over spins, (c,d) the spin-up conductance, (e,f) the spin-down conductance. The dashed semicircles show the semi-classical trajectory of an electron incident from the QPC with $k_x\\ne 0$ only. The tiny arrows in the upper right corner show which contribution of $\\Delta G$ is shown in the plot.\n } \\label{fig:sgm1}\n\\end{figure}\nWe simulated the SGM conductance maps for the magnetic fields that correspond to the peaks of magnetic focusing in the absence of the tip, using $B_x=8$ T. In the cross section in Fig.~\\ref{fig:crossBx4}(a) the dots show where the SGM scans were taken. Fig.~\\ref{fig:sgm1} presents the maps of \n$\\Delta G = G\\left(\\mathbf{r}_{tip}\\right) - G\\left(B_{z,\\sigma}^{(1)}\\right)$, and the spin-resolved conductances $\\Delta G_{\\sigma'} = G_{\\sigma'}\\left(\\mathbf{r}_{tip}\\right) - G_{\\sigma'}\\left(B_{z,\\sigma}^{(1)}\\right)$ with $\\sigma,\\sigma'=\\uparrow,\\downarrow$. The conductance maps exhibit a semicircular pattern with a pronounced minimum along the semi-classical orbit of a carrier incident in the $x$ direction (indicated in Fig.~\\ref{fig:sgm1} with dashed semi-circles). For the spin-down focusing peak at $B_{z,\\downarrow}^{(1)}=0.11$ T, the scan [Fig.~\\ref{fig:sgm1}(a)] is slightly different than for the spin-up peak at $B_{z,\\uparrow}^{(1)}=0.137$ T [Fig.~\\ref{fig:sgm1}(b)]. In the former there is a slight increase of conductance to the right of the dashed semi-circle [see the red blob in Fig.~\\ref{fig:sgm1}(a)]. Fig.~\\ref{fig:sgm1}(c,d) show the spin-up conductance, and Fig.~\\ref{fig:sgm1}(e,f) the spin-down conductance, as a function of the tip position. One can see that in the spin-down peak (for $B_{z,\\downarrow}^{(1)}=0.11$ T) the $\\Delta G_{\\downarrow}$ is everywhere negative or zero [Fig.~\\ref{fig:sgm1}(e)], and $\\Delta G_{\\uparrow}$ -- positive or zero almost everywhere (except within the QPC) [Fig.~\\ref{fig:sgm1}(c)]. 
Examples of electron densities with the tip placed at two different points are shown in Fig.~\\ref{fig:densSGM}. In Fig.~\\ref{fig:densSGM}(a), the tip, when placed along the electron trajectory, deflects the spin-down beam and prevents it from entering the collector. \nOn the other hand, in Fig.~\\ref{fig:densSGM}(b), the tip can deflect the beam of spin-up electrons into the collector.\n\n\n\\begin{figure}[tb!]\n \\includegraphics[width=0.49\\columnwidth]{Dens1min.pdf}\n \\includegraphics[width=0.49\\columnwidth]{Dens1max.pdf}\n \\caption{The density maps for the tip placed in the points marked with diamonds in Fig.~\\ref{fig:sgm1}(c,e). (a) The tip blocking the beam, with the tip at the point marked with a green diamond in Fig.~\\ref{fig:sgm1}(c). (b) The tip deflecting the spin-up beam into the collector, with the tip at the point marked with a green diamond in Fig.~\\ref{fig:sgm1}(e).\n } \\label{fig:densSGM}\n\\end{figure}\n\nThe situation is inverted in the peak at $B_z=0.137$ T. In the $\\Delta G_{\\uparrow}$ map [Fig.~\\ref{fig:sgm1}(d)], the values are less than or equal to zero, and in the $\\Delta G_{\\downarrow}$ map [Fig.~\\ref{fig:sgm1}(f)], greater than or equal to zero. In this case, the spin-up beam is blocked by the tip, thus $\\Delta G_{\\uparrow}$ drops along the semi-circle marked in Fig.~\\ref{fig:sgm1}(d). 
On the other hand, the spin-down electrons have a smaller cyclotron diameter [than the QPC spacing $L$], but they can be scattered by the tip to the collector, which leads to an increase of $\\Delta G$ at some points to the left of (or along) the dashed semi-circle.\n\n\n\n\\subsection{Magnetic focusing for heavy holes in GaAs\/AlGaAs heterostructure }\n\nWe consider an experiment conducted for two-dimensional hole gas (2DHG) in GaAs\/AlGaAs, in Ref.~\\onlinecite{Rokhinson2004}, where the splitting of the first focusing peak was visible without an in-plane magnetic field, and was solely due to the spin-orbit interaction. For this problem we assume the distance between the two QPCs $L=800$ nm, the computational box of width $W=1608$ nm and length $3000$ nm, the QPC defined in the same manner as in Eq.~\\ref{eq:qpc_gates} with the geometrical parameters: $l=500$ nm, $r=1100$ nm, $b_1=-600$ nm, $t_1=336$ nm, $t_2=468$ nm, $b_2=1140$ nm, $b_3=1272$ nm, $t_3=2208$ nm, and $d$=20 nm. We employ the effective mass of heavy holes $m_{eff}=0.17 m_e$ \\cite{Plaut1988}, Land\\'e factor $g_{zz}^*=-0.6$ \\cite{Arora2013},\nthe Dresselhaus SO parameter $\\beta=0.0477 $ eV{\\AA } \\cite{Rokhinson2004}, and zero Rashba SO. \n\nWe tune the lower QPC to $G_{QPC}=2e^2\/h$, with $V_g=18$ meV, and $E_F=3.2$ meV. Figure \\ref{fig:crossHole} shows the focusing conductance of the system. The focusing peaks are resolved, with the first peak split by 35 mT, remarkably close to the result in Ref.~\\onlinecite{Rokhinson2004}, with the measured splitting of 36 mT. The splitting is due to the Dresselhaus SOI, which leads to the spin-polarization in the direction dependent on the hole momentum, and the difference in the Fermi wavenumbers $k_F$ of the holes with opposite spins. The band structure in the injector QPC is shown in Fig. ~\\ref{fig:dispQpc_hole}. 
The hole spin in the injector QPC is in the $x$ direction.\n\n\\begin{figure}[tb!]\n \\includegraphics[width=0.99\\columnwidth]{crossBx0_holes_spins.pdf}\n \\caption{(a) The summed and spin-resolved conductance for a hole system in GaAs\/AlGaAs. \n The first peak is split into two smaller peaks with $B_{z,\\downarrow}^{(1)}=0.187$ T for spin-down holes and \n $B_{z,\\uparrow}^{(1)}=0.222$ T for spin-up holes. \n The peak splitting is 35 mT. \n } \\label{fig:crossHole}\n\\end{figure}\n\n\\begin{figure}[tb!]\n \\includegraphics[width=0.5\\columnwidth]{dysp0_hole.pdf}\n \\caption{Dispersion relation of an infinite channel with the lateral potential taken\nat the QPC constriction with $V_r=18$ meV, and $B_x=0$ T. The color map shows the mean $x$ spin component of the subbands. \n } \\label{fig:dispQpc_hole}\n\\end{figure}\n\nThe difference in focusing magnetic field due to SOI can be evaluated from \n\\begin{equation}\n B_{z,\\sigma}^{(1)} = \\frac{2\\hbar k_F^{\\sigma} }{ e D_c^{(1)} } = \\frac{ 2\\left( \\sqrt{2m_{eff} E_F} \\mp m_{eff}\\beta\/\\hbar \\right) }{e D_c^{(1)} },\n\\label{eq:dressBz}\n\\end{equation}\nwhere the upper (lower) sign corresponds to spin-down (spin-up) holes. The density and the spin evolution in the peaks highlighted in Fig.~\\ref{fig:crossHole} by tiny triangles are shown in Fig.~\\ref{fig:densHole}. In the densities [Fig.~\\ref{fig:densHole}(a,d)] the contributions of both spins with slightly different cyclotron radii are visible. In the averaged spin $x$ component maps for the mode injected with spin up [Fig.~\\ref{fig:densHole}(b,e)] the precession is visible, but a little blurred due to the scattering from the gates' potential. 
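The quoted 35 mT splitting can be checked with a back-of-the-envelope estimate: for a linear-in-$k$ Dresselhaus term the spin splitting of the Fermi momentum is $\\Delta(\\hbar k_F) \\approx 2 m_{eff}\\beta\/\\hbar$, and with $B_z = 2\\hbar k_F\/(|e| D_c)$ this gives $\\Delta B_z^{(1)} \\approx 4 m_{eff}\\beta\/(\\hbar |e| D_c^{(1)})$. The short Python sketch below (illustrative only, not part of the simulation code) reproduces that value:

```python
# Estimate of the Dresselhaus-induced splitting of the first focusing peak:
# Delta(B_z) ~ 4*m_eff*beta/(hbar*e*D_c), with D_c equal to the QPC spacing L.
HBAR = 1.054571817e-34      # J s
E_CHARGE = 1.602176634e-19  # C
M_E = 9.1093837015e-31      # kg

m_eff = 0.17 * M_E                # heavy-hole effective mass
beta = 0.0477 * E_CHARGE * 1e-10  # 0.0477 eV*Angstrom converted to J*m
d_c = 800e-9                      # first-peak cyclotron diameter = L

delta_b = 4.0 * m_eff * beta / (HBAR * E_CHARGE * d_c)
print(round(delta_b * 1e3, 1), "mT")  # -> 35.0 mT
```

With the parameters listed above, the estimate matches both the simulated splitting and the measured 36 mT to within a few per cent.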
For the mode injected with spin down [Fig.~\\ref{fig:densHole}(c,f)] the flip of the spin direction in the detector is clearly visible.\n\n\\begin{figure}[tb!]\n \\includegraphics[width=0.71\\columnwidth]{Dens1a.pdf}\n \\includegraphics[width=0.275\\columnwidth]{sx_a_up.pdf}\n \\includegraphics[width=0.71\\columnwidth]{Dens1b.pdf}\n \\includegraphics[width=0.275\\columnwidth]{sx_b_up.pdf}\n \\caption{The density and average spin $x$ component maps for the marked focusing peaks in Fig.~\\ref{fig:crossHole}, for the low-field peak marked with a red triangle (upper row) and the high-field peak marked with a blue triangle (lower row). (a) and (d) the densities, (b) and (e) the average spin for the injected spin-up mode, and (c) and (f) for the injected spin-down mode. The flip of the spin direction in the detector QPC is visible. \n } \\label{fig:densHole}\n\\end{figure}\n\n\n\\section{Summary and Conclusions}\nWe have studied the spatial spin-splitting of the electron trajectories\n in the transverse focusing system.\nWe demonstrated that an in-plane magnetic field of a few tesla in \nInSb induces a Zeeman splitting which is large enough to \nseparate the conductance focusing peaks of the spin-down and spin-up\nelectrons. The orientation of the spin is translated to the \nposition of the conductance peak on the magnetic field scale. \nThe focused trajectories for both spin orientations can be resolved by\nthe scanning gate microscopy conductance maps. Moreover,\nthe SGM maps for opposite spin peaks contain qualitative differences \ndue to the spin dependence of the cyclotron radii. \nThe present findings \npave the way for studies of the spin-dependent trajectories in \ntwo-dimensional electron gas systems based on materials with a high Land\\'e factor. \n\n\\section*{Acknowledgments}\nThis work was supported by the National Science Centre (NCN) Grant No. 
DEC-2015\/17\/N\/ST3\/02266,\nand by AGH UST budget with the subsidy of the Ministry of\nScience and Higher Education, Poland with Grant No. 15.11.220.718\/6 for young researchers \nand Statutory Task No. 11.11.220.01\/2.\nThe calculations were performed on PL-Grid and ACK CYFRONET AGH Infrastructure. \n\n\n\n\n\n\\bibliographystyle{apsrev4-1}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nScheduling is one of the fundamental research subjects, which is\ncentral to virtually all scientific domains that require any kind of\nresource sharing. Therefore, a large body of literature exists that\nintroduces basic scheduling algorithms for various scheduling\nproblems~\\cite{Blazewicz2019,brucker2004,Drozdowski09,handbook2004,pinedo2016}.\nMost theoretical works in scheduling research use the three-field\nnotation $\\alpha|\\beta|\\gamma$ of \\citet{graham1979optimization} for\nclassifying scheduling problems. By using this notation, each\nscheduling problem can be described by the machine environment\n$\\alpha$, the job characteristics $\\beta$, and the optimality\ncriterion $\\gamma$. For example, \\Igraham{P}{}{\\ensuremath{C_{\\text{max}}}\\xspace} defines the\nproblem of scheduling jobs on identical parallel machines, where the\nmaximum completion time (\\ensuremath{C_{\\text{max}}}\\xspace) should be minimized, while no further\njob characteristics are given. One example of such job characteristics\ncould be the \\emph{moldable} job model (denoted as $any$), i.e.\\xspace, in the\nproblem\n\\Igraham{P}{any}{\\ensuremath{C_{\\text{max}}}\\xspace}~\\cite{Drozdowski09,DBLP:journals\/concurrency\/Hunold15}. In\nthe moldable model, each job can not only be executed on \\num{1}\nmachine, but it may be allotted to several machines (between \\num{1}\nand $m$ machines). The number of machines is selected by the\nscheduler, but this number of machines will not change until a job has\nbeen completed. 
Another example of job characteristics are job\nprocessing times that are variable and depend on environmental factors\nsuch as the position $r$ of a job in a schedule. For example, in the\n\\Igraham{P}{in-tree, $p_{j,r} = \\varphi(r)$}{\\ensuremath{C_{\\text{max}}}\\xspace} problem, the\nprocessing times of jobs with in-tree precedence constraints are\ndescribed by the $\\varphi$ function\n\\cite{Przybylski2017,Przybylski2018a}.\n\n\nSince scheduling problems are so fundamental to many scientific\ndisciplines, thousands of algorithms exist for a seemingly endless\nlist of problem variations. Among them, a significant number of\nscheduling algorithms can be described in the three-field notation.\nSeveral surveys on specific scheduling problems, e.g.\\xspace,\n\\Igraham{P}{}{\\ensuremath{C_{\\text{max}}}\\xspace}, have been conducted that compare the scheduling\nperformance of different algorithms via simulations. Although these\nstudies are very informative for the readers, they are often hard to\nreproduce, as many of the building blocks are imprecisely explained\nand because the source code is often not provided or has become\ninaccessible over the years. For that reason, many algorithms cannot\nbe compared fairly or in a scientifically sound manner, as too many\ndetails are missing.\n\nTo overcome this problem, we propose \\texttt{Scheduling.jl}\\xspace, which provides a\ngeneric and open scheduling platform, on top of which a large number\nof scheduling algorithms can be implemented.\n\nIn the remainder of the article, we introduce the core functionalities\nof the \\texttt{Scheduling.jl}\\xspace package and show an example of how to use them.\n\n\\section{Overview of \\texttt{Scheduling.jl}\\xspace}\n\n\\texttt{Scheduling.jl}\\xspace provides the main building blocks for implementing scheduling\nalgorithms in their most generic form, which are \\texttt{Job}\\xspace,\n\\texttt{Machine}\\xspace, \\texttt{JobAssignment}\\xspace, and \\texttt{Schedule}\\xspace. 
A classical\n\\texttt{Job}\\xspace $J_j$ is defined by its processing time $p_j$ but can also be\ncharacterized by a weight $w_j$, a release date $r_j$, a due date\n$d_j$, or a deadline $\\bar{d_j}$. A \\texttt{Machine}\\xspace $M_i$ is mainly\ndefined by its speed. Of course, the sets of parameters used can be easily extended. The task of a scheduling algorithm is to find an\nassignment of jobs to machines, such that a given criterion is\noptimized. An assignment of jobs to machines defines the starting and\nthe completion time of a job $J_j$ on a machine $M_i$. The final\n\\texttt{Schedule}\\xspace is composed of a vector of jobs, a vector of machines,\nand a vector of job to machine assignments. It is worth noticing that \\texttt{Scheduling.jl}\\xspace is designed to operate on exact values (rational numbers) rather than inexact ones (floating point numbers).\n\nOnce a schedule has been obtained by executing an algorithm, the\npackage \\texttt{Scheduling.jl}\\xspace provides different optimization criteria that can be\ncomputed for a schedule, e.g., the makespan \\ensuremath{C_{\\text{max}}}\\xspace, the average\ncompletion time $\\sum_j C_j$, or the number of tardy jobs\n$\\sum_j U_j$. Additionally, the package is shipped with\nimplementations of various scheduling algorithms, in particular, for\n\\Igraham{P}{}{\\ensuremath{C_{\\text{max}}}\\xspace}.\n\nIn order to obtain \\texttt{Scheduling.jl}\\xspace, one needs to install the package from\n\\url{https:\/\/github.com\/bprzybylski\/Scheduling.jl}. The stable\nversion can be installed by calling \\verb|Pkg.add(\"Scheduling\")| and\nthe development version by executing \\verb|Pkg.develop(\"Scheduling\")|.\n\nListing\\xspace~\\ref{lst:example} presents an example of how to leverage the\nbasic functionality of \\texttt{Scheduling.jl}\\xspace. Here, we create a set of jobs $J$ and\na set of machines $M$. Then, we can apply the LPT algorithm to obtain\na schedule. 
On this schedule, we can compute various metrics like\n\\ensuremath{C_{\\text{max}}}\\xspace or $\\sum_j C_j$.\n\n\\begin{lstlisting}[float=t,caption={Example of using the basic functionality of \\texttt{Scheduling.jl}\\xspace; applying the LPT algorithm to a small scheduling problem and reporting the \\ensuremath{C_{\\text{max}}}\\xspace and the $\\sum_j C_j$ metrics.},label={lst:example}]\nusing Scheduling\nusing Scheduling.Algorithms\nusing Scheduling.Objectives\n\n# Generate a set of jobs with processing times\nJ = Jobs([27, 19, 19, 4, 48, 38, 29])\n# Generate a set of 4 identical machines\nM = Machines(4)\n# Generate a schedule using LPT list rule\nLPT = Algorithms.lpt(J, M)\nprintln(\"Cmax = $(Int(cmax(LPT)))\")\nprintln(\"sum(C_j) = $(Int(csum(LPT)))\")\n\\end{lstlisting}\n\n\\texttt{Scheduling.jl}\\xspace also provides means to visualize the resulting schedules. For\nexample, scientists can choose to produce an image of a schedule,\nwhere it is also possible to animate the schedule creation. If\ndesired, the schedule can also be plotted as an TikZ image, which can\ndirectly be inserted into publications.\n\n\n\\section{Using \\texttt{Scheduling.jl}\\xspace: The Case of \\texttt{$P||\\ensuremath{C_{\\text{max}}}\\xspace$}}\n\nNow, we turn our attention to the NP-hard problem\n\\Igraham{P}{}{\\ensuremath{C_{\\text{max}}}\\xspace}, for which \\citet{DBLP:conf\/afips\/Graham72}\ndevised two fundamental approximation algorithms, namely the LIST and\nthe LPT (Largest Processing Time) algorithm.\n\\citet{DBLP:conf\/afips\/Graham72} showed that for any instance of\n\\Igraham{P}{}{\\ensuremath{C_{\\text{max}}}\\xspace}, LIST provides a $2-1\/m$ approximation, while\nLPT improves this bound to\n$4\/3-1\/(3m)$. 
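As an aside for readers who want to experiment outside Julia, the LPT list rule itself takes only a few lines; the following Python sketch (illustrative, independent of \\texttt{Scheduling.jl}\\xspace) implements it with a heap of machine loads:

```python
import heapq

def lpt(processing_times, m):
    """Largest Processing Time list rule: sort jobs by decreasing p_j and
    always assign the next job to the currently least-loaded machine.
    Returns the makespan C_max."""
    loads = [0] * m  # min-heap of machine loads
    heapq.heapify(loads)
    for p in sorted(processing_times, reverse=True):
        heapq.heappush(loads, heapq.heappop(loads) + p)
    return max(loads)

# The instance from Listing 1: LPT happens to be optimal here.
print(lpt([27, 19, 19, 4, 48, 38, 29], 4))  # -> 48
```

For the instance of Listing~\\ref{lst:example} this yields $\\ensuremath{C_{\\text{max}}}\\xspace = 48$, which matches the lower bound $\\max(48, \\lceil 184\/4 \\rceil) = 48$, so LPT is optimal for that particular instance.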
\\citet{DBLP:journals\/jacm\/HochbaumS87} devised a PTAS\nfor this problem, by developing a dual approximation algorithm to\nsolve \\Igraham{P}{}{\\ensuremath{C_{\\text{max}}}\\xspace}, which internally relies on solving a\nbin packing problem.\n\nAlthough our list is far from exhaustive, we discuss several\nheuristics that have been proposed to solve \\Igraham{P}{}{\\ensuremath{C_{\\text{max}}}\\xspace}.\n\\citet{DBLP:journals\/cor\/FrancaGLM94} developed the 3-PHASE algorithm\nfor which the authors stated that it ``outperforms alternative\nheuristics'' on their respective test instances. Similarly,\n\\citet{DellAmico08} presented heuristics that are combined to compute\nexact solutions for various instances of \\Igraham{P}{}{\\ensuremath{C_{\\text{max}}}\\xspace}.\nLast, \\citet{Ghalami19} presented a parallel implementation of the\nalgorithm of \\citet{DBLP:journals\/jacm\/HochbaumS87}. In each of these\nworks, the authors implemented their own interpretation of existing\nalgorithms. They also generated their own problem instances and\nreported results for a subset of the instances. Neither the\nimplementations of the algorithms nor the instances (or generators)\nare available anymore. Such a lack of code and meta-information is a\ncommon problem in many scientific fields when looking at the\nreproducibility of results. Independent researchers, who would like to\ncontinue studying heuristics for \\Igraham{P}{}{\\ensuremath{C_{\\text{max}}}\\xspace}, will have to\nstart from scratch and create implementations and instances\nthemselves.\n\nWe have developed \\texttt{Scheduling.jl}\\xspace to improve the reproducibility in the\nscheduling domain. For the problem of \\Igraham{P}{}{\\ensuremath{C_{\\text{max}}}\\xspace},\n\\texttt{Scheduling.jl}\\xspace contains several algorithms that can be used to solve given\ninstances. 
Besides heuristics with guarantees (i.e.\\xspace, LIST and LPT),\n\\texttt{Scheduling.jl}\\xspace also contains an implementation of the algorithm of\n\\citet{DBLP:journals\/jacm\/HochbaumS87} as well as an exact algorithm\nthat was presented by \\citet{Drozdowski09}.\n\nIf independent researchers now set out to compare novel heuristics to\nalready established methods, they could simply use the source code\nprovided. Listing\\xspace~\\ref{lst:schedules} exemplifies how different\nalgorithms for one specific instance of a problem can be compared. In\nthis particular case, we created three different TiKZ files containing\nthe Gantt charts of the schedules. Figure\\xspace~\\ref{fig:pcmax_schedules}\npresents these Gantt charts produced by different algorithms when\nsolving an instance of \\Igraham{P}{}{\\ensuremath{C_{\\text{max}}}\\xspace}.\n\n\\begin{lstlisting}[float=t,caption={Comparing different algorithms for an instance of P$||\\ensuremath{C_{\\text{max}}}\\xspace$.},label={lst:schedules}]\nJ = Jobs([3, 3, 4, 5, 8, 5, 5, 7, 8, 9, 13, 8, 11, 7])\nM = Machines(4)\n\nS_LPT = Algorithms.lpt(J, M)\nS_OPT = Algorithms.P__Cmax_IP(J, M)\nS_HS = Algorithms.P__Cmax_HS(J, M; eps=1\/\/10)\nScheduling.TeX(S_LPT, \"schedule_lpt.tex\")\nScheduling.TeX(S_OPT, \"schedule_opt.tex\")\nScheduling.TeX(S_HS, \"schedule_hs.tex\")\n\\end{lstlisting}\n\n\\begin{figure*}[t]\n \\centering\n \\begin{subfigure}[t]{.7\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figs\/fig1}\n \\subcaption{\\label{fig:alg_hs}Hochbaum\\,\\&\\,Shmoys algorithm}\n \\end{subfigure}\\\\[1ex]\n \\begin{subfigure}[t]{.7\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figs\/fig2}\n \\subcaption{\\label{fig:alg_lpt}LPT algorithm}\n \\end{subfigure}\\\\[1ex]\n \\begin{subfigure}[t]{.7\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figs\/fig3}\n \\subcaption{\\label{fig:alg_opt}OPT (exact)}\n \\end{subfigure}\n \\caption{\\label{fig:pcmax_schedules}Comparing 
schedules produced by\n different algorithms for P$||\\ensuremath{C_{\\text{max}}}\\xspace$, where Gantt charts of\n schedules have been created with \\texttt{Scheduling.jl}\\xspace.}\n\\end{figure*}\n\n\n\\section{Conclusions and Future Work}\n\nWe have introduced the Julia package \\texttt{Scheduling.jl}\\xspace, which is an effort to\nincrease the reproducibility in the scheduling community. By providing\nbasic building blocks for developing scheduling methods as well as\nseveral implementations of well-known scheduling algorithms, \\texttt{Scheduling.jl}\\xspace\ncan serve as a foundation for developing a large variation of\nscheduling algorithms. The package provides easy-to-use plotting\nfunctions to easily obtain Gantt charts of computed schedules.\n\nThe package is far from complete and it should serve as a starting\npoint for future work. So far, we have focused on providing a general\ndevelopment platform for implementing various algorithms. We made sure\nthat design choice are applicable by implementing multiple algorithms\nfor the classical problem of \\Igraham{P}{}{\\ensuremath{C_{\\text{max}}}\\xspace} ourselves. Now,\nwe hope that the community will contribute new algorithms to this\npackage. We will also continue to integrating more algorithms into\n\\texttt{Scheduling.jl}\\xspace, starting with algorithms for which Julia code already exists,\ne.g., the algorithm of \\citet{DBLP:journals\/tpds\/BleuseHKMMT17} for\n\\Igraham{(P$m$,P$k$)}{mold}{$C_{\\max}$}.\n\n\n\n\\bibliographystyle{plainnat}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Acknowledgements}\n\\noindent MB acknowledges support by the Natural Environment Research Council (NERC) under training grant no. NE\/L002515\/1. PD acknowledges support by the Engineering and Physical Sciences Research Council (EPSRC) under grants no. EP\/M006883\/1 and EP\/N014529\/1, by the Royal Society and the Wolfson Foundation through a Royal Society Wolfson Research Merit Award no. 
WM130048 and by the National Science Foundation (NSF) under grant no. RNMS11-07444 (KI-Net). PD is on leave from CNRS, Institut de Math\\'ematiques de Toulouse, France. MTW acknowledges partial support from the Austrian Academy of\nSciences under the New Frontier's grant NST-001 and the EPSRC under the First Grant EP\/P01240X\/1. \n\n\\section*{AMS subject classification}\n35Q84, 35Q91, 35J15, 35J57, 91A13, 91A23, 49N70, 34C60, 37M05\n\n\\section*{Key words}\nMean field games, best reply strategy, stationary Fokker-Planck equation\n\n\\section*{Data statement: no new data were collected in the course of this research.}\n\n\\section{Abstract}\n\\noindent Mean field games (MFGs) and the best reply strategy (BRS) are two methods of describing competitive optimisation of systems of interacting agents. The latter can be interpreted as an approximation of the respective MFG system, see ~\\cite{Barker,Bertucci2019,Degond2017}. In this paper we present a systematic analysis and comparison of the two approaches in the stationary case. We provide novel existence and uniqueness results for the stationary boundary value problems related to the MFG and BRS formulations, and we present an analytical and numerical comparison of the two paradigms in a variety of modelling situations.\n\n\\section{Introduction}\n\n\\noindent Mean field games (MFGs) describe the dynamics of large interacting agent systems, in which individuals determine their optimal strategy by minimising a given cost functional. The extensive current literature is based on the original work of Lasry and Lions~\\cite{Lasry2006,Lasry2006a,Lasry2007} and Huang, Caines and Malham\\'e~\\cite{Huang2007a,Huang2006,Huang2006b}. MFGs have been used successfully in many different disciplines. 
A good overview is presented by Caines, Huang and Malham\\'e in~\\cite{Caines2017}, a detailed probabilistic approach by Carmona and Delarue in~\\cite{Delarue2018,Delarue2018a}.\n\nMFGs can be formulated as parabolic optimal control problems (under certain conditions on the cost). This connection can be used to construct approximations to MFGs. Degond, Liu and Ringhofer proposed a so-called best reply strategy (BRS) in~\\cite{Degond2014b}. It can be derived from the corresponding explicit in time discretisation of the respective optimal control problem, as in~\\cite{Barker,Degond2017}. More recently it has been derived in~\\cite{Bertucci2019} through considering a discounted optimal control problem and taking the discount factor to $\\infty$. Specifically, in~\\cite{Barker,Degond2017} the limit $\\Delta t\\rightarrow 0$ in the case of the following cost functional \n\\[ J^{\\Delta t}(\\alpha;m) = \\mathbb{E} \\left[ \\int_t^{t + \\Delta t} \\left(\\frac{\\alpha_s^2}{2} + \\frac{1}{\\Delta t} h(X_s,m(X_s))\\right)~ ds \\right] \\, , \\]\nis analysed. In~\\cite{Bertucci2019} the limiting behavior of MFG systems for cost functionals\n\\[ J^\\rho(\\alpha;m) = \\mathbb{E} \\left[ \\int_0^T \\left( \\frac{\\alpha_s^2}{2} + h(X_s,m(X_s)) \\right) e^{- \\rho s}~ds \\right] \\, , \\]\nas the temporal discount factor $\\rho$ tends to infinity is considered. In both cases the resulting dynamics depend instantaneously on the cost function $h$. Hence agents do not anticipate future dynamics in the respective limits, as they do in MFG approaches.\n\nAs far as the authors are aware, no systematic analysis and comparison of the two approaches has been done yet. However, the BRS is computationally less expensive and therefore more attractive in applications. Therefore it is important to understand under which circumstances the use of each model is appropriate and whether there are situations where the two models are comparable. 
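Schematically, in the $\\Delta t \\to 0$ limit of~\\cite{Barker,Degond2017} the anticipative optimal control is replaced (up to a positive relaxation constant) by an instantaneous gradient descent of the cost; with diffusion coefficient $\\sigma$, the BRS dynamics take the form\n\\[ dX_t = -\\nabla_x h\\big(X_t, m(X_t)\\big)\\,dt + \\sqrt{2\\sigma}\\,dW_t \\, , \\]\nso that the stationary density of the BRS solves a Fokker--Planck equation of the form $\\sigma \\Delta m + \\nabla \\cdot \\big(m \\nabla_x h(x,m)\\big) = 0$ with no-flux boundary conditions (the precise scaling constants follow from the cited derivations).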
This paper is a first step towards analysing the similarities and differences between the two models in a systematic way.\n\nThe existence and uniqueness of solutions to stationary MFGs has been studied extensively in previous literature (cf.~\\cite{Cardaliaguet2015,Ferreira2018,Gomes2017,Lasry2007}). However, apart from a small number of papers, e.g.~\\cite{Cirant2015,Ferreira2019}, almost all results focus on problems posed on the torus in order to avoid dealing with boundary conditions. In~\\cite{Benamou2017} the Dirichlet problem was motivated as a stopping time problem, and it was analysed in~\\cite{Ferreira2019}. In this paper we consider Neumann boundary conditions, which relate to a no-flux boundary. The only other paper we are aware of that deals with such a situation is~\\cite{Cirant2015}. There the authors prove existence of solutions to the MFG problem with non-local dependence on the distribution using a Schauder fixed point argument. They then perturb the solutions to prove existence in the case of a local dependence on the distribution. Other typical methods of proof use continuation methods~\\cite{Evangelista2018,Ferreira2018,Gomes2014}, Schauder's fixed point theorem~\\cite{Boccardo2016,Cirant2016} or variational approaches through energy minimisation problems~\\cite{Cesaroni2018,Evangelista2018a}. In our proof we exploit the linear-quadratic nature of the control. This was done in the time-dependent case in~\\cite{Gueant2012}, where the problem was reduced to a forward-backward system of heat equations, but to the best of our knowledge our method has not been considered in the stationary case. Our result sits nicely alongside the only other result for Neumann boundary conditions~\\cite{Cirant2015}. 
On the one hand the Hamiltonian used in~\\cite{Cirant2015} is more general than ours; on the other hand, the regularity assumptions and the form of the nonlinearity $h$ required in~\\cite{Cirant2015} are relaxed in our case.\n\nDue to assumption \\ref{a:hincrease}, which states that the running cost $h$ is an increasing function of the density, we are in the setting of monotone stationary MFGs. Existence and uniqueness of such MFGs has been studied extensively by Gomes and collaborators in a number of papers, e.g.~\\cite{Evangelista2018a,Gomes2017,Gomes2016}. Although the setting in these papers focusses on domains with periodic boundary conditions, it is worth mentioning the types of techniques used and how they compare to the method in this paper. In \\cite{Gomes2017} a Hopf--Cole transformation is used to prove existence and uniqueness of minimisers of an energy functional related to a specific case of an MFG with periodic boundary conditions and a cost $h$ that is logarithmic in the density. The concepts used in our existence and uniqueness proof are similar to those used in \\cite{Gomes2017}, though we are able to generalise the density dependence and consider Neumann boundary value problems. In \\cite{Gomes2014} the results of \\cite{Gomes2017} are extended using a continuation method. These results were further improved in \\cite{Ferreira2018}, where a combination of a continuation method and Minty's method is used. In both cases the methods allow the authors to perturb a problem for which existence and uniqueness are known to prove existence and uniqueness of the problem of interest. The methods used there and in many subsequent works (e.g. \\cite{Ferreira2019}) rely on monotonicity properties of the operators. 
In the work presented here, monotonicity similarly plays a central role in proving existence and uniqueness --- through both the use of the maximum principle to prove existence and uniqueness for strictly increasing functions $h$ and through the ability to uniformly perturb an increasing function into a strictly increasing function through the addition of a logarithmic congestion term.\n\nOur non-linear stationary BRS model~\\eqref{eq:brssystem} is an example of a stationary non-linear Fokker-Planck equation. Existence and uniqueness of solutions to non-linear Fokker-Planck equations have been studied extensively (see for example~\\cite{Carrillo2019,Carrillo2006} and references therein). Many results (e.g. in~\\cite{Carrillo2019,Chayes2010,Tugaut2014}) focus on non-local non-linear terms, i.e.\\ they consider Fokker-Planck equations of the form\n\\begin{equation} \\label{eq:intro-nonloc-fp}\n - \\frac{\\sigma^2}{2} \\nabla^2 m - \\nabla \\cdot (m \\nabla W*m) = 0 \\, ,\n\\end{equation}\nwith suitable boundary conditions. Here $\\nabla W*m = \\int_{\\Omega} \\nabla W(x - y) m(y)~dy$ is the usual convolution operator. For our model, we consider a local function of the density $h = h(x,m(x))$, rather than a convolution term. In this case there are a number of results on existence and uniqueness of solutions to the stationary model, as well as convergence of the dynamic model to the stationary version, see e.g.~\\cite{Biane2001,Carrillo2003,Carrillo2006,McCann1997}. These papers all consider the term $h$ to be either independent of $x$, i.e.\\ $h = h(m)$, or of the form $h = h_1(x) + h_2(m)$. So our result extends this case to more general local functions of the density. In previous literature the proof for the local case, as in~\\cite{McCann1997}, relies on a related energy functional, for which minimisers can be proven to exist and be unique. Then these minimisers are also solutions to the Fokker-Planck equation. 
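As a practical aside, stationary states of such non-local Fokker-Planck equations are often computed by iterating the associated Gibbs-type fixed-point map $m \\mapsto e^{-\\frac{2}{\\sigma^2} W*m}/Z$ (cf.~\\cite{Carrillo2019}). The following minimal numerical sketch (purely illustrative, not taken from the cited works) does this on a one-dimensional periodic grid with a hypothetical smooth kernel:

```python
import numpy as np

def stationary_nonlocal_fp(W, sigma=1.0, n=256, tol=1e-10, max_iter=5000):
    """Iterate m <- exp(-(2/sigma^2) * (W*m)) / Z on the periodic grid [0, 1).

    W*m is the periodic convolution int W(x - y) m(y) dy, computed via FFT;
    the constant Z normalises the density so that it integrates to one.
    """
    x = np.arange(n) / n
    dx = 1.0 / n
    Wk = np.fft.fft(W(x))                   # kernel samples in Fourier space
    m = 1.0 + 0.5 * np.cos(2 * np.pi * x)   # non-uniform initial density
    for _ in range(max_iter):
        conv = np.real(np.fft.ifft(Wk * np.fft.fft(m))) * dx  # (W*m)(x)
        g = np.exp(-2.0 / sigma**2 * conv)
        m_new = g / (g.sum() * dx)          # enforce the mass constraint
        if np.max(np.abs(m_new - m)) < tol:
            return x, m_new
        m = 0.5 * (m + m_new)               # damped update for stability
    return x, m

# illustrative kernel choice; any smooth periodic W is treated the same way
x, m = stationary_nonlocal_fp(lambda x: np.cos(2 * np.pi * x), sigma=1.0)
```

The same normalise-then-iterate structure reappears in the local setting of Section~\\ref{section:xu_brs}, where the convolution is replaced by the pointwise value $h(x,m(x))$.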
Our result takes a different approach, one that is more closely related to the case with a convolution term, as in~\\cite{Carrillo2019}. For the non-local case~\\eqref{eq:intro-nonloc-fp} it has been frequently shown (cf.~\\cite{Carrillo2019,Tamura1984}) that solutions of the PDE are equivalent to fixed points of a non-linear map \n\\[ T(m) = \\frac{1}{Z} e^{- \\frac{2}{\\sigma^2} W*m} \\, , \\quad \\text{where } Z = \\int_{\\Omega} e^{- \\frac{2}{\\sigma^2} W*m}~dx \\, . \\]\nWe approach existence and uniqueness of solutions to our PDE~\\eqref{eq:brssystem} in a similar vein, considering solutions to the implicit equation~\\eqref{eq:xu_brs_sys1}. While the proof in~\\cite{Carrillo2019} relies on Schauder's fixed point theorem, we are able to take advantage of $h$ being a local function of the density, so instead we use the implicit function theorem and intermediate value theorem to prove our result. \n\nThis paper is organised as follows. We start by briefly introducing the time-dependent MFG and BRS models in Section~\\ref{sec:dynamic_setup}, following a more detailed derivation presented in~\\cite{Barker,Degond2017}. We then describe how the dynamic problems relate to the stationary case. In Section~\\ref{sec:ex_unique_stat_sol} we present a proof of existence and uniqueness for the stationary BRS and MFG. Both proofs use similar arguments, relying on the observation that both models involve a stationary Fokker-Planck equation with integral constraints. In Section~\\ref{sec:quad potential} we describe an explicit solution to the MFG and BRS model. The explicit solution allows us to analyse for which problem-specific parameter ranges the solutions are comparable and for which they are not. In Section~\\ref{sec:numerical_sim} we illustrate the different behaviour of solutions to both models with numerical simulations. 
Finally, in Section~\\ref{sec:conclusion} we conclude by summarising the implications of the results found, specifically what they tell us about the similarities and differences between the two models. We also briefly comment on future directions for research. \n\n\\subsection{The dynamic MFG problem and the corresponding BRS}\\label{sec:dynamic_setup}\n\nFirst we briefly review the underlying modelling assumptions of MFGs and the respective BRS models. For ease of presentation we consider a quadratic cost on the control and restrict ourselves to the $d$-dimensional torus $\\T^d$ in the introduction. However, we will consider bounded domains with Neumann boundary conditions from Section~\\ref{sec:ex_unique_stat_sol} on. Note that some of the following arguments have not been proven for such a set-up. However, it is not unreasonable to assume that the following results extend naturally to the bounded domain case. \nConsider a distribution of agents which is absolutely continuous with respect to the Lebesgue measure. We denote the density of the distribution by a function $m:\\T^d \\to [0,\\infty)$. We take a representative agent, with state $X_t \\in \\T^d$, moving in this distribution according to the following SDE:\n\\[ \\begin{aligned}\n & dX_t = \\alpha_t dt + \\sigma dB_t \\\\\n & \\mathcal{L}(X_0) = m_0 \\, ,\n \\end{aligned} \\]\nwhere $\\mathcal{L}(X_0)$ denotes the law of the random variable $X_0$, $m_0$ is a given initial distribution of all agents, $\\sigma \\in (0,\\infty)$ denotes the size of idiosyncratic noise in the model and $B_t$ is a $d$-dimensional Wiener process. The function $\\alpha_t:[0,T] \\to \\T^d$ is a control chosen by the representative agent as a result of an optimisation problem, from a set of admissible controls $\\alpha \\in \\mathcal{A}$. 
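For intuition, the agent dynamics above can be sampled directly; the following is a minimal Euler--Maruyama sketch (our illustration, not code from the paper), assuming $d = 1$ with the torus identified with $[0,1)$ and a given feedback control \\texttt{alpha}:

```python
import numpy as np

def euler_maruyama(alpha, x0, sigma=0.2, T=1.0, n_steps=1000, seed=0):
    """Simulate dX_t = alpha(X_t, t) dt + sigma dB_t on the torus [0, 1)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.array(x0, dtype=float)
    for k in range(n_steps):
        drift = alpha(x, k * dt)
        x = x + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=x.shape)
        x %= 1.0  # periodic wrap: the state lives on the torus
    return x

# with zero control the empirical law of X_T simply spreads diffusively
samples = euler_maruyama(lambda x, t: 0.0, x0=np.full(10_000, 0.5))
```

Replacing the zero control by $-\\nabla u$ (MFG) or $-\\nabla \\left[ h(x,m(x)) \\right]$ (BRS) turns this into a particle approximation of the two Fokker-Planck equations discussed below.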
The representative agent takes the distribution of other agents, $m$, to be given and attempts to optimise the following functional\n\\[ J(\\alpha;m) = \\mathbb{E} \\left[ \\int_0^T \\left(\\frac{\\alpha_s^2}{2} + h(X_s,m(X_s))\\right)~ds \\right] \\, . \\]\nThe functional $J$ consists of a quadratic cost for the control $\\alpha_t$ and a density dependent cost function $h:\\T^d \\times (0,\\infty) \\to \\R$ over a finite time horizon $T \\in (0,\\infty)$. Then the optimal cost trajectory $u(x,t)$ is\n\\[ u(x,t) = \\inf_{\\alpha \\in \\mathcal{A}} \\mathbb{E} \\left[ \\left. \\int_t^T \\left(\\frac{\\alpha_s^2}{2} + h(X_s,m(X_s))\\right)~ds \\right| X_t = x \\right] \\, . \\]\nThe optimal control is given in terms of $u$ by $\\alpha_t^* = - \\nabla u(X_t,t)$. The optimal cost trajectory evolves backwards in time according to\n\\begin{gather*}\n \\partial_t u = \\frac{\\left| \\nabla u \\right|^2}{2} - h(x,m) - \\frac{\\sigma^2}{2} \\nabla^2 u \\\\\n u(x,T) = 0 \\, .\n\\end{gather*}\nWe complete the model by assuming all agents act in the same way as the representative agent, and so the backward PDE is coupled to a forward Fokker-Planck PDE describing the evolution of agents. So the full MFG model is given by\n\\begin{subequations} \\label{eq:dynmfg}\n \\begin{align}\n & \\partial_t u = \\frac{\\left| \\nabla u \\right|^2}{2} - h(x,m) - \\frac{\\sigma^2}{2} \\nabla^2 u \\\\\n & \\partial_t m = \\nabla \\cdot \\left[m \\nabla u \\right] + \\frac{\\sigma^2}{2} \\nabla^2 m\\\\\n & m(x,0) = m_0 \\\\\n & u(x,T) = 0 \\, . \n \\end{align}\n\\end{subequations}\nFollowing the approach in~\\cite{Barker,Degond2017}, the corresponding BRS model arises through considering a rescaled cost functional over a short, rolling time horizon\n\\[ J^{\\Delta t}(\\alpha;m) = \\mathbb{E} \\left[ \\left. \\int_t^{t + \\Delta t} \\left( \\frac{\\alpha_s^2}{2} + \\frac{1}{\\Delta t} h(X_s,m(X_s)) \\right) ds \\right| X_t = x \\right] \\, . 
\\]\nGoing through a similar procedure to the MFG problem, approximating the result up to $O(\\Delta t)$ and taking the limit $\\Delta t \\to 0$, the optimal control is given by $\\alpha_t = - \\left. \\left[ \\nabla h(x,m(x)) \\right] \\right|_{x = X_t}$. Again we complete the model by assuming all agents act in the same way as the representative agent. Then the distribution of agents evolves according to the Fokker-Planck equation\n\\begin{subequations}\\label{eq:brs}\n \\begin{align}\n \\partial_t m &= \\nabla \\cdot \\left[ \\left( \\nabla h(x,m(x)) \\right) m \\right] + \\frac{\\sigma^2}{2} \\nabla^2 m\\\\\n m(x,0) &= m_0 \\, .\n \\end{align}\n\\end{subequations}\nIn \\eqref{eq:brs} the dynamics of the agents are influenced by the current agent density only. Hence the anticipation behaviour, which is characteristic of MFGs, is `lost'. Only the current state drives the dynamics. We shall refer to equation~\\eqref{eq:brs} as the BRS in the following. \n\n\\subsection{From the dynamic problems to the stationary case} \\label{sec:stat_prob}\n\nFor the MFG the interpretation of the stationary problem is slightly subtle: in the dynamic case we are considering a problem set on a fixed time horizon, so we cannot simply set $\\partial_t u,\\partial_t m = 0$ and interpret the result as the long-time behaviour of the dynamic case. Instead we follow the work by Cardaliaguet, Lasry, Lions and Porretta~\\cite{Cardaliaguet2012}. To highlight the dependence of the MFG on the time horizon $T$ we use the notation $\\bar{u}^T,\\bar{m}^T$ for solutions satisfying~\\eqref{eq:dynmfg}. Then we define the rescaled functions $u^T$ and $m^T$ by\n\\[ u^T(x,t) = \\bar{u}^T(x,tT) \\, , \\quad m^T(x,t) = \\bar{m}^T(x,tT) \\, . 
\\]\nThen Theorem 1.2 in~\\cite{Cardaliaguet2012} states that under some mild assumptions on the data we have that as $T \\to \\infty$:\n\\[ \\begin{aligned}\n u^T - \\int_{\\T^d} u^T~dy \\to u \\, , \\quad & \\text{in} \\, L^2(\\T^d \\times (0,1)) \\\\\n \\frac{1}{T} u^T \\to (1 - t) \\lambda \\, , \\quad & \\text{in} \\, L^2(\\T^d \\times (0,1)) \\\\\n m^T \\to m \\, , \\quad & \\text{in} \\, L^p(\\T^d \\times (0,1)) \\, ,\n\\end{aligned} \\]\nwhere $p$ depends on the space dimension $d$. Then the triple $(m,u,\\lambda) \\in C^2(\\T^d) \\times C^2(\\T^d) \\times \\R$ satisfies the following stationary problem\n\\begin{subequations}\\label{eq:statmfg}\n \\begin{align} \n - \\frac{\\sigma^2}{2} \\nabla^2 m - \\nabla \\cdot (m \\nabla u) &= 0 \\\\\n - \\frac{\\sigma^2}{2} \\nabla^2 u + \\frac{| \\nabla u |^2}{2} - h(x,m) + \\lambda &= 0 \\\\\n \\int_{\\T^d} m~dx &= 1 \\\\\n \\int_{\\T^d} u~dx &= 0 \\, .\n \\end{align}\n\\end{subequations}\nThe corresponding stationary BRS model is obtained by setting $\\partial_t m=0$ in~\\eqref{eq:brs}. 
Hence we have\n\\begin{subequations} \\label{eq:statbrs}\n \\begin{align}\n \\nabla \\cdot \\left[ \\left( \\nabla h(x,m(x)) \\right) m \\right] + \\frac{\\sigma^2}{2} \\nabla^2 m &= 0 \\\\\n \\int_{\\T^d} m~dx &= 1 \\, .\n \\end{align}\n\\end{subequations}\nEquation~\\eqref{eq:statbrs} can be understood as either the long-time behaviour of the dynamic BRS or, under suitable convexity conditions (cf.~\\cite{McCann1997}), a competitive equilibrium distribution of the following minimisation problem\n\\begin{align}\\label{eq:Estat}\n\\min \\mathbb{E} \\left[ h(X_t,m(X_t)) \\right] \\, .\n\\end{align}\nBy competitive equilibrium we mean a stationary distribution $m$ for which $\\mathbb{E} \\left[ h(X,m(X)) \\right]$ is minimised when $\\mathcal{L}(X) = m$.\n\n\\section{Existence and Uniqueness of Stationary Solutions} \\label{sec:ex_unique_stat_sol}\nIn this section we will show that the MFG~\\eqref{eq:statmfg} and the BRS~\\eqref{eq:statbrs} admit unique solutions on a bounded domain $\\Omega$ with no-flux boundary conditions. We make the following assumptions on the function ${h(x,m): \\Omega \\times (0,\\infty) \\to \\R}$ and the domain $\\Omega$:\n\\begin{enumerate}[label=(A$_\\arabic*$)]\n\\item \\label{a:omega} $\\Omega \\subset \\R^d$ is an open bounded set with a $C^{2,\\alpha}$ boundary, for some $\\alpha \\in (0,1)$ and $d \\geq 1$.\n \\item $h(x,\\cdot)$ is an increasing function for every $x \\in \\Omega$.\\label{a:hincrease}\n \\item There exists a continuous function $g:(0,\\infty) \\to [0,\\infty)$ such that $\\sup_{x \\in \\Omega} |h(x,m)| \\leq g(m)$ for every $m \\in (0,\\infty)$. \\label{a:hbound}\n\\end{enumerate}\nSince $\\Omega$ is bounded, we can now define $|\\Omega| = \\int_{\\Omega} dx$, where this integral is with respect to the standard Lebesgue measure. Furthermore, we denote the unit outer normal vector by $\\nu$. 
\\\nFor the BRS we further assume:\n\\begin{enumerate}[label=(BRS$_\\arabic*$)]\n \\item $h \\in C^2 \\left( \\Omega \\times (0,\\infty) \\right) \\cap C^1 \\left( \\bar{\\Omega} \\times(0,\\infty) \\right)$. \\label{a:brs1}\n \\item There exists a continuous function $f: (0,\\infty) \\to [0,\\infty)$ such that $\\sup_{x \\in \\Omega} | \\nabla_x h(x,m)| \\leq f(m)$ for every $m \\in (0,\\infty)$. \\label{a:brs2}\n\\end{enumerate}\nWhile for the MFG we will assume:\n\\begin{enumerate}[label=(MFG$_\\arabic*$)]\n \\item $h \\in C \\left( \\Omega \\times (0,\\infty) \\right)$. \\label{a:mfg_hreg}\n \\item $\\lim_{m \\to 0} \\sup_{x \\in \\Omega} h(x,m) < \\inf_{x \\in \\Omega} h \\left( x,\\frac{1}{|\\Omega|} \\right)$. \\label{a:mfg_mto0}\n \\item $\\sup_{x \\in \\Omega} h \\left( x,\\frac{1}{|\\Omega|} \\right) < \\lim_{m \\to \\infty} \\inf_{x \\in \\Omega} h(x,m)$. \\label{a:mfg_mtoinf}\n\\end{enumerate}\n\n\\subsection*{Discussion of assumptions:} Since we are interested in classical solutions (generally in $C^2\\left(\\Omega\\right) \\cap C^1\\left(\\bar{\\Omega}\\right)$), the above assumptions on the cost and domain ensure sufficient regularity and boundedness of $h$. It is worth mentioning why we need $h$ to be increasing. This assumption of an increasing function can be related to ``crowd aversion''. When $h$ is increasing in $m$, areas of high density are more highly penalised than low density areas in the optimisation problem related to the MFG and BRS (see Section~\\ref{sec:dynamic_setup}). This prevents ``accumulation points'' from occurring, where higher density is preferable and a Dirac delta might be introduced into the solution. As well as being a problem for regularity, the position of the Dirac deltas would be sensitive to the data and so uniqueness could not be guaranteed.\n\nAssumptions~\\ref{a:mfg_mto0} and~\\ref{a:mfg_mtoinf} are less intuitive than the rest of the assumptions. The MFG problem has two integral constraints related to it. 
We prove that these constraints can be satisfied using the intermediate value theorem. In doing so we show that two functions $m_1,m_2$ exist such that $m_1(x) \\leq \\frac{1}{|\\Omega|} \\leq m_2(x)$ for every $x \\in \\Omega$. The existence of these functions is guaranteed if assumptions~\\ref{a:mfg_mto0} and~\\ref{a:mfg_mtoinf} hold. As it may not be initially clear what kind of function $h$ satisfies our requirements, some sufficient conditions if $h(x,m) = h_1(x) + h_2(m)$ are:\n\\begin{itemize}\n \\item $h_1 \\in C^2\\left(\\Omega\\right) \\cap C^1\\left(\\bar{\\Omega}\\right)$\n \\item $h_2 \\in C^2\\left((0,\\infty)\\right)$\n \\item $h_2$ is increasing\n \\item $\\lim_{m \\to 0} h_2(m) < h_2\\left( \\frac{1}{|\\Omega|} \\right) + \\inf_{x \\in \\Omega} h_1(x)$\n \\item $\\lim_{m \\to \\infty} h_2(m) > h_2\\left( \\frac{1}{|\\Omega|} \\right) + \\sup_{x \\in \\Omega} h_1(x)$\n\\end{itemize}\n\n\\subsection{Best Reply Strategy} \\label{section:xu_brs}\nWe start by defining the notion of classical solutions we are aiming for. \n\\begin{definition}\n Let assumptions~\\ref{a:omega}--\\ref{a:hbound} and~\\ref{a:brs1}--\\ref{a:brs2} be satisfied. Then the stationary BRS boundary value problem is to find a function $m: \\Omega \\to (0,\\infty)$ satisfying\n \\begin{subequations}\\label{eq:brssystem}\n \\begin{align}\n m \\in C^2\\left(\\Omega\\right) \\cap C^1 \\left(\\bar{\\Omega}\\right)&\\\\\n - \\frac{\\sigma^2}{2} \\nabla^2 m - \\nabla \\cdot (m \\nabla [h(x,m)]) &= 0 \\, , \\quad x \\in \\Omega \\label{eq:xu_brs}\\\\ \n - \\frac{\\sigma^2}{2} \\nabla m \\cdot \\nu - m \\nabla [h(x,m)] \\cdot \\nu &= 0 \\, , \\quad x \\in \\partial \\Omega \\label{eq:xu_brs_bc} \\\\\n \\int_{\\Omega} m\\, dx &= 1 \\, . 
\\label{eq:xu_brs_norm}\n \\end{align}\n \\end{subequations}\n\\end{definition}\nThroughout this subsection we will assume~\\ref{a:omega}--\\ref{a:hbound} and~\\ref{a:brs1}--\\ref{a:brs2} hold.\n\n\\begin{lemma} \\label{lm:xu_brs_1}\nFor any $Z \\in (0, \\infty)$ there exists a unique $m_Z: \\Omega \\to (0,\\infty)$ such that \n\\begin{equation} \\label{eq:xu_brs_mz}\n m_Z(x) = \\frac{1}{Z} e^{- \\frac{2}{\\sigma^2}h(x,m_Z(x))} \\, .\n\\end{equation}\nFurthermore, $m_Z \\in C^2 \\left( \\Omega \\right) \\cap C^1\\left(\\bar{\\Omega}\\right)$.\n\\end{lemma}\n\\begin{proof}\n Fix $Z \\in (0,\\infty)$ and $x \\in \\Omega$. Consider $G_{Z,x}: (0,\\infty) \\to \\R$ given by\n \\[ G_{Z,x}(m) = \\frac{1}{Z} e^{-\\frac{2}{\\sigma^2}h(x,m)} - m \\, . \\]\n This is a strictly decreasing function of $m$. Furthermore, since $h$ is increasing and continuous with respect to $m$ and $\\sup_{x \\in \\Omega} h(x,m) \\leq g(m)$, we must have $\\lim_{m \\to 0} \\sup_{x \\in \\Omega} h(x,m) \\leq \\sup_{x \\in \\Omega} h(x,1) \\leq g(1) < \\infty$. So we get the following limit inequality as $m \\to 0$, which holds uniformly in $x$:\n \\[ \\lim_{m \\to 0} G_{Z,x}(m) > 0 \\, . \\]\n So there exists some $\\epsilon > 0$, independent of $x$, such that $G_{Z,x}(\\epsilon) > 0$. Furthermore, after defining a constant ${C = \\inf_{x \\in \\Omega} h(x,\\epsilon)}$, it is clear that\n \\[ G_{Z,x}\\left( \\frac{1}{Z} e^{-\\frac{2}{\\sigma^2}C} + \\epsilon \\right) \\leq \\frac{1}{Z} e^{-\\frac{2}{\\sigma^2} h(x,\\epsilon)} - \\frac{1}{Z} e^{-\\frac{2}{\\sigma^2}C} - \\epsilon \\leq - \\epsilon < 0 \\, .\\]\n Therefore by the intermediate value theorem and strict monotonicity of $G_{Z,x}$ there exists a unique ${m = m_Z(x) > 0}$ such that $G_{Z,x}(m_Z(x)) = 0$. Hence the first result follows. 
In order to show the regularity requirement that ${m_Z \\in C^2\\left(\\Omega\\right) \\cap C^1\\left(\\bar{\\Omega}\\right)}$, we need to show\n \\begin{enumerate}\n \\item $m_Z \\in C^2\\left(\\Omega\\right)$.\n \\item For any $x \\in \\partial \\Omega$, $\\lim_{y \\to x, \\, y \\in \\Omega} m_Z(y)$ exists.\n \\item For any $x \\in \\partial \\Omega$, $\\lim_{y \\to x, \\, y \\in \\Omega} \\nabla m_Z(y)$ exists.\n \\end{enumerate}\n The assertion that $m_Z \\in C^2(\\Omega)$ follows from the implicit function theorem. For the implicit function theorem to hold we require that $G_{Z,x}(m)$ is a $C^2$ function with respect to $x$ and $m$ at $(x,m_Z(x))$ and that $G_{Z,x}'(m_Z(x)) \\neq 0$. The first requirement holds by our assumption that $h \\in C^2\\left(\\Omega \\times (0,\\infty)\\right)$; the second holds since\n \\[ G_{Z,x}'(m) = - \\frac{2}{\\sigma^2 Z} \\partial_m h(x,m) e^{- \\frac{2}{\\sigma^2} h(x,m)} - 1 \\leq -1 < 0 \\, . \\]\n To prove that $\\lim_{y \\to x, \\, y \\in \\Omega} m_Z(y)$ exists for every $x \\in \\partial \\Omega$ it is enough to show $m_Z$ is uniformly Lipschitz in $\\Omega$. Since $m_Z \\in C^2(\\Omega)$ it is therefore enough to show $\\| \\nabla m_Z\\|_{\\infty} < \\infty$. Note that $C, \\epsilon$ defined above are independent of $x$, so we must have $ \\epsilon \\leq \\|m_Z\\|_{\\infty} \\leq \\frac{1}{Z} e^{-\\frac{2}{\\sigma^2}C} + \\epsilon < \\infty$. Then by differentiating the implicit formula for $m_Z$ we get\n \\begin{equation} \\label{eq:xu_brs_implicit_grad}\n \\nabla m_Z = \\frac{- 2 m_Z \\nabla_x h(x,m_Z)}{\\sigma^2 + 2 m_Z \\partial_m h(x,m_Z)} \\, .\n \\end{equation}\n But $\\partial_m h \\geq 0$ since $h$ is increasing, and $m_Z$ is uniformly bounded as seen above. 
Similarly, using $f$ from assumption~\\ref{a:brs2} we find $\\|\\nabla_x h(\\cdot,m_Z(\\cdot))\\|_{\\infty} < \\infty$, hence $\\|\\nabla m_Z\\|_{\\infty} < \\infty$.\n \n To prove the final assertion that $\\lim_{y \\to x, \\, y \\in \\Omega} \\nabla m_Z(y)$ exists for any $x \\in \\partial \\Omega$, we note that the formula for $\\nabla m_Z$ is given by~\\eqref{eq:xu_brs_implicit_grad}. Since $m_Z \\in C^0\\left(\\bar{\\Omega}\\right)$, $\\|m_Z\\|_{\\infty} < \\infty$, $\\nabla_x h, \\partial_m h \\in C^0\\left(\\bar{\\Omega} \\times (0,\\infty)\\right)$ and $\\sigma^2 + 2 m_Z \\partial_m h(x,m_Z) \\geq \\sigma^2$, the right hand side of~\\eqref{eq:xu_brs_implicit_grad} has a limit as $y \\to x$ for every $x \\in \\partial \\Omega$. Hence $\\nabla m_Z$ does as well.\n\\end{proof}\n\n\\begin{definition}\nLet $m_Z$ be given by~\\eqref{eq:xu_brs_mz}. Then we define the function $\\Phi:(0, \\infty) \\to (0,\\infty)$ by\n \\[ \\Phi(Z) = \\int_{\\Omega} m_Z~dx \\, . \\]\n\\end{definition}\n\n\\begin{lemma} \\label{lm:xu_brs_2}\n There exist $\\bar{Z}, \\underaccent{\\bar}{Z} \\in (0,\\infty)$ such that $\\Phi\\left(\\bar{Z}\\right) \\geq 1$ and $\\Phi\\left(\\underaccent{\\bar}{Z}\\right) \\leq 1$.\n\\end{lemma}\n\\begin{proof}\n Take $C_1 = \\inf_{x \\in \\Omega} h \\left( x,\\frac{1}{|\\Omega|} \\right)$. So $C_1 \\in \\left[- g \\left( \\frac{1}{|\\Omega|} \\right),g \\left( \\frac{1}{|\\Omega|} \\right) \\right]$. Then, with $\\underaccent{\\bar}{Z} = |\\Omega| e^{- \\frac{2}{\\sigma^2} C_1} \\in (0,\\infty)$, we have\n \\[ G_{\\underaccent{\\bar}{Z},x} \\left( \\frac{1}{|\\Omega|} \\right) \\leq 0 \\, \\text{ for every} \\, x \\in \\Omega \\, . \\]\n Hence, $m_{\\underaccent{\\bar}{Z}}(x) \\leq \\frac{1}{|\\Omega|}$ because $G_{Z,x}$ is a strictly decreasing function. So \n \\[ \\Phi\\left(\\underaccent{\\bar}{Z}\\right) \\leq \\|m_{\\underaccent{\\bar}{Z}}\\|_{\\infty} |\\Omega| \\leq 1 \\, . 
\\]\n We can similarly find $\\bar{Z}$ by taking $C_2 = \\sup_{x \\in \\Omega} h \\left( x,\\frac{1}{|\\Omega|} \\right)$. So $C_2 \\in \\left[- g \\left( \\frac{1}{|\\Omega|} \\right),g \\left( \\frac{1}{|\\Omega|} \\right) \\right]$. Then, with $\\bar{Z} = |\\Omega| e^{- \\frac{2}{\\sigma^2} C_2} \\in (0,\\infty)$, we have\n \\[ G_{\\bar{Z},x} \\left( \\frac{1}{|\\Omega|} \\right) \\geq 0 \\, \\text{ for every} \\, x \\in \\Omega \\, . \\]\n Hence, $m_{\\bar{Z}}(x) \\geq \\frac{1}{|\\Omega|}$ because $G_{Z,x}$ is a strictly decreasing function. So \n \\[ \\Phi\\left(\\bar{Z}\\right) \\geq \\inf_{x \\in \\Omega} m_{\\bar{Z}}(x) \\, |\\Omega| \\geq 1 \\, . \\]\n\\end{proof}\n\n\\begin{lemma} \\label{lm:xu_brs_3}\n There exists a unique $Z^* \\in (0,\\infty)$ such that $\\Phi\\left(Z^*\\right) = 1$.\n\\end{lemma}\n\n\\begin{proof}\n If $\\Phi$ is continuous and strictly decreasing, then the intermediate value theorem and Lemma~\\ref{lm:xu_brs_2} give the result. We start by proving that $\\Phi$ is strictly decreasing. First note that if $m_Z(x)$ is strictly decreasing in $Z$ for every $x$ then $\\Phi$ must be strictly decreasing because $m_Z$ is continuous with respect to $x$. Take $Z_1 < Z_2$. Then $m_{Z_1}(x)$ satisfies\n \\[ \\frac{1}{Z_1} e^{-\\frac{2}{\\sigma^2}h(x,m_{Z_1}(x))} - m_{Z_1}(x) = 0 \\, . \\]\n So\n \\[ \\frac{1}{Z_2} e^{-\\frac{2}{\\sigma^2}h(x,m_{Z_1}(x))} - m_{Z_1}(x) < \\frac{1}{Z_1} e^{-\\frac{2}{\\sigma^2}h(x,m_{Z_1}(x))} - m_{Z_1}(x) = 0 \\, . \\]\nHence $G_{Z_2,x}(m_{Z_1}(x)) < 0$. Then $m_{Z_2}(x) < m_{Z_1}(x)$ for all $x \\in \\Omega$ since $G_{Z,x}$ is a strictly decreasing function. \nTo show that $\\Phi$ is continuous at $Z \\in (0,\\infty)$, take $0 < \\epsilon < Z' < Z$. 
Then\n \\[ \\begin{aligned}\n |\\Phi(Z) - \\Phi(Z')| & = \\Phi(Z') - \\Phi(Z) = \\int_{\\Omega} m_{Z'}(x)~dx - \\Phi(Z) \\\\\n & = \\frac{Z}{Z'} \\int_{\\Omega} \\frac{1}{Z} e^{- \\frac{2}{\\sigma^2} h(x,m_{Z'}(x))}~dx - \\Phi(Z) \\\\\n & \\leq \\frac{Z}{Z'} \\int_{\\Omega} \\frac{1}{Z} e^{- \\frac{2}{\\sigma^2} h(x,m_Z(x))}~dx - \\Phi(Z) \\leq \\frac{\\Phi(Z)}{\\epsilon} (Z - Z') \\leq \\frac{\\Phi(\\epsilon)}{\\epsilon} (Z - Z') \\, .\n \\end{aligned} \\]\nBy exchanging $Z$ and $Z'$ we can similarly show the analogous result for $\\epsilon < Z < Z'$, therefore $\\Phi$ is locally Lipschitz and hence continuous.\n\\end{proof}\n\\begin{theorem}\n There exists a unique solution $m: \\Omega \\to (0,\\infty)$ to the stationary BRS~\\eqref{eq:brssystem}.\n\\end{theorem}\n\\begin{proof}\n Take $m(x) = m_{Z^*}(x)$, with $Z^*$ defined as in Lemma~\\ref{lm:xu_brs_3}. Then, by Lemmas~\\ref{lm:xu_brs_1} and~\\ref{lm:xu_brs_3}, there exists a unique $m: \\Omega \\to (0,\\infty)$ satisfying\n \\begin{subequations} \\label{eq:xu_brs_sys1}\n \\begin{align}\n & m \\in C^2\\left(\\Omega\\right) \\cap C^1\\left(\\bar{\\Omega}\\right) \\\\\n & \\text{There exists } Z \\in (0,\\infty) \\text{ such that } m = \\frac{1}{Z} e^{- \\frac{2}{\\sigma^2}h(x,m)} \\, , \\quad x \\in \\Omega \\\\\n & \\int_{\\Omega} m~dx = 1 \\, .\n \\end{align}\n \\end{subequations}\n Now for any $m:\\Omega \\to (0,\\infty)$ we can define $\\phi(m)$ by $\\phi(m) = e^{- \\frac{2}{\\sigma^2} h(x,m)}$. If $m$ is a solution to~\\eqref{eq:brssystem}, then $\\frac{m}{\\phi(m)} = m e^{\\frac{2}{\\sigma^2} h(x,m)}$ and so $\\frac{m}{\\phi(m)} \\in H^1\\left(\\Omega\\right)$ because $m \\in C^1\\left(\\bar{\\Omega}\\right)$, $h \\in C^1\\left(\\bar{\\Omega} \\times (0,\\infty)\\right)$ and $h$ is increasing in $m$. 
Therefore a solution to~\\eqref{eq:brssystem} is equivalent to a solution of\n \\begin{subequations} \\label{eq:xu_brs_sys2}\n \\begin{align}\n m \\in C^2\\left(\\Omega\\right) \\cap C^1\\left(\\bar{\\Omega}\\right)& \\label{eq:xu_brs_sys2_reg1} \\\\\n \\frac{m}{\\phi(m)} \\in H^1(\\Omega)& \\label{eq:xu_brs_sys2_reg2}\\\\\n \\nabla \\cdot \\left(\\phi(m) \\nabla \\left( \\frac{m}{\\phi(m)} \\right)\\right) &= 0 \\, , \\quad x \\in \\Omega \\label{eq:xu_brs_sys2_pde}\\\\\n \\phi(m) \\nabla \\left( \\frac{m}{\\phi(m)} \\right) \\cdot \\nu &= 0 \\, , \\quad x \\in \\partial \\Omega \\label{eq:xu_brs_sys2_bc}\\\\\n \\int_{\\Omega} m~dx &= 1 \\, .\n \\end{align}\n \\end{subequations}\n Now if we multiply~\\eqref{eq:xu_brs_sys2_pde} by $\\frac{m}{\\phi(m)}$, and integrate over $\\Omega$, then using Green's formula and the boundary condition~\\eqref{eq:xu_brs_sys2_bc} we get\n \\[ 0 = \\int_{\\Omega} \\frac{m}{\\phi(m)} \\nabla \\cdot \\left(\\phi(m) \\nabla \\left(\\frac{m}{\\phi(m)} \\right)\\right)~dx = - \\int_{\\Omega} \\phi(m) \\left| \\nabla \\left(\\frac{m}{\\phi(m)} \\right) \\right|^2~dx \\, . \\]\n But $\\phi(m) > 0$ and $\\left| \\nabla \\left(\\frac{m}{\\phi(m)} \\right) \\right|^2 \\geq 0$ for every $x \\in \\Omega$. Hence this is only true if $\\nabla \\left(\\frac{m}{\\phi(m)} \\right) = 0$ for every $x \\in \\Omega$, i.e. if there exists $Z \\in (0,\\infty)$ such that $m = \\frac{1}{Z}\\phi(m)$. Conversely, if $m \\in C^2\\left(\\Omega\\right) \\cap C^1\\left(\\bar{\\Omega}\\right)$ and there exists $Z \\in (0,\\infty)$ such that $m = \\frac{1}{Z}\\phi(m)$, then $m$ satisfies~\\eqref{eq:xu_brs_sys2_reg1}--\\eqref{eq:xu_brs_sys2_bc}. So a solution of~\\eqref{eq:xu_brs_sys2} is equivalent to a solution of~\\eqref{eq:xu_brs_sys1}. Therefore the systems~\\eqref{eq:brssystem} and~\\eqref{eq:xu_brs_sys1} are equivalent. 
Hence we have shown existence and uniqueness of solutions to~\\eqref{eq:brssystem} by proving existence and uniqueness of solutions to~\\eqref{eq:xu_brs_sys1}.\n\\end{proof}\n\n\\subsection{Mean Field Games}\nNext we discuss existence and uniqueness of classical solutions to~\\eqref{eq:statmfg}; the corresponding boundary value problem is defined as follows.\n\\begin{definition}\n The stationary MFG boundary value problem is to find $m:\\Omega \\to (0,\\infty)$, $u:\\Omega \\to \\R$ and $\\lambda \\in \\R$ satisfying the following PDE system\n \\begin{subequations} \\label{eq:xu_mfg}\n \\begin{align}\n m \\in C^2\\left(\\Omega\\right) \\cap C^1\\left(\\bar{\\Omega}\\right)& \\label{eq:xu_mfg_mreg} \\\\\n u \\in C^2\\left(\\Omega\\right) \\cap C^1\\left(\\bar{\\Omega}\\right)& \\label{eq:xu_mfg_ureg}\\\\\n - \\frac{\\sigma^2}{2} \\nabla^2 m - \\nabla \\cdot (m \\nabla u) &= 0 \\, , \\quad x \\in \\Omega \\label{eq:xu_mfg_pde1} \\\\\n - \\frac{\\sigma^2}{2} \\nabla^2 u + \\frac{| \\nabla u |^2}{2} - h(x,m) + \\lambda &= 0 \\, , \\quad x \\in \\Omega \\label{eq:xu_mfg_pde2}\\\\\n - \\frac{\\sigma^2}{2} \\nabla m \\cdot \\nu &= 0 \\, , \\quad x \\in \\partial \\Omega \\label{eq:xu_mfg_bc1} \\\\\n - \\nabla u \\cdot \\nu &= 0 \\, , \\quad x \\in \\partial \\Omega \\label{eq:xu_mfg_bc2} \\\\\n \\int_{\\Omega} m~dx &= 1, \\label{eq:xu_mfg_ic1}\\\\\n \\int_{\\Omega} u~dx &= 0 \\, . 
\\label{eq:xu_mfg_ic2}\n \\end{align}\n\\end{subequations}\n\\end{definition}\n\n\\begin{remark} \\label{remark:xu_mfg_msol}\n Here, following the method of Section~\\ref{section:xu_brs}, we note that for any $u \\in C^2\\left(\\Omega\\right) \\cap C^1\\left(\\bar{\\Omega}\\right)$, a solution $m$ of~\\eqref{eq:xu_mfg_mreg},~\\eqref{eq:xu_mfg_pde1},~\\eqref{eq:xu_mfg_bc1},~\\eqref{eq:xu_mfg_ic1} is equivalent to a solution of\n \\begin{subequations} \\label{eq:xu_mfg_mvar}\n \\begin{align}\n &m \\in C^2\\left(\\Omega\\right) \\cap C^1\\left(\\bar{\\Omega}\\right)\\\\\n m &= \\frac{1}{Z} e^{- \\frac{2}{\\sigma^2} u} \\, , \\quad x \\in \\Omega \\\\\n Z &= \\int_{\\Omega} e^{- \\frac{2}{\\sigma^2} u}~dx \\, . \\label{eq:xu_mfg_mvar_ic}\n \\end{align}\n \\end{subequations}\n Then by the arguments in Section~\\ref{section:xu_brs}, a unique $m$ satisfying~\\eqref{eq:xu_mfg_mvar} exists and is the unique solution to~\\eqref{eq:xu_mfg_mreg},~\\eqref{eq:xu_mfg_pde1},~\\eqref{eq:xu_mfg_bc1},~\\eqref{eq:xu_mfg_ic1}. 
So from now on we only consider that solution.\n\end{remark}\n\n\n\begin{proposition} \label{proposition:xu_mfg_transform}\n There exists a unique solution $(m,u,\lambda) \in \left[ C^2\left(\Omega\right) \cap C^1\left(\bar{\Omega}\right) \right] \times \left[ C^2\left(\Omega\right) \cap C^1\left(\bar{\Omega}\right) \right] \times \R$ to the stationary MFG boundary value problem if and only if there exists a unique solution $(u,\lambda,Z) \in \left[ C^2\left(\Omega\right) \cap C^1\left(\bar{\Omega}\right) \right] \times \R \times (0,\infty)$ to\n \begin{subequations}\label{eq:xu_mfg_sys}\n \begin{align}\n u \in C^2\left(\Omega\right) \cap C^1\left(\bar{\Omega}\right) \label{eq:xu_mfg_bvp1}&\\\n - \frac{\sigma^2}{2} \nabla^2 u + \frac{| \nabla u |^2}{2} - h\left( x,\frac{1}{Z} e^{- \frac{2}{\sigma^2} u} \right) + \lambda &= 0 \, , \quad x \in \Omega \label{eq:xu_mfg_bvp2} \\\n - \nabla u \cdot \nu &= 0 \, , \quad x \in \partial \Omega \label{eq:xu_mfg_bvp3} \\ \n \int_{\Omega} \frac{1}{Z} e^{- \frac{2}{\sigma^2} u}~dx &= 1, \label{eq:xu_mfg_bvp4} \\\n \int_{\Omega} u~dx &= 0 \label{eq:xu_mfg_bvp5} \, .\n \end{align}\n \end{subequations}\n\end{proposition}\n\n\begin{proof}\n First assume a unique solution $(m,u,\lambda)$ to~\eqref{eq:xu_mfg} exists; then, thanks to Remark~\ref{remark:xu_mfg_msol}, we have $m = \frac{1}{Z} e^{-\frac{2}{\sigma^2}u}$, for $Z$ satisfying~\eqref{eq:xu_mfg_mvar_ic}. Then the triple $(u,\lambda,Z)$ is clearly a solution to~\eqref{eq:xu_mfg_sys}. Furthermore, suppose another solution $(u',\lambda',Z')$ to~\eqref{eq:xu_mfg_sys} exists. Then $(m',u',\lambda')$, with $m' = \frac{1}{Z'} e^{- \frac{2}{\sigma^2} u'}$, is a solution to~\eqref{eq:xu_mfg}.
But since we assumed such solutions are unique, we have $(m',u',\lambda') = (m,u,\lambda)$ and hence $(u',\lambda',Z') = (u,\lambda,Z)$, so the solution to~\eqref{eq:xu_mfg_sys} is unique.\\\n Next we assume that a unique solution $(u,\lambda,Z)$ to~\eqref{eq:xu_mfg_sys} exists. Then, defining $m = \frac{1}{Z} e^{- \frac{2}{\sigma^2} u}$, $(m,u,\lambda)$ is a solution to~\eqref{eq:xu_mfg}. Now suppose $(m',u',\lambda')$ is another solution; then (again using Remark~\ref{remark:xu_mfg_msol}) $m' = \frac{1}{Z'} e^{- \frac{2}{\sigma^2} u'}$, where $Z'$ satisfies~\eqref{eq:xu_mfg_mvar_ic}. So $(u',\lambda',Z')$ satisfies~\eqref{eq:xu_mfg_sys}. By uniqueness $(u',\lambda',Z') = (u,\lambda,Z)$ and so $(m,u,\lambda)$ is also unique.\n\end{proof}\n\n\begin{theorem} \label{thm:xu_mfg1}\n There exists a unique solution $(m,u,\lambda)$ of the MFG system~\eqref{eq:xu_mfg}.\n\end{theorem}\n\n\begin{proof}[Proof (outline)]\n First note, as a result of Proposition~\ref{proposition:xu_mfg_transform}, we only need to prove existence and uniqueness of a solution to~\eqref{eq:xu_mfg_sys}, and existence and uniqueness for the MFG system~\eqref{eq:xu_mfg} will follow. The proof is split into the following steps:\n \begin{enumerate}\n \item Show that for any pair of constants $(\lambda,Z)$ there exists a unique solution, denoted by $u_{\lambda,Z}$, to~\eqref{eq:xu_mfg_bvp1}--\eqref{eq:xu_mfg_bvp3} (see Propositions \ref{prop:xu_mfg_1} and \ref{prop:xu_mfg_cont}).\n \item Show that for any constant $Z$ there exists a unique $\lambda = \lambda(Z)$ such that $u_{\lambda(Z),Z}$ satisfies~\eqref{eq:xu_mfg_bvp5} (see Proposition \ref{prop:xu_mfg_lambda}).\n \item Show that there exists a unique $Z = Z^*$ such that $u_{\lambda(Z^*),Z^*}$ satisfies~\eqref{eq:xu_mfg_bvp4} (see Proposition \ref{prop:xu_mfg_Z}).\n \end{enumerate}\n \n Then $\left(u_{\lambda\left(Z^*\right),Z^*},\lambda\left(Z^*\right),Z^*\right)$ is a solution to~\eqref{eq:xu_mfg_sys}.
Uniqueness follows from the uniqueness obtained at each step of this outline. We prove step 1 using a variant of the method of upper and lower solutions in the spirit of~\cite{Schmitt1978}, so that it applies to our case of Neumann boundary conditions. We prove steps 2 and 3 by iteratively using the intermediate value theorem, in a similar manner to the proof of Lemma~\ref{lm:xu_brs_3}. Note that we will first do this for $h$ strictly increasing in $m$. Then, by considering ${h_{\epsilon}(x,m) = h(x,m) + \epsilon \log\left(|\Omega| m\right)}$ and taking the limit as $\epsilon \to 0$, we will prove it in the more general setting when $h$ is increasing.\n\end{proof}\n\n\begin{lemma} \label{lm:xu_mfg_constbound}\n There exist $\Lambda_1, \Lambda_2 \in [- \infty, \infty]$ with $\Lambda_1 < \Lambda_2$ such that for every $\lambda \in \left( \Lambda_1, \Lambda_2 \right)$ and $Z > 0$, there exist two constants $\underaccent{\bar}{u}_{\lambda,Z} \leq 0 \leq \bar{u}_{\lambda,Z}$ satisfying \n \begin{equation} \label{eq:xu_mfg_constbound}\n - h \left(x, \frac{1}{Z} e^{- \frac{2}{\sigma^2} \underaccent{\bar}{u}_{\lambda,Z}} \right) + \lambda \leq 0 \leq - h \left(x, \frac{1}{Z} e^{- \frac{2}{\sigma^2} \bar{u}_{\lambda,Z}} \right) + \lambda \, .\n \end{equation}\n\end{lemma}\n\n\begin{proof}\n Take $\Lambda_1 = \lim_{m \to 0} \sup_{x \in \Omega} h(x,m)$ and $\Lambda_2 = \lim_{m \to \infty} \inf_{x \in \Omega} h(x,m)$. First, $\Lambda_1 < \Lambda_2$ follows by combining assumptions~\ref{a:mfg_mto0} and~\ref{a:mfg_mtoinf}.
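For instance, for the congestion term $h(x,m) = \beta x^2 + \log m$ considered in Section~\ref{sec:quad potential} (restricted to a bounded domain), $\Lambda_1 = - \infty$ and $\Lambda_2 = \infty$, so every $\lambda \in \R$ is admissible.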
Then, since $h$ is continuous and increasing in $m$, for any $\lambda \in \left( \Lambda_1, \Lambda_2 \right)$ there exist $M_{\lambda}^1, M_{\lambda}^2 \in (0,\infty)$ such that $h(x,m) \leq \lambda$ for all $(x,m) \in \Omega \times \left(0,M_{\lambda}^1\right]$, and similarly $h(x,m) \geq \lambda$ for all $(x,m) \in \Omega \times \left[M_{\lambda}^2,\infty\right)$. We define the upper and lower constants for $\lambda \in \left( \Lambda_1, \Lambda_2 \right)$ as \n \[ \begin{aligned}\n \bar{u}_{\lambda,Z} &= \max \left( - \frac{\sigma^2}{2} \log \left( Z M_{\lambda}^1 \right), 0 \right) \\\n \underaccent{\bar}{u}_{\lambda,Z} &= \min \left( - \frac{\sigma^2}{2} \log \left( Z M_{\lambda}^2 \right), 0 \right) \, .\n \end{aligned} \]\n Then clearly \n \[ - h \left( x, \frac{1}{Z} e^{- \frac{2}{\sigma^2}\bar{u}_{\lambda,Z}} \right) + \lambda = - h \left( x,\min \left(M_{\lambda}^1, \frac{1}{Z} \right) \right) + \lambda \geq - h \left( x,M_{\lambda}^1 \right) + \lambda \geq 0 \, , \]\n while the reverse inequality is true for $\underaccent{\bar}{u}_{\lambda,Z}$. Hence $\bar{u}_{\lambda,Z}, \underaccent{\bar}{u}_{\lambda,Z}$ are the required upper and lower constants.\n\end{proof}\n\n\begin{proposition} \label{prop:xu_mfg_1}\n Define $C^{2,\tau}\left(\bar{\Omega}\right)$ as the set of functions $u \in C^2\left(\bar{\Omega}\right)$ whose second partial derivatives are all H\"older continuous with exponent $\tau$ on $\bar{\Omega}$. Assume $h$ is strictly increasing with respect to $m$. Then, for every $\lambda \in (\Lambda_1, \Lambda_2)$ and every $Z \in (0,\infty)$ there exist $\tau \in (0,1)$ and a unique function $u_{\lambda,Z} \in C^{2,\tau}\left(\bar{\Omega}\right) \subset C^2\left(\Omega\right) \cap C^1\left(\bar{\Omega}\right)$ satisfying~\eqref{eq:xu_mfg_bvp1}--\eqref{eq:xu_mfg_bvp3}.
Furthermore, $\\underaccent{\\bar}{u}_{\\lambda,Z} \\leq u_{\\lambda,Z} \\leq \\bar{u}_{\\lambda,Z}$.\n\\end{proposition}\n\n\\begin{proof}\n Existence is an application of Corollary 2.9 in~\\cite{Schmitt1978}, which states that a solution ${u_{\\lambda,Z} \\in C^{2,\\tau}\\left(\\bar{\\Omega}\\right)}$ to~\\eqref{eq:xu_mfg_bvp1}--\\eqref{eq:xu_mfg_bvp3} exists provided the following properties hold:\n \n \\begin{enumerate}\n \\item There exist constants $\\underaccent{\\bar}{u}_{\\lambda,Z} \\leq 0 \\leq \\bar{u}_{\\lambda,Z}$ satisfying~\\eqref{eq:xu_mfg_constbound} for every $x \\in \\bar{\\Omega}$.\n \\item There exists a continuous function $f:[0,\\infty) \\to [0,\\infty)$ such that the following inequality holds for every $(x,u,p) \\in \\bar{\\Omega} \\times \\R \\times \\R^d$\n \\[ \\left| \\frac{|p|^2}{2} - h \\left( x,\\frac{1}{Z} e^{- \\frac{2}{\\sigma^2}u} \\right) + \\lambda \\right| \\leq f(|u|) \\left( 1 + |p|^2 \\right) \\, . \\]\n \\end{enumerate}\n \n Property 1 is true from Lemma~\\ref{lm:xu_mfg_constbound}. Property 2 can be shown to be true by taking\n \\[ f(u) = \\max \\left( \\frac{1}{2}, |\\lambda| + \\max \\left[ g \\left( \\frac{1}{Z} e^{- \\frac{2}{\\sigma^2}u} \\right), g \\left( \\frac{1}{Z} e^{\\frac{2}{\\sigma^2}u} \\right) \\right] \\right) \\, , \\]\n where $g$ is defined in assumption~\\ref{a:hbound}.\n \n We prove uniqueness using the strong maximum principle and Hopf's Lemma as stated in~\\cite{Evans1998}~(Section~6.4.2.~pp.~330--333). Suppose there are two solutions, $u_1,u_2 \\in C^2\\left(\\Omega\\right) \\cap C^1\\left(\\bar{\\Omega}\\right)$ to~\\eqref{eq:xu_mfg_bvp1},~\\eqref{eq:xu_mfg_bvp2}. Define $a = \\nabla (u_1 + u_2)$. Then $a \\in L^{\\infty}\\left(\\bar{\\Omega}\\right)$. Now suppose $u_1 \\neq u_2$ and define $v = u_1 - u_2$. Then $v$ must attain its maximum at some point $\\bar{x} \\in \\bar{\\Omega}$. First suppose $\\bar{x} \\in \\Omega$. 
Then there exists an open bounded and connected region $V$ such that $V \subset \Omega$, $\bar{x} \in V$ and $v > 0$ for all $x \in V$. Hence, since $h(x,\cdot)$ is increasing, we have\n \[ - \frac{\sigma^2}{2} \nabla^2 v + \frac{1}{2} a \cdot \nabla v \leq 0 \, , \quad \text{for every} \, x \in V \, . \]\n So by the strong maximum principle $v$ is constant in $V$. Therefore we must have\n \[ h \left( x,\frac{1}{Z} e^{- \frac{2}{\sigma^2}u_1(x)} \right) = h \left( x,\frac{1}{Z} e^{- \frac{2}{\sigma^2}u_2(x)} \right) \quad \text{for every} \, x \in V \, . \]\n So $u_1 = u_2$ in $V$ because $h$ is strictly increasing, contradicting $v > 0$ in $V$. Therefore the only other option is that $\bar{x} \in \partial \Omega$ and $v(x) < v(\bar{x})$ for every $x \in \Omega$. Hence by Hopf's Lemma (which we can use because $\partial \Omega$ is $C^2$) $\left. \frac{\partial v}{\partial \nu} \right|_{\bar{x}} > 0$, but by the boundary condition~\eqref{eq:xu_mfg_bvp3}, $\frac{\partial v}{\partial \nu} = \frac{\partial u_1}{\partial \nu} - \frac{\partial u_2}{\partial \nu} = 0$. This again leads to a contradiction. Therefore $u_1 = u_2$, so the solution is unique.\n\end{proof}\n\n\begin{remark} \label{rmk:XU_mfg_r2}\n It should be noted that the same method used to prove uniqueness can be used to prove that $u_{\lambda_1,Z} \geq u_{\lambda_2,Z}$ for all $\lambda_1 \leq \lambda_2$.\n\end{remark}\n\n\begin{lemma} \label{lm:xu_mfg_monotone}\n Assume $h$ is strictly increasing with respect to $m$. Then for every $x \in \Omega$, $u_{\lambda,Z}(x)$ is decreasing with respect to $\lambda$ and $Z$.\n\end{lemma}\n\n\begin{proof}\n In light of Remark~\ref{rmk:XU_mfg_r2} we need only prove $u_{\lambda,Z_1} \geq u_{\lambda,Z_2}$ for all $Z_1 \leq Z_2$.
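The key observation is the scaling identity\n \[ \frac{1}{Z_2} e^{- \frac{2}{\sigma^2} \left( w - \frac{\sigma^2}{2} \log \frac{Z_2}{Z_1} \right)} = \frac{1}{Z_1} e^{- \frac{2}{\sigma^2} w} \, , \]\n valid for any function $w$: the nonlinearity in~\eqref{eq:xu_mfg_bvp2} is unchanged if $u$ is shifted by a constant while $Z$ is rescaled accordingly.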
Indeed, by substitution we find that $u = u_{\lambda,Z_1} - \frac{\sigma^2}{2} \log \frac{Z_2}{Z_1}$ satisfies~\eqref{eq:xu_mfg_bvp1}--\eqref{eq:xu_mfg_bvp3} with $Z = Z_2$. So by uniqueness of solutions to this PDE proved in Proposition~\ref{prop:xu_mfg_1}, and since $Z_1 \leq Z_2$ implies $\frac{\sigma^2}{2} \log \frac{Z_2}{Z_1} \geq 0$, we see that\n \[ u_{\lambda,Z_2} = u \leq u_{\lambda,Z_1} \, . \]\n\end{proof}\n\n\begin{proposition} \label{prop:xu_mfg_cont}\n Define $\Phi: (\Lambda_1, \Lambda_2) \times (0,\infty) \to L^{\infty}(\Omega)$ by\n \[ \Phi(\lambda,Z) = u_{\lambda,Z} \, , \]\n where $u_{\lambda,Z}$ is the unique solution to~\eqref{eq:xu_mfg_bvp1}--\eqref{eq:xu_mfg_bvp3} as found in Proposition~\ref{prop:xu_mfg_1}. Assume $h$ is strictly increasing with respect to $m$. Then $\Phi$ is continuous (with respect to the $L^{\infty}$ norm).\n\end{proposition}\n\begin{proof}\n\n We will prove $\Phi$ is sequentially continuous. Let $(\lambda_n,Z_n)$ be a sequence in $(\Lambda_1,\Lambda_2) \times (0,\infty)$ that converges to $(\lambda,Z) \in (\Lambda_1,\Lambda_2) \times (0,\infty)$. We consider two sequences: $(\lambda_{n}^{(i)}, Z_{n}^{(i)})$ for $i = 1,2$, which we use to sandwich our original sequence. We define these sequences as follows\n \begin{enumerate}\n \item $\lambda_n^{(1)} = \inf_{j \geq n} \lambda_j$\n \item $\lambda_n^{(2)} = \sup_{j \geq n} \lambda_j$\n \item $Z_n^{(1)} = \inf_{j \geq n} Z_j$\n \item $Z_n^{(2)} = \sup_{j \geq n} Z_j$\n \end{enumerate}\n In the first part of this proof we show that for each $i = 1,2$, there exists a subsequence $(\lambda_{n_k}^{(i)}, Z_{n_k}^{(i)})$ such that $u_{\lambda_{n_k}^{(i)}, Z_{n_k}^{(i)}} \to u_{\lambda, Z}$. We will only show this for $i = 1$ as the case $i = 2$ is identical. Clearly the sequence $(\lambda_{n}^{(1)}, Z_{n}^{(1)})$ also converges to $(\lambda,Z)$.
So there exists a subsequence $n_k$ such that $u_{\lambda_{n_k}^{(1)},Z_{n_k}^{(1)}} \to u_*$ in $C^2(\Omega) \cap C^1(\bar{\Omega})$ because $u_{\lambda_{n_k}^{(1)},Z_{n_k}^{(1)}} \in C^{2,\tau}(\bar{\Omega})$ (by Proposition~\ref{prop:xu_mfg_1}), which is compactly embedded in $C^2(\bar{\Omega}) \subset C^2(\Omega) \cap C^1(\bar{\Omega})$. Therefore we also get the following pointwise convergence\n \begin{align*}\n 0 & = \lim_{k \to \infty} \left[-\frac{\sigma^2}{2} \nabla^2 u_{\lambda_{n_k}^{(1)},Z_{n_k}^{(1)}} + \frac{1}{2}\left| \nabla u_{\lambda_{n_k}^{(1)},Z_{n_k}^{(1)}}\right|^2 - h \left( x, \frac{1}{Z_{n_k}^{(1)}} e^{- \frac{2}{\sigma^2} u_{\lambda_{n_k}^{(1)},Z_{n_k}^{(1)}}} \right) + \lambda_{n_k}^{(1)} \right]\\\n & = - \frac{\sigma^2}{2} \nabla^2 u_* + \frac{\left| \nabla u_* \right|^2}{2} - h \left( x, \frac{1}{Z} e^{- \frac{2}{\sigma^2} u_*} \right) + \lambda\\\n 0 &= \lim_{k \to \infty} \nabla u_{\lambda_{n_k}^{(1)},Z_{n_k}^{(1)}} \cdot \nu |_{x \in \partial \Omega} = \nabla u_* \cdot \nu |_{x \in \partial \Omega} \, .\n \end{align*}\n So $u_* = u_{\lambda,Z}$, by uniqueness proved in Proposition~\ref{prop:xu_mfg_1}. Now by design we have $\lambda_{n_k}^{(2)} \geq \lambda_n \geq \lambda_{n_k}^{(1)}$ for all $n \geq n_k$ and similarly for $Z_n$, hence $u_{\lambda_{n_k}^{(1)},Z_{n_k}^{(1)}} \geq u_{\lambda_n,Z_n} \geq u_{\lambda_{n_k}^{(2)},Z_{n_k}^{(2)}}$ by Lemma~\ref{lm:xu_mfg_monotone}. So $u_{\lambda_n,Z_n} \to u_{\lambda,Z}$ in $L^{\infty}(\Omega)$.\n\end{proof}\n\n\begin{proposition} \label{prop:xu_mfg_lambda}\n For each $Z \in (0,\infty)$ define $I_1(\cdot;Z): (\Lambda_1,\Lambda_2) \to \R$ by\n \[ I_1(\lambda;Z) = \int_{\Omega} u_{\lambda,Z}~dx = \int_{\Omega} \Phi(\lambda,Z)~dx \, . \]\n Assume $h$ is strictly increasing with respect to $m$.
Then for every $Z \in (0,\infty)$ there exists a unique $\lambda = \lambda(Z)$ such that $I_1(\lambda(Z);Z) = 0$; furthermore,\n \begin{equation} \label{eq:xu_mfg_lambdabound}\n \inf_{x \in \Omega} h \left( x,\frac{1}{Z} \right) \leq \lambda(Z) \leq \sup_{x \in \Omega} h \left( x,\frac{1}{Z} \right) \, .\n \end{equation}\n\end{proposition}\n\n\begin{proof}\n We use the intermediate value theorem to prove this proposition. There are three parts to prove:\n \begin{enumerate}\n \item For every $Z \in (0,\infty)$ there exist $\lambda_2 \leq \lambda_1 \in (\Lambda_1, \Lambda_2)$ such that $I_1(\lambda_1;Z) \leq 0$ and $I_1(\lambda_2;Z) \geq 0$.\n \item $I_1(\lambda;Z)$ is continuous with respect to $\lambda$ in $[\lambda_2,\lambda_1]$.\n \item $I_1(\lambda;Z)$ is strictly decreasing with respect to $\lambda$.\n \end{enumerate}\n \n Part (1) and part (2) allow us to use the intermediate value theorem to show that for every $Z \in (0,\infty)$ there exists some $\lambda$ such that $I_1(\lambda;Z) = 0$. Part (3) shows that this $\lambda$ is unique, so the function $Z \mapsto \lambda(Z)$ is well defined.\n \n Part (1): Take $\lambda_1 = \sup_{x \in \Omega} h \left( x,\frac{1}{Z} \right) > \Lambda_1$. Then recall that $\bar{u}_{\lambda_1, Z} = \max \left( - \frac{\sigma^2}{2} \log(Z M_{\lambda_1}^1), 0 \right)$, where $M_{\lambda_1}^1$ satisfies $h \left( x,M_{\lambda_1}^1 \right) \leq \lambda_1$. But we can take $M_{\lambda_1}^1 = \frac{1}{Z}$ by our choice of $\lambda_1$. So $u_{\lambda_1,Z} \leq \bar{u}_{\lambda_1, Z} = 0$, and therefore $I_1(\lambda_1;Z) \leq 0$. The choice for $\lambda_2$ is $\lambda_2 = \inf_{x \in \Omega} h \left( x,\frac{1}{Z} \right)$ and the proof is similar to the above. \n \n Part (2): Take $\lambda_1,\lambda_2$ as above.
By Propositions~\ref{prop:xu_mfg_1} and~\ref{prop:xu_mfg_cont} and Lemma~\ref{lm:xu_mfg_monotone}, $u_{\lambda,Z}$ is continuous with respect to $\lambda$ in $L^{\infty}(\Omega)$ and $\underaccent{\bar}{u}_{\lambda_1,Z} \leq u_{\lambda,Z} \leq \bar{u}_{\lambda_2,Z}$ for any $\lambda \in [\lambda_2,\lambda_1]$. So by the dominated convergence theorem $I_1$ is continuous in $\lambda$.\n \n Part (3): Take $\lambda_1 < \lambda_2$; from Lemma~\ref{lm:xu_mfg_monotone} we know $u_{\lambda_1,Z} \geq u_{\lambda_2,Z}$. Moreover $u_{\lambda_1,Z} \neq u_{\lambda_2,Z}$: otherwise, subtracting the two copies of~\eqref{eq:xu_mfg_bvp2} would give $\lambda_1 = \lambda_2$. So there exists $a \in \Omega$ such that $u_{\lambda_1,Z}(a) > u_{\lambda_2,Z}(a)$, and by continuity $I_1(\lambda_1;Z) > I_1(\lambda_2;Z)$.\n\end{proof}\n \n\begin{remark}\n This proposition ensures that for any $Z \in (0,\infty)$ we can find $\lambda = \lambda(Z)$ and $u = u_{\lambda(Z),Z}$ satisfying~\eqref{eq:xu_mfg_bvp1}--\eqref{eq:xu_mfg_bvp3},~\eqref{eq:xu_mfg_bvp5}, so we are left to find $Z^*$ such that~\eqref{eq:xu_mfg_bvp4} holds.\n\end{remark}\n\n\begin{lemma} \label{lm:xu_lambda_monotone}\n Assume $h$ is strictly increasing with respect to $m$. Then the function $\lambda(Z)$ is strictly decreasing.\n\end{lemma}\n \n\begin{proof}\n From Lemma~\ref{lm:xu_mfg_monotone}, $u_{\lambda,Z}$ is strictly decreasing with respect to $Z$. Now suppose $Z_1 < Z_2$; then\n \[ 0 = I_1(\lambda(Z_2);Z_2) = I_1(\lambda(Z_1);Z_1) > I_1(\lambda(Z_1);Z_2) \, . \]\n Therefore, since $I_1$ is strictly decreasing in $\lambda$, $\lambda(Z_2) < \lambda(Z_1)$, so $\lambda(Z)$ is strictly decreasing with respect to $Z$.\n\end{proof}\n \n\begin{lemma}\n Assume $h$ is strictly increasing with respect to $m$. Then the function $\lambda(Z)$ is continuous.\n\end{lemma}\n \n\begin{proof}\n We will prove $\lambda(Z)$ is sequentially continuous.
Let $Z_n$ be a sequence in $(0,\infty)$ that converges to $Z \in (0,\infty)$. We consider two sequences: $Z_{n}^{(i)}$ for $i = 1,2$, which we use to sandwich our original sequence. We choose these sequences as follows\n \begin{enumerate}\n \item $Z_n^{(1)} = \inf_{j \geq n} Z_j$\n \item $Z_n^{(2)} = \sup_{j \geq n} Z_j$\n \end{enumerate}\n \n Now, $Z_n^{(1)},Z_n^{(2)} \to Z$ and are increasing and decreasing sequences respectively. Furthermore, there exist $\underaccent{\bar}{Z},\bar{Z} \in (0,\infty)$ such that $Z_n^{(1)},Z_n^{(2)} \in [\underaccent{\bar}{Z},\bar{Z}]$ for every $n \in \mathbb{N}$. So, since $\lambda(Z)$ is decreasing, $\lambda(Z_n^{(1)}),\lambda(Z_n^{(2)}) \in [\lambda(\bar{Z}),\lambda(\underaccent{\bar}{Z})]$ and are decreasing and increasing respectively. Therefore, $\lambda(Z_n^{(1)}) \to \lambda^{(1)}$ and $\lambda(Z_n^{(2)}) \to \lambda^{(2)}$. Using continuity of $I_1$ we get for $i = 1,2$:\n \[ 0 = \lim_{n \to \infty} I_1 \left( \lambda(Z_n^{(i)});Z_n^{(i)} \right) = I_1 \left( \lambda^{(i)};Z \right) \, . \]\n Hence, by the uniqueness in Proposition~\ref{prop:xu_mfg_lambda}, $\lambda^{(1)} = \lambda^{(2)} = \lambda(Z)$. Since the $Z_n^{(i)}$ bound $Z_n$ and $\lambda(\cdot)$ is decreasing, the $\lambda(Z_n^{(i)})$ bound $\lambda(Z_n)$. Hence $\lambda(Z_n) \to \lambda(Z)$.\n\end{proof}\n\n\begin{proposition} \label{prop:xu_mfg_Z}\n Define $I_2:(0,\infty) \to (0,\infty)$ by\n \begin{equation}\n I_2(Z) = \int_{\Omega} \frac{1}{Z} e^{- \frac{2}{\sigma^2} u_{\lambda(Z),Z}}~dx \, .\n \end{equation}\n Assume $h$ is strictly increasing with respect to $m$. Then there exists a unique $Z^* \in (0,\infty)$ such that $I_2(Z^*) = 1$.\n\end{proposition}\n \n\begin{proof}\n As in the proof of Proposition~\ref{prop:xu_mfg_lambda}, we prove this proposition using the intermediate value theorem.
Again there are three parts to prove:\n \begin{enumerate}\n \item There exist $Z_1 \leq Z_2 \in (0, \infty)$ such that $I_2(Z_1) \geq 1$ and $I_2(Z_2) \leq 1$.\n \item $I_2(Z)$ is continuous with respect to $Z$ for all $Z \in [Z_1,Z_2]$.\n \item $I_2(Z)$ is strictly decreasing with respect to $Z$.\n \end{enumerate}\n Steps (1) and (2) prove existence via the intermediate value theorem; step (3) proves uniqueness.\n \n Step (1): From assumption~\ref{a:mfg_mto0}, we can find $Z_2 \geq |\Omega|$ such that \n \begin{equation} \label{eq:xu_mfg_Zbound}\n \sup_{x \in \Omega} h \left( x,\frac{1}{Z_2} \right) \leq \inf_{x \in \Omega} h \left(x, \frac{1}{|\Omega|} \right) \, .\n \end{equation}\n Then, since $\lambda(Z_2) \leq \sup_{x \in \Omega} h \left( x,\frac{1}{Z_2} \right)$ (from~\eqref{eq:xu_mfg_lambdabound}), it follows that\n \[ u_{\lambda(Z_2),Z_2} \geq u_{\sup_{x \in \Omega} h \left( x,\frac{1}{Z_2} \right), Z_2} \geq \underaccent{\bar}{u}_{\sup_{x \in \Omega} h \left( x,\frac{1}{Z_2} \right), Z_2} = \min \left( - \frac{\sigma^2}{2} \log \left( Z_2 M \right),0 \right) \, , \]\n where $M$ satisfies $h(x,M) \geq \sup_{x \in \Omega} h \left( x,\frac{1}{Z_2} \right)$ for all $x$ (from the proof of Lemma~\ref{lm:xu_mfg_constbound}). But from~\eqref{eq:xu_mfg_Zbound}, this is clearly satisfied by $M = \frac{1}{|\Omega|}$, and in this case $\min \left( - \frac{\sigma^2}{2} \log \left( Z_2 M \right),0 \right) = - \frac{\sigma^2}{2} \log \frac{Z_2}{|\Omega|}$. Thus\n \[ I_2(Z_2) \leq \int_{\Omega} \frac{1}{Z_2} e^{- \frac{2}{\sigma^2} \left( - \frac{\sigma^2}{2} \log \frac{Z_2}{|\Omega|} \right)}~dx = \int_{\Omega} \frac{1}{|\Omega|}~dx = 1 \, .
\]\n A similar procedure works to find $Z_1$, in which case $Z_1$ satisfies $Z_1 \leq |\Omega|$ and $\inf_{x \in \Omega} h \left( x, \frac{1}{Z_1} \right) \geq \sup_{x \in \Omega} h \left( x, \frac{1}{|\Omega|} \right)$.\n \n Step (2): Take $Z_1 \leq Z_2$ as in Step (1). Then for every $Z \in [Z_1,Z_2]$ there exist $C_1,C_2 \in \R$ such that (by~\eqref{eq:xu_mfg_lambdabound})\n \[ C_2 = \inf_{x \in \Omega} h \left(x, \frac{1}{Z_2} \right) \leq \lambda(Z_2) \leq \lambda(Z) \leq \lambda(Z_1) \leq \sup_{x \in \Omega} h \left(x, \frac{1}{Z_1} \right) = C_1 \, . \]\n So $\underaccent{\bar}{u}_{C_1,Z_2} \leq u_{\lambda(Z),Z} \leq \bar{u}_{C_2,Z_1}$ for every $Z \in [Z_1,Z_2]$. So we can use the dominated convergence theorem along with continuity of $u_{\lambda,Z}$ with respect to $(\lambda,Z)$ and continuity of $\lambda(Z)$ with respect to $Z$ to show $I_2(Z)$ is continuous.\n \n Step (3): Take $\underaccent{\bar}{Z} < \bar{Z}$; then, since $\lambda(\bar{Z}) < \lambda(\underaccent{\bar}{Z})$ and $u_{\lambda,Z}$ is decreasing in $\lambda$, there exists $a \in \Omega$ such that\n \[ u_{\lambda(\underaccent{\bar}{Z}),\bar{Z}}(a) < u_{\lambda(\bar{Z}),\bar{Z}}(a) \, . \]\n Therefore, at $a \in \Omega$:\n \[ \frac{1}{\underaccent{\bar}{Z}} e^{- \frac{2}{\sigma^2} u_{\lambda(\underaccent{\bar}{Z}),\underaccent{\bar}{Z}}} = \frac{1}{\bar{Z}} e^{- \frac{2}{\sigma^2} u_{\lambda(\underaccent{\bar}{Z}),\bar{Z}}} > \frac{1}{\bar{Z}} e^{- \frac{2}{\sigma^2} u_{\lambda(\bar{Z}),\bar{Z}}} \, . \]\n So $I_2(\underaccent{\bar}{Z}) > I_2(\bar{Z})$, since the integrands are ordered pointwise and, by continuity of $u_{\lambda,Z}$, the inequality is strict in a neighbourhood of $a$. This proves $I_2$ is strictly decreasing.\n\end{proof}\n \n\begin{proof}[End of proof of Theorem~\ref{thm:xu_mfg1}]\n First assume $h$ is strictly increasing in $m$. Then we can choose the unique $Z^* \in (0,\infty)$ such that $I_2(Z^*) = 1$. Then clearly the triple ${\left( u_{\lambda(Z^*),Z^*}, \lambda(Z^*), Z^* \right)}$ is a solution to the system~\eqref{eq:xu_mfg_sys}.
Furthermore, suppose $(u',\lambda',Z')$ is also a solution of~\eqref{eq:xu_mfg_sys}. But this implies that $u'$ satisfies~\eqref{eq:xu_mfg_bvp1}--\eqref{eq:xu_mfg_bvp3}, so $u' = u_{\lambda',Z'}$ from uniqueness proven in Proposition~\ref{prop:xu_mfg_1}. Then $u_{\lambda',Z'}$ also solves~\eqref{eq:xu_mfg_bvp5}, so by uniqueness proven in Proposition~\ref{prop:xu_mfg_lambda} we can show $\lambda' = \lambda(Z')$. Finally, $u' = u_{\lambda(Z'),Z'}$ meets the integral constraint~\eqref{eq:xu_mfg_bvp4}. So from uniqueness proven in Proposition~\ref{prop:xu_mfg_Z} we have $Z' = Z^*$. Therefore ${(u',\lambda',Z') = \left( u_{\lambda(Z^*),Z^*}, \lambda(Z^*), Z^* \right)}$. Hence the unique solution to the MFG problem is given by $\left( m_{Z^*},u_{\lambda(Z^*),Z^*},\lambda(Z^*) \right)$, where $m_{Z^*}$ is defined by $m_{Z^*} = \frac{1}{Z^*} e^{- \frac{2}{\sigma^2}u_{\lambda(Z^*),Z^*}}$.\n \n Now assume $h$ is an increasing function in $m$ and define $h_{\epsilon}(x,m)$ by\n \[ h_{\epsilon}(x,m) = h(x,m) + \epsilon \log \left(|\Omega| m \right) \, . \]\n Then for every $\epsilon \in (0,1]$, $h_{\epsilon}$ is a strictly increasing function of $m$. Furthermore, $h_{\epsilon}$ still satisfies assumptions~\ref{a:mfg_hreg}--\ref{a:mfg_mtoinf}. Therefore there exists a unique solution $(u_{\epsilon},\lambda_{\epsilon},Z_{\epsilon})$ to the system~\eqref{eq:xu_mfg_sys} with $h$ replaced by $h_{\epsilon}$.
From Proposition~\ref{prop:xu_mfg_Z}, $Z_{\epsilon} \in [Z^1_{\epsilon},Z^2_{\epsilon}]$ for some $Z^1_{\epsilon},Z^2_{\epsilon} \in (0,\infty)$ such that\n \[ \begin{aligned}\n 0 < Z^1_{\epsilon} \leq &|\Omega| \leq Z^2_{\epsilon} < \infty \\\n \sup_{x \in \Omega} h_{\epsilon} \left(x,\frac{1}{Z^2_{\epsilon}}\right) &\leq \inf_{x \in \Omega} h_{\epsilon} \left(x,\frac{1}{|\Omega|}\right) \\\n \inf_{x \in \Omega} h_{\epsilon} \left(x,\frac{1}{Z^1_{\epsilon}}\right) &\geq \sup_{x \in \Omega} h_{\epsilon} \left(x,\frac{1}{|\Omega|}\right) \, .\n \end{aligned} \]\n But by the definition of $h_{\epsilon}$ we have $h_{\epsilon} \left(x,\frac{1}{Z^2_{\epsilon}}\right) \leq h \left(x,\frac{1}{Z^2_{\epsilon}}\right)$ and $h_{\epsilon} \left(x,\frac{1}{|\Omega|}\right) = h \left(x,\frac{1}{|\Omega|}\right)$, and a similar inequality holds for $Z^1_{\epsilon}$. So we can find $Z^1 \in (0,|\Omega|]$ and $Z^2 \in [|\Omega|,\infty)$ independent of $\epsilon$ such that $Z_{\epsilon} \in [Z^1,Z^2]$ for every $\epsilon \in (0,1]$. Now, from Lemma~\ref{lm:xu_lambda_monotone} (applied with $h$ replaced by $h_{\epsilon}$)\n \[ \lambda_{\epsilon} = \lambda_{\epsilon}(Z_{\epsilon}) \in \left[\lambda_{\epsilon}(Z^2),\lambda_{\epsilon}(Z^1)\right] \, , \]\n and by~\eqref{eq:xu_mfg_lambdabound} these bounds are uniformly bounded for $\epsilon \in (0,1]$. So take a sequence $\epsilon_n$ such that $\lim_{n \to \infty} \epsilon_n = 0$. Then, since $u_{\epsilon} \in C^{2,\tau}\left(\bar{\Omega}\right)$, which is compactly embedded in $C^2\left(\Omega\right) \cap C^1\left(\bar{\Omega}\right)$, there exists a subsequence also denoted by $n$ such that $u_{\epsilon_n} \to u_0$ with convergence in $C^2\left(\Omega\right) \cap C^1\left(\bar{\Omega}\right)$, $Z_{\epsilon_n} \to Z_0 \in [Z^1,Z^2]$, and $\lambda_{\epsilon_n} \to \lambda_0 \in \R$.
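Along this subsequence the penalisation term vanishes: since $Z_{\epsilon_n} \in [Z^1,Z^2]$ and the $u_{\epsilon_n}$ are uniformly bounded, the densities $m_{\epsilon_n} = \frac{1}{Z_{\epsilon_n}} e^{- \frac{2}{\sigma^2} u_{\epsilon_n}}$ take values in a compact subset of $(0,\infty)$, so $\epsilon_n \log \left( |\Omega| m_{\epsilon_n} \right) \to 0$ uniformly and hence $h_{\epsilon_n} \left( x, m_{\epsilon_n} \right) \to h \left( x, \frac{1}{Z_0} e^{- \frac{2}{\sigma^2} u_0} \right)$.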
So we find, by taking limits\n \[ - \frac{\sigma^2}{2} \nabla^2 u_0 + \frac{|\nabla u_0|^2}{2} - h\left(x,\frac{1}{Z_0}e^{- \frac{2}{\sigma^2}u_0}\right) + \lambda_0 = 0 \, .\]\n Similarly we can show that $(u_0,\lambda_0,Z_0)$ satisfies~\eqref{eq:xu_mfg_sys}. So we have proven existence of solutions for increasing $h$. \n \n For uniqueness, assume $(u_1,\lambda_1,Z_1)$ and $(u_2,\lambda_2,Z_2)$ are both solutions of~\eqref{eq:xu_mfg_sys} and $\lambda_1 \leq \lambda_2$. Define $u = u_2 - \frac{\sigma^2}{2} \log\left(\frac{Z_1}{Z_2}\right)$; then, as in the proof of Lemma~\ref{lm:xu_mfg_monotone}, $u$ satisfies\n \[ - \frac{\sigma^2}{2} \nabla^2 u + \frac{|\nabla u|^2}{2} - h\left(x,\frac{1}{Z_1}e^{- \frac{2}{\sigma^2}u}\right) + \lambda_2 = 0 \, . \] \n Now define $v = u - u_1$ and suppose there exists $x \in \bar{\Omega}$ such that $v(x) > 0$. Then $v$ has a maximum at $x^*$ and $v(x^*) > 0$. By Hopf's lemma $x^* \in \Omega$ and by the maximum principle (using $\lambda_1 \leq \lambda_2$) $v$ is constant in the set $\Omega_+ = \{x \in \Omega : v(x) \geq 0\}$ (see the proof of Proposition~\ref{prop:xu_mfg_1} for the details of such an argument). By assumption $x^* \in \Omega_+$ and $v(x^*) > 0$, so $\Omega_+ = \Omega$ by continuity of $v$. Hence $v$ is constant in $\Omega$ and $v > 0$. However, from the integral constraint~\eqref{eq:xu_mfg_bvp4} we obtain\n \begin{equation} \label{eq:xu_mfg_proof1}\n 0 = \int_{\Omega} \frac{1}{Z_1} e^{- \frac{2}{\sigma^2} u_1} \left(1 - e^{- \frac{2}{\sigma^2} v}\right)~dx = 1 - e^{- \frac{2}{\sigma^2} v} \, ,\n \end{equation}\n since $1 - e^{- \frac{2}{\sigma^2} v}$ is constant and $\int_{\Omega} \frac{1}{Z_1} e^{- \frac{2}{\sigma^2} u_1}~dx = 1$ by~\eqref{eq:xu_mfg_bvp4}. So $v = 0$, contradicting the assumption $v(x^*) > 0$. Therefore $v(x) \leq 0$ for all $x \in \Omega$.
This implies that $1 - e^{- \frac{2}{\sigma^2} v} \leq 0$, and subsequently that\n \[0 = \int_{\Omega} \frac{1}{Z_1} e^{- \frac{2}{\sigma^2} u_1} \left(1 - e^{- \frac{2}{\sigma^2} v}\right)~dx \leq 0 \, ,\]\n with equality if and only if $v = 0$. Therefore $u_1 = u$, which implies (using the integral constraint~\eqref{eq:xu_mfg_bvp5}) that $Z_1 = Z_2$, and subsequently that $u_1 = u_2$. Finally, by subtracting the PDE~\eqref{eq:xu_mfg_bvp2} satisfied by $u_1$ from the one satisfied by $u_2$ we find $\lambda_2 = \lambda_1$. Therefore solutions are unique.\n\end{proof}\n\n\section{Quadratic Potential} \label{sec:quad potential}\n\n\noindent In this section we consider a specific example with quadratic potential and a logarithmic congestion term, $h(x,m) = \beta x^2 + \log m$ for some constant $\beta \geq 0$ on the real line. This problem has been studied extensively in~\cite{Gomes2016a} and~\cite{Gueant2009} and admits explicit solutions. This allows us to compare the solutions of the BRS and the MFG. Note that we do not impose any boundary conditions or integral constraints on $u$, since we consider the model on $\R$ rather than on a bounded domain. Therefore it does not fit directly into the framework for existence and uniqueness proven in the previous section. It is, however, one of the few illustrative examples where explicit solutions are known.
This allows us to make an analytical comparison of the two models and use the solution to validate the proposed numerical methods.\n\n\subsection{The MFG}\n\nThe stationary MFG model studied in~\cite{Gomes2016a} and~\cite{Gueant2009}, with the integral constraints used in this paper, is given by:\n\begin{subequations}\label{eq:quad_stat_mfg1}\n\begin{align} \n \frac{\sigma^2}{2} \partial_{xx}^2 m + \partial_x \left( m \partial_x u \right) &= 0 \, , \quad x \in \R \, , \\ \label{eq:quad_stat_mfg2}\n - \frac{|\partial_x u|^2}{2} + \log m + \beta x^2 + \frac{\sigma^2}{2} \partial_{xx}^2 u + \lambda &= 0 \, , \quad x \in \R \, , \\\n \int_{\R} m~dx &= 1 \, .\n\end{align}\n\end{subequations}\nwhere $\lambda \in (-\infty, \infty)$ is a constant to be found as part of the solution, and $\sigma, \beta \geq 0$ are given parameters.\n\begin{proposition}\n A solution to the stationary MFG system~\eqref{eq:quad_stat_mfg1} exists and has an explicit form \n \begin{subequations} \label{eq:quad_stat_mfg_sol}\n \begin{align}\n & m(x) = \left( \frac{a}{\pi} \right)^{1\/2} e^{- a x^2} \\\n & u(x) = b x^2 \\\n & \lambda = \frac{1}{2} \log \left( \frac{\pi}{a} \right) - \sigma^2 b\, ,\n \end{align}\n \end{subequations}\n where the constants $a,b \geq 0$ are given by\n \[ a = \beta,~ b = 0 \, , \]\n if $\sigma = 0$, or\n \[ a = \frac{-1 + \left( 1 + 2 \sigma^4 \beta \right)^{1\/2}}{\sigma^4},~b = \frac{-1 + \left( 1 + 2 \sigma^4 \beta \right)^{1\/2}}{2 \sigma^2} \, , \]\n if $\sigma > 0$.\n\end{proposition}\nThe proof is straightforward by substitution.\n\n\subsection{The BRS}\nNext we consider the corresponding stationary BRS model.
It is given by\n\\begin{subequations} \\label{eq:quad_stat_brs}\n \\begin{align}\n \\partial_x \\left( m \\partial_x (\\log m + \\beta x^2) \\right) + \\frac{\\sigma^2}{2} \\partial_{xx}^2 m &= 0 \\, , \\quad x \\in \\R \\\\\n \\int_{\\R} m~dx &= 1 \\, .\n \\end{align}\n\\end{subequations}\n\\begin{proposition}\n The solution to the stationary BRS equation~\\eqref{eq:quad_stat_brs} is given by\n \\[ m(x) = \\left( \\frac{2 \\beta}{(2 + \\sigma^2) \\pi} \\right)^{1\/2} e^{- \\frac{2 \\beta}{(2 + \\sigma^2)} x^2} \\, . \\]\n\\end{proposition}\nAgain the claim follows from substitution. \n\n\\begin{figure}[h!]\n \\begin{subfigure}{0.49 \\textwidth}\n\t \\centering\n\t \\includegraphics[width=\\linewidth]{brs-mfg-00.jpg}\n\t \\caption{$\\beta = 0.1$, $\\frac{\\sigma^2}{2} = 10$}\n \\end{subfigure}\n \\begin{subfigure}{0.5 \\textwidth}\n\t \\centering\n\t \\includegraphics[width=\\linewidth]{brs-mfg-01.jpg}\n\t \\caption{$\\beta = 0.1$, $\\frac{\\sigma^2}{2} = 0.2$}\n \\end{subfigure}\n \n \\bigskip\n \n \\begin{subfigure}{0.49 \\textwidth}\n\t \\centering\n\t \\includegraphics[width=\\linewidth]{brs-mfg-02.jpg}\n\t \\caption{$\\beta = 1$, $\\frac{\\sigma^2}{2} = 10$}\n \\end{subfigure}\n \\begin{subfigure}{0.49 \\textwidth}\n\t \\centering\n\t \\includegraphics[width=\\linewidth]{brs-mfg-03.jpg}\n\t \\caption{$\\beta = 1$, $\\frac{\\sigma^2}{2} = 0.2$}\n \\end{subfigure}\n \n \\bigskip\n \n \\begin{subfigure}{0.49 \\textwidth}\n\t \\centering\n\t \\includegraphics[width=\\linewidth]{brs-mfg-04.jpg}\n\t \\caption{$\\beta = 10$, $\\frac{\\sigma^2}{2} = 10$}\n \\end{subfigure}\n \\begin{subfigure}{0.49 \\textwidth}\n\t \\centering\n\t \\includegraphics[width=\\linewidth]{brs-mfg-05.jpg}\n\t \\caption{$\\beta = 10$, $\\frac{\\sigma^2}{2} = 0.2$}\n \\end{subfigure}\n \\caption{Simulations of BRS and MFG with quadratic potential and logarithmic congestion}\n \\label{fig:quad-log}\n\\end{figure}\n\n\\subsection{Comparison}\n\\begin{proposition}\n For $\\sigma > 0$, the 
stationary distributions of the MFG system~\\eqref{eq:quad_stat_mfg1} and the BRS~\\eqref{eq:quad_stat_brs} are given by normal distributions with mean $0$ and variances $a_1$ and $a_2$ respectively, where\n \\[ \\begin{aligned}\n a_1 & = \\frac{\\sigma^4}{-2 + 2(1 + 2 \\sigma^4 \\beta)^{1\/2}} \\\\\n a_2 & = \\frac{2 + \\sigma^2}{4 \\beta} \\, .\n \\end{aligned} \\]\n Then, for fixed $\\beta > 0$,\n \\begin{subequations}\n \\begin{align}\n \\lim_{\\sigma^2 \\to 0} \\frac{a_2}{a_1} &= 1 \\label{eq:quad_lim1} \\\\\n \\lim_{\\sigma^2 \\to \\infty} \\frac{a_2}{a_1} &= \\frac{1}{(2 \\beta)^{1\/2}} \\, , \\label{eq:quad_lim2}\n \\end{align}\n \\end{subequations}\n while, for fixed $\\sigma > 0$,\n \\begin{subequations}\n \\begin{align}\n \\lim_{\\beta \\to 0} \\frac{a_2}{a_1} &= 1 + \\frac{\\sigma^2}{2} \\label{eq:quad_lim3} \\\\\n \\lim_{\\beta \\to \\infty} (2 \\beta)^{1\/2} \\frac{a_2}{a_1} &= \\frac{2 + \\sigma^2}{\\sigma^2} \\, . \\label{eq:quad_lim4}\n \\end{align}\n \\end{subequations}\n\\end{proposition}\n\n\\begin{proof}\n The first part of the proof follows directly from the previous propositions. Now \n \\[ \\frac{a_2}{a_1} = \\frac{(2 + \\sigma^2) \\left( (1 + 2 \\sigma^4 \\beta)^{1\/2} - 1 \\right)}{2 \\sigma^4 \\beta} \\, . \\]\n Using a Taylor expansion of $(1 + x)^{1\/2}$ around $x = 0$ gives the behaviour for small $\\sigma^2$, i.e.\n \\[ \\frac{a_2}{a_1} = \\frac{(2 + \\sigma^2) (\\sigma^4 \\beta + o(\\sigma^4))}{2 \\sigma^4 \\beta} \\, . \\]\n Hence $\\lim_{\\sigma^2 \\to 0} \\frac{a_2}{a_1} = \\lim_{\\sigma^2 \\to 0} \\frac{2 + \\sigma^2}{2} = 1$. 
The other limit can be calculated directly:\n \\[ \\begin{aligned}\n \\lim_{\\sigma^2 \\to \\infty} \\frac{a_2}{a_1} & = \\lim_{\\sigma^2 \\to \\infty} \\frac{(2 + \\sigma^2) (1 + 2 \\sigma^4 \\beta)^{1\/2}}{2 \\sigma^4 \\beta} = \\lim_{\\sigma^2 \\to \\infty} \\frac{(2 + \\sigma^2) (2 \\beta)^{1\/2} \\sigma^2}{2 \\sigma^4 \\beta} = \\frac{1}{(2 \\beta)^{1\/2}} \\, .\n \\end{aligned} \\]\n The limits as $\\beta \\to 0,\\infty$ for fixed $\\sigma$ follow from straightforward calculations.\n\\end{proof}\n\nThis result is a first glimpse of how the behaviour of the BRS and MFG may differ, as well as of the importance certain parameters have in this difference. The limit in~\\eqref{eq:quad_lim1} shows that the presence of noise is essential for any difference between the two models to appear. However, as soon as there is noise, its effect on the relative difference plays a less important role than the strength of the quadratic potential; this can be seen in~\\eqref{eq:quad_lim2} and~\\eqref{eq:quad_lim4}. Specifically, the limit~\\eqref{eq:quad_lim4} shows that the relative difference between the variances of the two distributions grows like $\\beta^{\\frac{1}{2}}$, which means the BRS distribution reacts much more rapidly to changes in the potential strength than the MFG. This suggests that the MFG is more strongly affected by congestion, i.e.\\ it is a more congestion-averse model than the BRS. \n\nAt a conceptual level this agrees with the formulation of the MFG and BRS systems. The agents in the BRS act myopically, only reacting to the situation as it currently exists, which is not the case in the MFG. As a result, the BRS agents do not `see' the future congestion that will result from their behaviour, and hence they move towards the minimum of $\\beta x^2$ more rapidly than the MFG agents, who do anticipate the future cost of the congestion that results from their behaviour. 
Therefore, if we think of the stationary solutions as the long-time, time-averaged behaviour of the models, the stationary BRS results in a distribution that takes the congestion into account less than the MFG, and hence one with a smaller variance. This expectation is confirmed by the result~\\eqref{eq:quad_lim4}, which in fact quantifies the extent to which the BRS ignores the congestion compared with the MFG.\n\nFor this model we have run a variety of simulations --- both to confirm our numerical methods (see Section~\\ref{sec:numerical_sim} for details) and to visualise how the parameters affect the distributions. Figure~\\ref{fig:quad-log} shows the results of these simulations on a bounded domain for a variety of parameter choices. Although the formulation on a bounded domain is slightly different from the one in this section, the same behaviour can be seen. For small $\\sigma$ the difference between the models does not change much as $\\beta$ increases, while for large values of $\\sigma$ the BRS model is much more dramatically affected by changes to $\\beta$. In both cases the BRS and MFG are more closely aligned when $\\beta$ is small. \n\n\\section{Simulations}\\label{sec:numerical_sim}\n\\noindent We conclude by presenting various computational experiments that illustrate the difference between solutions to the BRS and MFG for different choices of running costs and potentials. \n\n\\subsection{Solving the stationary BRS and MFG}\nSolutions to the stationary BRS~\\eqref{eq:brssystem} can be computed by finding, for a given $Z$, the zeros of the function $G_{Z,x}(m)$ at every point $x$ of a discrete grid. To compute the roots of $G_{Z,x}$ at every grid point $x$ we use a Newton-Raphson method. 
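The structure of this nested iteration can be sketched as follows (a minimal illustration, not the code used for our figures: for robustness the inner Newton--Raphson step is replaced by a bisection, which is valid because $G_{Z,x}$ is increasing in $m$ whenever the congestion term is increasing, and the outer Newton step is replaced by a direct rescaling of $Z$):

```python
import math

def solve_brs(F, beta, sigma2, xs, dx, n_outer=40):
    """Solve m(x) = (1/Z) exp(-(2/sigma^2)(beta x^2 + F(m(x)))) on the grid xs,
    with an outer loop adjusting Z until m integrates to one."""
    Z = 1.0
    m = [0.0] * len(xs)
    for _ in range(n_outer):
        for i, x in enumerate(xs):
            # G_{Z,x}(m) = m - (1/Z) exp(...) is increasing in m when F is
            # increasing, so the root is unique and bisection is safe.
            lo, hi = 1e-12, 1e6
            for _ in range(80):
                mid = 0.5 * (lo + hi)
                if mid - math.exp(-(2.0 / sigma2) * (beta * x * x + F(mid))) / Z > 0.0:
                    hi = mid
                else:
                    lo = mid
            m[i] = 0.5 * (lo + hi)
        Z *= sum(m) * dx  # rescale Z so the next sweep integrates closer to one
    return m
```

For the logarithmic congestion of the previous section this sketch reproduces the explicit solution: with $\\beta = 1$ and $\\sigma^2 = 1$ the recovered density is Gaussian with variance $(2+\\sigma^2)\/(4\\beta) = 0.75$.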
Then, having found $m_Z(x)$ for a particular value of $Z$, we can differentiate the implicit formula $m_Z = \\frac{1}{Z} e^{- \\frac{2}{\\sigma^2} h(x,m_Z)}$ with respect to $Z$ and use a Newton-Raphson method to find $Z$ such that $\\Phi(Z) = \\int_{\\Omega}m_Z~dx = 1$. In practice this means iterating between the two Newton-Raphson methods: first finding $m_{Z_n}$, then computing $Z_{n+1}$, then recomputing $m_{Z_{n+1}}$, and repeating until convergence.\\\\\nSolutions to the stationary MFG~\\eqref{eq:xu_mfg} are found using an iterative procedure. Given an admissible initial iterate $m^l$, $l=0$, we solve the HJB equation~\\eqref{eq:xu_mfg_pde2} to obtain $u^{l+1}$ and $\\lambda^{l+1}$. Note that we include the constraint~\\eqref{eq:xu_mfg_ic2} via a Lagrange multiplier. In the final step of the iteration we update the distribution of agents by solving the FPE~\\eqref{eq:xu_mfg_pde1} using $u^{l+1}$ to obtain $m^{l+1}$. This procedure is repeated until convergence. Note that we sometimes perform a damped update\n\\begin{align*}\nu^l = \\omega u^{l-1} + (1-\\omega) v^l \\text{ and } m^l = \\omega m^{l-1} + (1-\\omega) q^l \\, ,\n\\end{align*}\nwhere $v^l,q^l$ are the undamped solutions of each iteration step. This damping helps to ensure convergence. Solutions to the HJB and the FPE are obtained using an $H^1$ conforming finite element discretisation.\n\n\\begin{figure}[b]\n \\centering\n \\includegraphics[width=0.7\\textwidth]{brs-mfg-06.jpg}\n \\caption{Simulations of BRS and MFG with $h(x,m) = m^{10} + 10 x^2$}\n \\label{fig:quad-power}\n\\end{figure}\n\nWe noticed that in many of the simulations performed the ``cost'' associated with the MFG was higher than the ``cost'' associated with the BRS. At first this sounds counter-intuitive, as the BRS (in the dynamic case) is a sub-optimal approximation of the MFG. 
However, as explained in Section~\\ref{sec:stat_prob}, the ``cost'' function $u$ is actually the long-time average difference between the cost and the space-average cost, whereas the stationary BRS cost is the equilibrium of the competitive minimisation of \\eqref{eq:Estat}. Therefore the MFG cost will always be centred around 0, while the BRS cost could be above or below it. As a result, it was not clear that comparing the ``costs'' of the two models is especially useful, and hence we focus solely on comparing the distributions of agents.\n\n\\begin{figure}[t]\n \\begin{subfigure}{0.49 \\textwidth}\n\t \\centering\n\t \\includegraphics[width=\\linewidth]{brs-mfg-07.jpg}\n\t \\caption{$m_{\\max} = 10$}\n\t \\label{fig:quad-max-10}\n \\end{subfigure}\n \\begin{subfigure}{0.49 \\textwidth}\n\t \\centering\n\t \\includegraphics[width=\\linewidth]{brs-mfg-08.jpg}\n\t \\caption{$m_{\\max} = 1$}\n\t \\label{fig:quad-max-1}\n \\end{subfigure}\n \\caption{Simulations of BRS and MFG with $h(x,m) = \\frac{1}{m_{\\max} - m} + x^2$}\n \\label{fig:quad-max}\n\\end{figure}\n\n\\subsection{Single well potential}\n\nIn the first examples we investigate the behaviour of solutions to both models for cost functionals of the form ${h(x,m) = F(m) + \\beta x^2}$, using different functions $F$ and parameters $\\beta$. We also analyse how the noise level $\\sigma$ affects the two stationary states. Note that the case $F(m) = \\log(m)$ was already discussed in Section~\\ref{sec:quad potential}. We are particularly interested in how penalising congestion through functions $F(m)$ of the form $F(m) = m^{\\alpha}$ for some $\\alpha > 0$ or $F(m) = \\frac{1}{m_{\\max} - m}$ for some $m_{\\max} > \\frac{1}{|\\Omega|}$ affects solutions. The last choice introduces a `barrier' that the density cannot exceed. 
Using such a congestion term is more realistic from a modelling perspective than either the logarithmic or power--law term, as it forces densities to stay below a certain physically reasonable limit. We observe a similar dependence on the parameters $\\beta,\\sigma$ as for the logarithmic congestion term --- and in fact the same can be said for all of our simulations. For all values of $\\sigma$ the MFG model responds less to changes in the strength of the potential $\\beta$ than the BRS model; the difference is most pronounced as $\\sigma$ increases, and again it may be expected that as $\\sigma \\to 0$ the two models align very closely.\n\nThe most notable difference between the use of a logarithmic congestion term and a power--law congestion term is a difference in the shape of the distribution, particularly the flatness of the peak of the distribution, as shown in figure~\\ref{fig:quad-power}. Importantly, this characteristic is shared by both the MFG and the BRS, suggesting the congestion terms $F(m)$ affect both models in similar ways. When looking at congestion terms of the form $F(m) = \\frac{1}{m_{\\max} - m}$, we can find regimes where the behaviour is similar to the logarithm, or more like a vastly exaggerated version of the power--law congestion. When $m_{\\max}$ is large, as in figure~\\ref{fig:quad-max-10}, the resulting distribution for both models looks like a normal distribution, similar to the case with logarithmic congestion. 
In fact, when $m_{\\max}$ is very large, a formal asymptotic analysis using a Taylor expansion around $m_{\\max}$ can be made, which shows \n\\[\\frac{1}{m_{\\max} - m} \\approx \\frac{1}{m_{\\max}} \\, .\\]\nTherefore the BRS satisfies the equation\n\\[ m \\approx \\frac{1}{Z} e^{- \\frac{2}{\\sigma^2} \\frac{ \\beta m_{\\max} x^2 + 1}{m_{\\max}}} \\approx \\frac{1}{Z} e^{- \\frac{2 \\beta}{\\sigma^2} x^2} \\, , \\]\nwith a normalisation constant $Z = \\int_{\\Omega} e^{- \\frac{2\\beta}{\\sigma^2} x^2}~dx$, which corresponds, on the whole space $\\R$, to a normal distribution with zero mean and variance $\\frac{\\sigma^2}{4 \\beta}$. The variance of the BRS solution found here differs from the logarithmic congestion case by $\\frac{1}{2\\beta}$. A similar analysis shows that when $m_{\\max} \\to \\infty$ the MFG approximately resembles a normal distribution with zero mean and variance \n$\\frac{\\sigma^2}{2 \\beta}$. In summary, solutions to the MFG and the BRS are both normal distributions with zero mean and with variances whose relative difference is $\\frac{1}{2}$.\n\n\nHowever, when $m_{\\max}$ is reduced, as in figure~\\ref{fig:quad-max-1}, the peak flattens out in a similar but exaggerated way compared to the power--law congestion. 
It is interesting to note that the BRS seems to respond more to the change in $m_{\\max}$ than the MFG; this is in contrast to the role that $\\sigma$ plays in the two models, where the MFG responds more to changes in $\\sigma$ than the BRS.\n\n\\begin{figure}[t]\n \\begin{subfigure}{0.49 \\textwidth}\n \\centering\n \\includegraphics[width=\\linewidth]{mfg-brs-pot-1.jpg}\n \\caption{Double well potential for Figure~\\ref{fig:double well 1}}\n \\label{fig:double pot 1}\n \\end{subfigure}\n \\begin{subfigure}{0.49 \\textwidth}\n \\centering\n \\includegraphics[width=\\linewidth]{mfg-brs-pot-2.jpg}\n \\caption{Double well potential for Figure~\\ref{fig:double well 2}}\n \\label{fig:double pot 2}\n \\end{subfigure}\n \n \\bigskip\n \n \\begin{subfigure}{0.49 \\textwidth}\n \\centering\n \\includegraphics[width=\\linewidth]{mfg-brs-pot-3.jpg}\n \\caption{Double well potential for Figure~\\ref{fig:double well 3}}\n \\label{fig:double pot 3}\n \\end{subfigure}\n \\begin{subfigure}{0.49 \\textwidth}\n \\centering\n \\includegraphics[width=\\linewidth]{mfg-brs-pot-4.jpg}\n \\caption{Double well potential for Figure~\\ref{fig:double well 4}}\n \\label{fig:double pot 4}\n \\end{subfigure}\n \n \\bigskip\n \\centering\n \\begin{subfigure}{0.49 \\textwidth}\n \\centering\n \\includegraphics[width=\\linewidth]{mfg-brs-pot-5.jpg}\n \\caption{Double well potential for Figure~\\ref{fig:double well 5}}\n \\label{fig:double pot 5}\n \\end{subfigure}\n \\caption{Double well potentials for simulations}\n \\label{fig:double pots}\n\\end{figure}\n\n\\subsection{Double well potential}\n\nThe previous subsection has given insight into how the form of the congestion term affects both models. In this section we explore how the potential term affects each model. Here we consider costs of the form $h(x,m) = F_1(x) + \\log(m)$, where $F_1(x)$ is a double well potential (see figure~\\ref{fig:double pots}). 
Since the key aim of this section is to understand how varying $F_1$ affects the similarity of solutions to the BRS and MFG models, we do not include results for congestion terms other than the logarithm. From simulations it can be seen that the effect of changing the congestion term from $\\log(m)$ to another term is very similar whether we are considering a single well or a double well.\n\nOur simulations focus on five different double wells, which can be seen in figure~\\ref{fig:double pots}. We vary the potentials as follows:\n\\begin{enumerate}[topsep=0pt,itemsep=0pt,partopsep=2pt,parsep=1pt]\n \\item same depth, same width,\n \\item different depth, same width,\n \\item same depth, different width,\n \\item approximately similar perimeter,\n \\item approximately similar volume.\n\\end{enumerate}\n\nThe simulations with the first two potentials, see figures~\\ref{fig:double well 1} and~\\ref{fig:double well 2}, where the widths of the two wells are always the same, show that the two models display similar qualitative behaviour. As with the single well potential, as $\\sigma$ increases the discrepancy between the two models also increases, with the MFG model being more affected by the level of noise than the BRS. As expected, when the two wells are of equal depth (as in figure~\\ref{fig:double well 1}) both the MFG and BRS assign equal weight to the two wells, while when one well is deeper than the other (as in figure~\\ref{fig:double well 2}) both models give more mass to the location of the deeper well. This is true for all values of $\\sigma$, although the effect diminishes as $\\sigma$ increases, as can be seen by comparing figures~\\ref{fig:double well 1a} and~\\ref{fig:double well 2a} to figures~\\ref{fig:double well 1b} and~\\ref{fig:double well 2b} respectively.\n\nUp to this point we have seen that the qualitative behaviour of the two models tends to agree. 
However, when we look at double well potentials where the widths of the wells differ, we start to observe differences. In figure~\\ref{fig:double well 3}, where the wells have the same depth but different widths, we see that the BRS still distributes density equally to the two wells. However, the MFG model results in a higher density focussed in the wider well than in the narrower well. The reason the width has no effect on the BRS can be seen by studying the implicit equation. In each well we are solving $m = \\frac{1}{Z} e^{- \\frac{2}{\\sigma^2} h(x,m)}$. In our case $h(x,m) = G(x) + F(m)$. Since the potential $G(x)$ has the same depth in each well, the relative height of the distribution $m$ will be the same in each well. The reason the MFG is affected by the width of the well is that in finding the MFG solution we are in fact solving an elliptic equation to find the function $u$; hence at each point $x$ this $u$ will be affected by factors that cannot be described by just looking at the value of $h$ at that point. In other words, the BRS depends only on local properties of the cost $h$ whereas the MFG depends also on non-local properties. 
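For logarithmic congestion this locality can be made fully explicit: solving the pointwise relation gives $m \\propto e^{-2G(x)\/(\\sigma^2+2)}$, so the height of $m$ at a point depends only on the local value of $G$. A small sketch (the double well here is a hypothetical example with equal depths but widths $1$ and $3$):

```python
import math

def brs_log_congestion(G, sigma2, xs, dx):
    """BRS with h(x, m) = G(x) + log m: the implicit equation gives
    m proportional to exp(-2 G(x) / (sigma2 + 2)); normalise numerically."""
    w = [math.exp(-2.0 * G(x) / (sigma2 + 2.0)) for x in xs]
    Z = sum(w) * dx
    return [wi / Z for wi in w]

def double_well(x):
    # Hypothetical potential: two wells of equal depth -1, widths 1 and 3.
    if -2.0 <= x <= -1.0 or 1.0 <= x <= 4.0:
        return -1.0
    return 0.0
```

Evaluating this on a grid shows equal density heights in the two wells, while the wider well simply holds proportionally more mass --- exactly the BRS behaviour described above.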
To understand why the MFG assigns greater density to the wider well we need to look at the underlying optimisation problems related to the MFG and the BRS.\n\n\\begin{figure}[t]\n\t\\begin{subfigure}{0.49 \\textwidth}\n \\centering\n\t\t\\includegraphics[width=\\textwidth]{brs-mfg-09.jpg}\n \t\\subcaption{$\\frac{\\sigma^2}{2} = 0.2$}\n \t\\label{fig:double well 1a}\n\t\\end{subfigure}\n\t\\begin{subfigure}{0.49 \\textwidth}\n \\centering\n\t\t\\includegraphics[width=\\textwidth]{brs-mfg-10.jpg}\n \t\\subcaption{$\\frac{\\sigma^2}{2} = 1$}\n \t\\label{fig:double well 1b}\n\t\\end{subfigure}\n\t\\caption{Simulation of BRS and MFG with logarithmic congestion and potential given in figure~\\ref{fig:double pot 1}}\n\t\\label{fig:double well 1}\n\\end{figure}\n\\begin{figure}[t]\n\t\\begin{subfigure}{0.49 \\textwidth}\n \\centering\n\t\t\\includegraphics[width=\\textwidth]{brs-mfg-11.jpg}\n \t\\subcaption{$\\frac{\\sigma^2}{2} = 0.2$}\n \t\\label{fig:double well 2a}\n\t\\end{subfigure}\n\t\\begin{subfigure}{0.49 \\textwidth}\n \\centering\n\t\t\\includegraphics[width=\\textwidth]{brs-mfg-12.jpg}\n \t\\subcaption{$\\frac{\\sigma^2}{2} = 1$}\n \t\\label{fig:double well 2b}\n\t\\end{subfigure}\n\t\\caption{Simulation of BRS and MFG with logarithmic congestion and potential given in figure~\\ref{fig:double pot 2}}\n\t\\label{fig:double well 2}\n\\end{figure}\n\nWhen considering the optimisation problems related to the dynamic MFG and BRS models and the long-term behaviour of these models, which results in the stationary models, we can see that the BRS model has no anticipation about the future system, while the MFG model does. Therefore agents in the MFG model are willing to incur higher congestion costs in the wider well as they can see that the cost to move out of the well to an area of lower congestion will be higher than the cost incurred for staying in the well (the cost functional being optimised has a quadratic running cost on the control). 
Since the cost of moving out of the well increases with the width of the well (the wider the well, the longer an agent has to apply their control, or the larger that control has to be), fewer agents are willing to move out of the wider well than the narrower one in the long run. Hence in the stationary case the wider well has a higher density associated with it than the narrower well. In contrast, we see that in going from the MFG to the BRS we renormalise the cost of the control by $\\Delta t$ and take $\\Delta t \\to 0$, so the BRS does not account for the cost of moving along the width of the well in order to find an area of lower density. Therefore the BRS agents will not consider the width of the well when deciding whether to remain in it or leave. This further explains why the width of the well has no effect on the relative size of the density in each well for the BRS. \n\nWe have seen that increasing well width affects only the MFG while increasing well depth affects both the MFG and BRS. Now we can balance these effects to create situations in which the two models give completely different results. Figure~\\ref{fig:double well 4} involves a double well where the width and depth of the wells differ but the area of each well is the same, while in figure~\\ref{fig:double well 5} the perimeter of the wells is kept the same. In the case of a small noise term, both the MFG and BRS favour the deeper well. 
However, with a larger noise term, figure~\\ref{fig:double well 4b} shows that there are cases where the wider, shallower well is favoured by the MFG while the narrower, deeper well is favoured by the BRS.\n\n\\begin{figure}[t]\n\t\\begin{subfigure}{0.49 \\textwidth}\n \\centering\n\t\t\\includegraphics[width=\\textwidth]{brs-mfg-13.jpg}\n \t\\subcaption{$\\frac{\\sigma^2}{2} = 0.2$}\n \t\\label{fig:double well 3a}\n\t\\end{subfigure}\n\t\\begin{subfigure}{0.49 \\textwidth}\n \\centering\n\t\t\\includegraphics[width=\\textwidth]{brs-mfg-14.jpg}\n \t\\subcaption{$\\frac{\\sigma^2}{2} = 1$}\n \t\\label{fig:double well 3b}\n\t\\end{subfigure}\n\t\\caption{Simulation of BRS and MFG with logarithmic congestion and potential given in figure~\\ref{fig:double pot 3}}\n\t\\label{fig:double well 3}\n\\end{figure}\n\t\n\\begin{figure}[t]\n\t\\begin{subfigure}{0.49 \\textwidth}\n \\centering\n\t\t\\includegraphics[width=\\textwidth]{brs-mfg-15.jpg}\n \t\\subcaption{$\\frac{\\sigma^2}{2} = 0.2$}\n \t\\label{fig:double well 4a}\n\t\\end{subfigure}\n\t\\begin{subfigure}{0.49 \\textwidth}\n \\centering\n\t\t\\includegraphics[width=\\textwidth]{brs-mfg-16.jpg}\n \t\\subcaption{$\\frac{\\sigma^2}{2} = 1$}\n \t\\label{fig:double well 4b}\n\t\\end{subfigure}\n\t\\caption{Simulation of BRS and MFG with logarithmic congestion and potential given in figure~\\ref{fig:double pot 4}}\n\t\\label{fig:double well 4}\n\\end{figure}\n\n\\section{Conclusion and outlook} \\label{sec:conclusion}\n\n\\noindent In this paper we have systematically compared two models of interacting multi-agent systems in the stationary case. In proving existence and uniqueness for each model we have seen that the BRS model can be reformulated as an implicit equation. This shows that the BRS model really only depends on local data of the cost function, while the MFG model, the solution of which is given by an elliptic equation, may have non-local dependencies on the data. 
The existence and uniqueness proofs were based on the important assumption that the congestion term is increasing. However, the regularity requirements on the MFG data are less strict than those on the BRS data. Finally, the proof gave insight into the dependence of each model on the diffusion coefficient. We remark that the strategy of the proof is interesting in its own right and that the only similar results, presented in~\\cite{Cirant2015}, are based on different assumptions.\\\\\nWe supported our analytic results by numerical simulations and investigated the similarities and differences of the MFG and BRS models systematically in various computational experiments. We are planning to extend the analysis and simulations to the dynamic case in the future, and to consider cost functions other than linear-quadratic ones.\n\n\\begin{figure}[t]\n\t\\begin{subfigure}{0.49 \\textwidth}\n \\centering\n\t\t\\includegraphics[width=\\textwidth]{brs-mfg-17.jpg}\n \t\\subcaption{$\\frac{\\sigma^2}{2} = 0.2$}\n \t\\label{fig:double well 5a}\n\t\\end{subfigure}\n\t\\begin{subfigure}{0.49 \\textwidth}\n \\centering\n\t\t\\includegraphics[width=\\textwidth]{brs-mfg-18.jpg}\n \t\\subcaption{$\\frac{\\sigma^2}{2} = 1$}\n \t\\label{fig:double well 5b}\n\t\\end{subfigure}\n\t\\caption{Simulation of BRS and MFG with logarithmic congestion and potential given in figure~\\ref{fig:double pot 5}}\n\t\\label{fig:double well 5}\n\\end{figure}\n\n\\bibliographystyle{abbrv}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\n\\label{sec:introduction}\nClusters of galaxies are the most massive individual bound objects in the Universe.\nIn their gravitational potential well, the gas, called the intra-cluster medium (ICM),\nis heated to temperatures of $10^{7-8}$\\,K and, therefore, strongly emits at X-ray energies.\nThe ICM is commonly thought to be in hydrostatic equilibrium, but there are several\nfactors that may affect the dynamical state of the gas. 
The feedback from active galactic nuclei (AGN) creates bubbles \nthat may drive turbulence up to about 500\\,km\\,s$^{-1}$ \n(see, e.g., \\citealt{Bruggen2005} and \\citealt{Fabian2005}). \nSloshing of gas within the gravitational potential may produce similar velocities,\nwhile galactic mergers can give rise to even higher velocities of about 1000\\,km\\,s$^{-1}$ \n(see, e.g., \\citealt{Lau2009} and \\citealt{Ascasibar2006}).\n\nThe AGN feedback is thought to offset radiative losses and \nto suppress cooling in isolated giant elliptical galaxies \nand in larger systems up to the richest galaxy clusters (see, e.g., \\citealt{McNamara2007}\nand \\citealt{Fabian2012}).\nSimulations and observations have confirmed that AGN feedback \nmay prevent cooling through the production of turbulence \n(see, e.g., \\citealt{Ruszkowski2004}, \\citealt{Zhuravleva2014}, and \\citealt{Gaspari2014}). \nOther work suggests that turbulent mixing may also be an important mechanism \nthrough which AGN heat cluster cores (see, e.g., \\citealt{Banerjee2014}).\n\n\\vspace{-0.05cm}\n\nIt is possible to measure velocity broadening on the order of a few hundred km\\,s$^{-1}$ directly in the X-ray emission lines\nproduced by the hot ICM. The Reflection Grating Spectrometers (RGS, \\citealt{denherder2001}) aboard \nXMM-\\textit{Newton} are currently the only X-ray instruments with enough collecting area and spectral resolution\nto enable this measurement.\nHowever, the spatial extent of clusters complicates the process due to the slitless\nnature of the RGS.\n\\citet{Sanders2010} made the first measurement of cluster velocity broadening\nusing the luminous cluster \\object{A\\,1835} at redshift 0.25. 
Due to the limited spatial extent \nof its bright core, an upper limit of 274\\,km\\,s$^{-1}$ was obtained.\n\\citet{Sanders2011} then constrained turbulent velocities for a large sample\nof 62 sources observed with XMM-\\textit{Newton}\/RGS, which included clusters, groups, and elliptical galaxies.\nHalf of them show velocity broadening below 700\\,km\\,s$^{-1}$. Recently, \n\\citet{Sanders2013} used continuum-subtracted emission line surface brightness\nprofiles to account for the spatial broadening. This technique is affected by systematic errors of\nup to 150\\,km\\,s$^{-1}$.\n\n\\citet{Werner2009} and \\citet{dePlaa2012} measured turbulent velocities through the ratio of the \n\\ion{Fe}{xvii} emission lines at 15 and 17\\,{\\AA}. When the velocity broadening is low, the gas is optically thick \nin the 15\\,{\\AA} line due to resonant scattering, while the 17\\,{\\AA} lines remain optically thin. The comparison\nof observed and simulated line ratios for different Mach numbers constrains the level of turbulence.\nThis method is very efficient for cool core clusters rich in \\ion{Fe}{xvii} emission lines, but it is partly\nlimited by the systematic uncertainty ($\\sim$20\\%) in the line ratio for an optically thin plasma.\n\nIn this work, we measure the velocity broadening\nfor the 44 sources of the CHEmical Enrichment RGS cluster Sample (CHEERS),\nwhich is connected to a Very Large Program accepted for XMM-\\textit{Newton} AO-12. \nWe model the line spatial broadening using CCD images.\nThis method has systematics due to the spatial profile \nof the continuum, which may overestimate the line spatial broadening,\nbut it is still a useful technique to measure the level of velocity broadening\nwhen deep, high spatial resolution maps are lacking. \nWe also test an alternative method, \nwhich uses a variable spatial broadening.\nThe paper is organized as follows. \nIn Sect.\\,\\ref{sec:cheers}, we give a brief description of the CHEERS project. 
\nIn Sect.\\,\\ref{sec:data}, we present the data reduction. \nOur method is described in Sect.\\,\\ref{sec:spectral_modeling}. \nWe discuss the results in Sect.\\,\\ref{sec:discussion} and \ngive our conclusions in Sect.\\,\\ref{sec:conclusion}.\nFurther supporting material is reported in Appendix\\,\\ref{sec:appendix}\nin order to keep the main text concise.\n\n\\vspace{-0.35cm}\n\n\\section{The CHEERS project}\n\\label{sec:cheers}\n\nThe current catalog includes 44 nearby, bright clusters, groups of galaxies, and elliptical galaxies\nwith a $\\gtrsim5\\sigma$ detection of the \\ion{O}{viii} 1s--2p line at 19\\,{\\AA}\nand with a well-represented variety of strong, weak, and non-cool-core objects. \nThis catalog also contains 19 new observations of 1.6\\,Ms in total, taken during AO-12 \n(PI: J. de Plaa; see Table~\\ref{table:log}). \nMore detail on the sample selection is provided in a separate paper \n(de Plaa et al., in preparation). \nAmong the several goals of this large project, we mention the following:\n\\vspace{-0.2cm}\n\\begin{itemize}\n \\item to understand the ICM metal enrichment by different SN types (see, e.g., Mernier et al., accepted),\n \\item to study substructures, asymmetries and multiphaseness,\n \\item to study heating and cooling in cluster cores,\n \\item to measure turbulence (this paper),\n \\item to improve the cross-calibration between X-ray satellites.\n\\end{itemize}\n\n\\vspace{-0.5cm}\n\n\\section{Data}\n\\label{sec:data}\n\nThe data used in this paper are listed in Table~\\ref{table:log}. \nIn boldface, we show the new observations\ntaken during AO-12. A few archival exposures have not been used, \nsince they were too short. \n\nThe XMM-\\textit{Newton} satellite is equipped with two types of X-ray detectors: \nthe CCD-type European Photon Imaging Cameras (EPIC) and the Reflection Grating Spectrometers (RGS). 
\nThe European Photon Imaging Cameras are\nMOS\\,1, MOS\\,2, and pn (\\citealt{Struder2001} and \\citealt{Turner2001}). \nThe RGS camera consists of two similar detectors, which have both a high effective area and \na high spectral resolution between 6 and 38\\,{\\AA} \\citep{denherder2001}.\nThe MOS cameras are aligned with the RGS detectors and have \nhigher spatial resolution than the pn camera. \nWe have used MOS\\,1 for imaging and RGS for spectral analysis.\n\n\n\\subsection{RGS and MOS 1 data reduction}\n\nThe data were reduced with the XMM-\\textit{Newton} Science Analysis System (SAS) v13.5.0. \nWe processed the RGS data \nwith the SAS task \\textit{rgsproc} and the MOS\\,1 data with \\textit{emproc} \nto produce event files, spectra, and response matrices for RGS and MOS data.\n\nTo correct for contamination from soft-proton flares, we used the SAS task \\textit{evselect}\nto extract light curves for MOS\\,1 in the 10--12 keV\nenergy band, while we used the data from CCD number 9 for RGS, \nwhere hardly any source emission is expected. We\nbinned the light curves in 100\\,s intervals. A Poissonian distribution was fitted to the\ncount-rate histogram, and all time bins outside the $2\\sigma$ level were rejected.\nWe built the good time interval (GTI) files from the accepted time events for the MOS and RGS data \nthrough the SAS task \\textit{tabgtigen} and \nreprocessed the data again with \\textit{rgsproc} and \\textit{emproc}. 
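The clipping step just described can be sketched as follows (a simplified stand-in for the SAS tasks, assuming the light curve has already been binned into counts per 100\,s bin; the Poissonian fit reduces to estimating its mean, and a production version would iterate the fit after rejecting outliers, since flares bias the mean upward):

```python
import statistics

def good_time_mask(counts, n_sigma=2.0):
    """Mark light-curve bins as good if their count rate lies within
    n_sigma of a Poissonian fitted to the count-rate histogram."""
    lam = statistics.mean(counts)   # maximum-likelihood estimate of the Poisson mean
    sigma = lam ** 0.5              # Poisson standard deviation
    lo, hi = lam - n_sigma * sigma, lam + n_sigma * sigma
    return [lo <= c <= hi for c in counts]
```

Bins flagged as bad would then be excluded when building the GTI files.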
\nThe RGS\\,1 total clean exposure times are quoted in Table\\,\\ref{table:log}.\n\n\\begin{table*}\n\\caption{XMM-\\textit{Newton}\/RGS observations used in this paper.} \n\\vspace{-0.25cm}\n\\label{table:log} \n\\renewcommand{\\arraystretch}{1.1}\n \\small\\addtolength{\\tabcolsep}{+2pt}\n \n\\scalebox{1}{%\n\\begin{tabular}{c c c c c c c } \n\\hline\\hline \nSource & ID $^{(a)}$ & Total clean time (ks) $^{(b)}$ & $kT$ (keV) $^{(c)}$ & $z\\,^{(c)}$ & $N_{\\rm H}$ ($10^{24}\\,{\\rm m}^{-2}$) $^{(d)}$\\\\ \n\\hline \n\\multirow{1}{*}{\\object{2A0335+096}} & 0109870101\/0201 0147800201 & 120.5 & 3.0 & 0.0349 & 30.7 \\\\\n\\multirow{1}{*}{\\object{A 85}} & \\textbf{0723802101\/2201} & 195.8 & 6.1 & 0.0556 & 3.10 \\\\\n\\multirow{1}{*}{\\object{A 133}} & 0144310101 \\textbf{0723801301\/2001} & 168.1 & 3.8 & 0.0569 & 1.67 \\\\\n\\multirow{1}{*}{\\object{A 189}} & 0109860101 & 34.7 & 1.3 & 0.0320 & 3.38 \\\\\n\\multirow{1}{*}{\\object{A 262}} & 0109980101\/0601 0504780101\/0201 & 172.6 & 2.2 & 0.0161 & 7.15 \\\\\n\\multirow{1}{*}{\\object{A 496}} & 0135120201\/0801 0506260301\/0401 & 141.2 & 4.1 & 0.0328 & 6.00 \\\\\n\\multirow{1}{*}{\\object{A 1795}} & 0097820101 & 37.8 & 6.0 & 0.0616 & 1.24 \\\\\n\\multirow{1}{*}{\\object{A 1991}} & 0145020101 & 41.6 & 2.7 & 0.0586 & 2.72 \\\\\n\\multirow{1}{*}{\\object{A 2029}} & 0111270201 0551780201\/0301\/0401\/0501 & 155.0 & 8.7 & 0.0767 & 3.70 \\\\\n\\multirow{1}{*}{\\object{A 2052}} & 0109920101 0401520301\/0501\/0601\/0801 & 104.3 & 3.0 & 0.0348 & 3.03 \\\\\n & 0401520901\/1101\/1201\/1301\/1601\/1701 & & & & \\\\\n\\multirow{1}{*}{\\object{A 2199}} & 0008030201\/0301\/0601 \\textbf{0723801101\/1201} & 129.7 & 4.1 & 0.0302 & 0.909 \\\\\n\\multirow{1}{*}{\\object{A 2597}} & 0108460201 0147330101 \\textbf{0723801601\/1701} & 163.9 & 3.6 & 0.0852 & 2.75 \\\\\n\\multirow{1}{*}{\\object{A 2626}} & 0083150201 0148310101 & 56.4 & 3.1 & 0.0573 & 4.59 \\\\\n\\multirow{1}{*}{\\object{A 3112}} & 0105660101 0603050101\/0201 & 173.2 & 4.7 
& 0.0750 & 1.38 \\\\\n\\multirow{1}{*}{\\object{A 3526}} & 0046340101 0406200101 & 152.8 & 3.7 & 0.0103 & 12.2 \\\\\n\\multirow{1}{*}{\\object{A 3581}} & 0205990101 0504780301\/0401 & 123.8 & 1.8 & 0.0214 & 5.32 \\\\\n\\multirow{1}{*}{\\object{A 4038}} & 0204460101 \\textbf{0723800801} & 82.7 & 3.2 & 0.0283 & 1.62 \\\\\n\\multirow{1}{*}{\\object{A 4059}} & 0109950101\/0201 \\textbf{0723800901\/1001} & 208.2 & 4.1 & 0.0460 & 1.26 \\\\\n\\multirow{1}{*}{\\object{AS 1101}} & 0147800101 0123900101 & 131.2 & 3.0 & 0.0580 & 1.17 \\\\\n\\multirow{1}{*}{\\object{AWM 7}} & 0135950301 0605540101 & 158.7 & 3.3 & 0.0172 & 11.9 \\\\\n\\multirow{1}{*}{\\object{EXO 0422}} & 0300210401 & 41.1 & 3.0 & 0.0390 & 12.4 \\\\\n\\multirow{1}{*}{\\object{Fornax}} & 0012830101 0400620101 & 123.9 & 1.2 & 0.0046 & 1.56 \\\\\n\\multirow{1}{*}{\\object{HCG 62}} & 0112270701 0504780501 0504780601 & 164.6 & 1.1 & 0.0140 & 3.76 \\\\\n\\multirow{1}{*}{\\object{Hydra-A}} & 0109980301 0504260101 & 110.4 & 3.8 & 0.0538 & 5.53 \\\\\n\\multirow{1}{*}{\\object{M 49}} & 0200130101 & 81.4 & 1.0 & 0.0044 & 1.63 \\\\\n\\multirow{1}{*}{\\object{M 86}} & 0108260201 & 63.5 & 0.7 & -0.0009 & 2.97 \\\\\n\\multirow{1}{*}{\\object{M 87} (Virgo)} & 0114120101 0200920101 & 129.0 & 1.7 & 0.0042 & 2.11 \\\\\n\\multirow{1}{*}{\\object{M 89}} & 0141570101 & 29.1 & 0.6 & 0.0009 & 2.96 \\\\\n\\multirow{1}{*}{\\object{MKW 3s}} & 0109930101 \\textbf{0723801501} & 145.6 & 3.5 & 0.0450 & 3.00 \\\\\n\\multirow{1}{*}{\\object{MKW 4}} & 0093060101 \\textbf{0723800601\/0701} & 110.3 & 1.7 & 0.0200 & 1.88 \\\\\n\\multirow{1}{*}{\\object{NGC 507}} & \\textbf{0723800301} & 94.5 & 1.3 & 0.0165 & 6.38 \\\\\n\\multirow{1}{*}{\\object{NGC 1316}} & 0302780101 0502070201 & 165.9 & 0.6 & 0.0059 & 2.56 \\\\\n\\multirow{1}{*}{\\object{NGC 1404}} & 0304940101 & 29.2 & 0.6 & 0.0065 & 1.57 \\\\\n\\multirow{1}{*}{\\object{NGC 1550}} & 0152150101 \\textbf{0723800401\/0501} & 173.4 & 1.4 & 0.0123 & 16.2 \\\\\n\\multirow{1}{*}{\\object{NGC 3411}} & 
0146510301 & 27.1 & 0.8 & 0.0152 & 4.55 \\\n\multirow{1}{*}{\object{NGC 4261}} & 0056340101 0502120101 & 134.9 & 0.7 & 0.0073 & 1.86 \\\n\multirow{1}{*}{\object{NGC 4325}} & 0108860101 & 21.5 & 1.0 & 0.0259 & 2.54 \\\n\multirow{1}{*}{\object{NGC 4374}} & 0673310101 & 91.5 & 0.6 & 0.0034 & 3.38 \\\n\multirow{1}{*}{\object{NGC 4636}} & 0111190101\/0201\/0501\/0701 & 102.5 & 0.8 & 0.0037 & 2.07 \\\n\multirow{1}{*}{\object{NGC 4649}} & 0021540201 0502160101 & 129.8 & 0.8 & 0.0037 & 2.23 \\\n\multirow{1}{*}{\object{NGC 5044}} & 0037950101 0584680101 & 127.1 & 1.1 & 0.0090 & 6.24 \\\n\multirow{1}{*}{\object{NGC 5813}} & 0302460101 0554680201\/0301\/0401 & 146.8 & 0.5 & 0.0064 & 6.24 \\\n\multirow{1}{*}{\object{NGC 5846}} & 0021540101\/0501 \textbf{0723800101\/0201} & 194.9 & 0.8 & 0.0061 & 5.12 \\\n\multirow{1}{*}{\object{Perseus}} & 0085110101\/0201 0305780101 & 162.8 & 6.8 & 0.0183 & 20.7 \\\n\hline \n\end{tabular}}\n\n$^{(a)}$ Exposure ID number. $^{(b)}$ RGS net exposure time. \n$^{(c)}$ Redshifts and temperatures are adapted from \cite{Chen2007} and \cite{Snowden2008}. \n$^{(d)}$ Hydrogen column density (see http:\/\/www.swift.ac.uk\/analysis\/nhtot\/).\nNew observations from our proposal are shown in boldface.\\\n \vspace{-0.5cm}\n\end{table*}\n\n\n\subsection{RGS spectra extraction}\n\label{sec:rgs_regions}\n\nWe extracted the RGS source spectra in two alternative regions centered on the emission peak:\na broader 3.4' region, which includes most of the RGS field of view, and a narrower 0.8' region, which covers\nonly the cluster cores but retains high statistics. 
{This was done by launching \textit{rgsproc} twice,\nsetting the \textit{xpsfincl} mask to include 99\% and 90\% of point-source events \ninside the spatial source extraction mask, respectively.} We have used the model background spectrum \ncreated by the standard RGS \textit{rgsproc} pipeline, a template background file\nbased on the count rate in CCD\,9. \nThe RGS spectral extraction regions and the MOS\,1 image of M\,87 are shown in Fig.~\ref{fig:rgs_regions}. \nThe spectra were converted to SPEX\footnote{www.sron.nl\/spex} format through the SPEX task \textit{trafo}.\n{During the spectral conversion, we chose the option of \textit{sectors} in the task \textit{trafo}\nto create as many sectors as the different exposures of each source. This permits us to simultaneously\nfit the multiple RGS spectra of each source by choosing which parameters to either couple or decouple \nin the spectral models of different observations.}\n\n\begin{figure}\n \begin{center}\n \subfigure{ \n \includegraphics[bb=15 15 515 348, width=7.5cm]{ds9_M87_2_edited.ps}}\n \vspace{-0.25cm}\n \caption{RGS extraction regions and MOS\,1 stacked image of M\,87.}\n \label{fig:rgs_regions}\n \end{center}\n \vspace{-0.5cm}\n\end{figure}\n\n \vspace{-0.5cm}\n\n\subsection{MOS 1 spatial broadening profiles}\n\label{sec:spatial_profile}\n\nThe RGS spectrometers are slitless, and, therefore,\nthe spectra are broadened because of the spatial extent of the source in the dispersion direction. \nThe effect of this spatial broadening is described by the following wavelength shift\n\begin{equation}\n\Delta\lambda = \frac{0.138}{m} \, \Delta\theta \, {\mbox{\AA}},\n\end{equation}\nwhere $m$ is the spectral order and $\Delta\theta$ is the offset angle of the source in arcmin\n(see the XMM-\textit{Newton} Users Handbook).\n\nThe MOS\,1 DET\,Y direction is parallel to the RGS\,1 dispersion direction \nand can be used to correct for the spatial broadening. 
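For reference, the wavelength shift above is straightforward to evaluate; a minimal numeric sketch (the offsets are arbitrary examples):

```python
def rgs_wavelength_shift(order, offset_arcmin):
    """Wavelength shift (in Angstrom) induced by a source offset of
    `offset_arcmin` arcmin in the dispersion direction, for spectral
    order `order`: Delta-lambda = (0.138 / m) * Delta-theta."""
    return 0.138 / order * offset_arcmin

# A 1' offset shifts first-order lines by 0.138 A and
# second-order lines by half as much.
shift_m1 = rgs_wavelength_shift(1, 1.0)  # 0.138 A
shift_m2 = rgs_wavelength_shift(2, 1.0)  # 0.069 A
```

A spatial extent of even a fraction of an arcmin thus produces shifts comparable to the widths of the lines of interest, which is why the broadening correction matters.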
\n{With \\textit{evselect}}, we extracted MOS\\,1 images for each exposure in the 0.5--1.8\\,keV\n(7--25\\,{\\AA}) energy band and their surface brightness profiles in the dispersion\ndirection.\nWe account for spatial broadening using the \\textit{lpro} multiplicative model in SPEX,\nwhich convolves the RGS response with our model of the spatial extent of the source. \nWe show some cumulative profiles of spatial broadening in Fig.~\\ref{fig:profiles}.\nWe have also produced stacked Fe-L band (10--14\\,{\\AA}) images for each source. \nThe central 10' region contains most of the cluster emission \n(see Fig.~\\ref{fig:mos1}). \n\n\\begin{figure}\n \\begin{center}\n \\subfigure{ \n \\includegraphics[bb=66 66 536 707, width=6cm, angle=90]{xxx_IDL_MOS1_comparing_profiles_overplots_sources.ps}}\n \\vspace{-0.3cm}\n \\caption{MOS\\,1 average 7--25\\,{\\AA} cumulative spatial profiles.}\n \\label{fig:profiles}\n \\end{center}\n \\vspace{-0.4cm}\n\\end{figure}\n\n\\vspace{-0.4cm}\n\n\n\\section{Spectral modeling}\n\\label{sec:spectral_modeling}\n\nOur analysis focuses on the $7-28$ {\\AA} ($0.44-1.77$ keV) first and second order RGS spectra.\nWe model the spectra with SPEX\nversion 2.03.03. \nWe scale elemental abundances to the proto-Solar values \nof \\citet{Lodders09}, which are the default in SPEX. \nWe adopt C-statistics and $1\\,\\sigma$ errors throughout the paper,\nunless otherwise stated, and the updated ionization balance calculations of \\citet{Bryans2009}.\n\n\nClusters of galaxies are not isothermal, and most of them have both hot and cool gas phases \n(see e.g. \\citealt{Frank2013}). Therefore, we have used a two-temperature thermal plasma model\nof collisional ionization emission (CIE). This model is able to fit all the spectra in our database. \nThe \\textit{cie} model in SPEX calculates the spectrum of a plasma \nin collisional ionization equilibrium. 
The basis for this\nmodel is given by the \textit{mekal} model, but several updates have been included (see the SPEX manual).\nFree parameters in the fits are the emission measure $Y=\int n_{\rm e}\,n_{\rm H}\,{\rm d}V$, the temperature $T$, \nthe abundances (N, O, Ne, Mg, and Fe), and the turbulent broadening $v$ of the two \textit{cie} models.\n\nWe couple the parameter $v$ and the abundances of the two \textit{cie} components\nand assume that the gas phases have the same turbulence and abundances. \nThis reduces the degeneracy of the model. \n{This assumption does not strictly hold, but some clusters need just one CIE component,\nand the spectra of several clusters do not have sufficient statistics in both high- and low-ionization emission lines,\nwhich prevents us from constraining the velocities and the abundances of the hot and cool phases separately.\nWe attempt to constrain the turbulence in the different phases in Sect.\,\ref{sec:temperature}.}\n\nThe \textit{cie} models are multiplied by a \textit{redshift} model \nand a model for Galactic absorption, which is provided by the \textit{hot} model in SPEX \nwith $T=0.5$\,eV and $N_{\rm H}^{\rm TOT}$, as estimated through the tool of \citet{Willingale2013}. \nThis tool includes the\ncontribution to absorption from both atomic and molecular hydrogen. The redshifts and\ncolumn densities that have been adopted are shown in Table~\ref{table:log}.\nTo correct for spatial broadening, we have multiplied the spectral model \nby the \textit{lpro} component, which receives as input the surface brightness profile\nextracted in the MOS\,1 images (see Sect.\,\ref{sec:spatial_profile} and Fig.\,\ref{fig:profiles}).\n\nWe do not explicitly model the cosmic X-ray background in the RGS spectra \nbecause any diffuse emission feature \nwould be smeared out into a broad continuum-like component. 
\n\nFor a few sources, including the Perseus and \object{Virgo} (M\,87) clusters, \nwe have added a further power-law emission component\nof slope $\sim2$ to account for the emission from the central AGN.\nThis component is not convolved with the spatial profile because \nit is produced by a central point-like source.\n\nTo avoid the systematic effects due to the stacking of multiple observations\nwith different pointings,\nwe have simultaneously fitted the individual spectra\nof each source extracted in the two regions defined in Sect.~\ref{sec:rgs_regions} \nand shown in Fig.\,\ref{fig:rgs_regions}. \nThe plasma model is coupled between the observations. \nThe only uncoupled parameters are the emission measures \nof the two collisionally ionized gas components. \nFor each observation we adopt the spatial profile extracted in the MOS\,1 image taken during that exposure.\nFor those exposures during which the MOS\,1 detector had a closed filter, \nwe have adopted an exposure-weighted average profile from the other available observations, \nbut the $\delta\lambda$ parameter in the \textit{lpro} component is left free. \nThis factor allows us to shift the model lines by the same amount (in {\AA}) for each specific spectrum\nand strongly decreases the systematic effects.\nThe $\delta\lambda$ parameter is always free in our fits to account for any redshift variation,\nwhich would otherwise affect the line modeling (see, e.g., 
\citealt{Sanders2011}).\n\nThe simultaneous modeling of multiple observations has been done through the use\nof the \textit{sectors} option in SPEX (see also Sect.\,\ref{sec:rgs_regions}).\nThe RGS\,1 and 2 spectra of the same observation\nhave exactly the same model and provide a single sector, while RGS spectra of other observations\ncontribute additional sectors and have the \textit{cie} normalizations uncoupled.\n\n\subsection{{Results using a fixed spatial broadening}}\n\label{sec:results}\n\nWe have successfully applied this multi-temperature model to both the 3.4' and 0.8' RGS spectra.\nWe show the spectral modeling for the 3.4' region of the 44 sources \nin Figs.~\ref{fig:rgs_fits}, \ref{fig:rgs_fits2}, and \ref{fig:rgs_fits3} in Appendix\,\ref{sec:appendix}. \nWe display the first-order stacked spectra to underline the high quality of these observations\nand the goodness of the fits.\n\nFor some sources like Fornax, M\,49, M\,86, NGC\,4636, and NGC\,5813, \nthe 15 and 17\,{\AA} \ion{Fe}{xvii} emission lines are not well fitted. \nSpecifically, the model underestimates the line peaks and overestimates the broadening.\nThis may be due to the different spatial distributions of the gas responsible for the cool \ion{Fe}{xvii} \nemission lines and of the gas producing most of the high-ionization Fe-L and \ion{O}{viii} lines. \nThe cool gas is indeed found predominantly in the centers of the clusters, \nwith a profile more peaked than that of the hotter gas. \nThe estimated spatial profiles are dominated by the emission of the hotter gas because of its higher emission measure,\nand, therefore, they overestimate the spatial broadening of the 15--17\,{\AA} lines.\nIt is hard to extract a spatial profile for these lines because MOS\,1 has a limited spectral resolution,\nand the images extracted in such a narrow band lack the necessary statistics \n(see, e.g., \citealt{Sanders2013}). 
{In Sect.\\,\\ref{sec:temperature}, we attempt to constrain the turbulence\nfor lines of different ionization states.} The 15\\,{\\AA}\\,\/\\,17\\,{\\AA} line ratio is also affected \nby resonant scattering, which would require a different approach. \nWe refer to a forthcoming paper on the analysis of the resonant scattering in the CHEERS sources.\n\nWe skip the discussion of the abundances and the supernova yields \nbecause these will be treated by other papers \nof this series (de\\,Plaa et al. in preparation and Mernier et al. submitted).\n\nIn Fig.\\,\\ref{fig:turbulence2} (\\textit{\\textit{left panel}}), we show the upper limits on the velocity broadening obtained \nwith the simultaneous fits of the 0.8' 7--28\\,{\\AA} RGS spectra.\nWe obtain upper limits for most clusters, while \nNGC\\,507 shows high kinematics. {More detail on our results for the 3.4' and 0.8' regions \nand their comparison are reported in Table\\,\\ref{table:velocity_results} and Fig.\\,\\ref{fig:velocities_comparison} (\\textit{\\textit{left panel}}).\nThe 3.4' limits are more affected by the source continuum,\nas clearly seen for M\\,87, AWM\\,7, and A\\,4038,\nwhich makes them less reliable.\n\n\\begin{figure*}\n \\begin{center}\n \\subfigure{ \n \\includegraphics[bb=110 77 535 723, width=9cm]{CHEERS_turbulence_reg0_ABC_bryans09_AVG_10am_sectors_1sigma_sectors_all_cut_edited.txt.ps} \\hspace{0cm} \n \\includegraphics[bb=110 77 535 723, width=9cm]{CHEERS_turbulence_reg0_ABC_bryans09_AVG_10am_sectors_1sigma_sectors_all_cut_edited_sfree.txt.ps} }\n \\vspace{-0.2cm}\n \\caption{\\textit{Left panel}: Velocity 68\\% (red) and 90\\% (green) limits for the 0.8' region\n with the spatial broadening determined with MOS\\,1 images (see Sect.\\,\\ref{sec:results}). 
\n \\textit{Right panel}: Velocity limits obtained using the best-fit spatial broadening\n (see Sect\\,\\ref{sec:results_combined}).}\n \\label{fig:turbulence2}\n \\end{center}\n \\vspace{-0.6cm}\n\\end{figure*}\n\n\\subsection{{Results using the best-fit spatial broadening}}\n\\label{sec:results_combined}\n\nIt is known that the spatial profile of the source continuum may be\nbroader than the spatial distribution of the lines. The MOS\\,1 \nimages are strongly affected by the profile of the source continuum\nand, therefore, may overestimate the spatial line broadening\nand underestimate the residual velocity broadening. \nFor instance, NGC\\,1316 and NGC\\,5846 \nshow $1\\sigma$ limits of 20\\,km\\,s$^{-1}$, which are not realistic (see Table\\,\\ref{table:velocity_results}).\n\nTo obtain more conservative limits, we have simultaneously modeled\nthe spatial and the velocity broadening. This was done by fitting the RGS 0.8' spectra\nwith a free \\textit{s} parameter in the \\textit{lpro} component.\nThis factor simply scales the width of the spatial broadening \nby a factor free to vary (see the SPEX manual).\nThe free \\textit{s} parameter increases the degeneracy in the model \nbut provides conservative upper limits on the residual velocity broadening,\nwhich is measured with the $v$ parameter of the \\textit{cie} component.\nThe new limits on the velocities are plotted in Fig.\\,\\ref{fig:turbulence2} (\\textit{right panel})\nand quoted in the last two columns of Table\\,\\ref{table:velocity_results}.\nIn Fig.\\,\\ref{fig:velocities_comparison} (\\textit{right panel}), we compare the velocity upper limits\nestimated with the standard method (MOS\\,1 spatial profile with $s\\equiv1$ in the \\textit{lpro} component)\nwith this new approach using a free $s$ parameter. They generally agree,\nbut the new upper limits on the hotter Abell clusters\nare systematically larger by an average factor $\\sim2$. 
\nThis confirms that some of the previous velocity limits were \nunderestimated because of the overly broad spatial profiles.\nTherefore, we believe the new upper limits to be the most conservative.\nInterestingly, the conservative velocity limits of the hot clusters \nare generally higher than those of the cool galaxy groups, with the exception of NGC\,507,\nwhich is expected since the sound speed scales as a power\nof the temperature (see Sect.\,\ref{sec:turbulence}).\n\n \vspace{-0.4cm}\n\n\subsection{Further tests}\n\label{sec:tests}\n\nTo estimate the contribution of the spatial broadening to the line widths,\nwe have temporarily removed the convolution of the spectral model \nwith the spatial profile and refitted the data. \nIn these fits the $v$ parameter of the \textit{cie} component \naccounts for any contribution to the line broadening. \nThe total (spatial + velocity) widths are also quoted in Table\,\ref{table:velocity_results}.\n\nWe have also tested the continuum-subtracted line surface brightness profiles \nintroduced by \cite{Sanders2013}. This method consists of subtracting the surface brightness profile\nof a clearly continuum-dominated region (outskirts) from that of a line-dominated region (core).\nIt can be applied only to those objects with a narrow core \nwhere it is possible to distinguish between line-rich and line-poor regions.\nWe have locally fitted the \ion{O}{viii} 19.0\,{\AA} emission line of \nA\,2597, A\,3112, Hydra-A, Fornax (\object{NGC\,1399}), and NGC\,4636 \nand found general agreement with the results of \cite{Sanders2013}.\nHowever, our MOS\,1 images have a much lower spatial resolution than the \textit{Chandra} maps\nused by them, which increases the uncertainties of this method. 
\nA thorough, extensive analysis would require deep \textit{Chandra} maps \nthat are not yet available.\n\n\n\n\section{Discussion}\n\label{sec:discussion}\n\nIn this work we have analyzed the data of 44 clusters, groups of galaxies, and elliptical galaxies\nincluded in the CHEERS project, a Very Large Program that was accepted \nfor XMM-\textit{Newton} AO-12 (see Sect.\,\ref{sec:cheers}), \ntogether with complementary archival data.\n\nWe have measured upper limits on the velocity broadening for these objects\nwith a method similar to that previously used by \citet{Bulbul2012} and \citet{Sanders2013}. \nThis consists of fitting high-quality grating spectra\nby removing the spatial broadening through surface brightness profiles of the sources as provided\nby CCD imaging detectors. \nThese profiles are unfortunately affected by the source continuum\nand tend to overestimate the line spatial broadening, with a consequent\nunderestimate of the residual velocity broadening. \n\citet{Sanders2013} addressed this point in their Sect.\,2.2 on A\,3112,\nwhere they decreased these systematic effects by using \textit{Chandra} continuum-subtracted\nline spatial profiles. We have tested this method by using the MOS\,1 observations\nthat were taken simultaneously with the RGS spectra (see Fig.\,\ref{fig:mos1}). 
\nXMM-\\textit{Newton} CCDs have a spatial resolution lower than \\textit{Chandra} CCDs, \nwhich increase the systematic effects in the creation of continuum-subtracted maps.\nDeep \\textit{Chandra} observations, enabling an accurate \nsubtraction of different energy bands, are missing for most sources.\nWe have therefore tried to use the MOS\\,1 integral maps\nand to fit the contribution of the spatial broadening\nas an alternative method.\n\n\n\\subsection{Temperature dependence of the upper limits}\n\\label{sec:temperature}\n\n{So far we adopted the same velocity broadening for all the emission lines.\nFor most sources it is possible to measure the velocity broadening of the \n\\ion{O}{viii} and \\ion{Fe}{xx-to-xxiv} emission lines, which are mainly produced by hot gas.\nOnly a few sources have high-statistics \\ion{Fe}{xvii} lines produced by cool ($T<1$\\,keV) gas.\nSix objects exhibit both strong low- and high-ionization lines \nand allow to fit the velocity broadening of the two \\textit{cie} components, separately,\nin the full-band spectral fits.\n17 sources allow to measure 90\\% upper limits on turbulence\nfor the \\ion{O}{viii}, \\ion{Fe}{xvii}, and \\ion{Fe}{xx} lines, \nby fitting the $18.0-23.0$\\,{\\AA}, $14.0-18.0$\\,{\\AA}, and $10.0-14.3$\\,{\\AA} \nrest-frame wavelength ranges, respectively. \nFor each local fit we adopt an isothermal model and correct for spatial broadening\nby using additional surface-brightness profiles calculated through MOS\\,1 images\nextracted in the same rest-frame wavelength ranges using the same method\nshown in Sect.\\,\\ref{sec:spatial_profile}. 
These profiles are still affected by the \ncontinuum but provide a better description of the spatial broadening in each line.\nIn Fig.\,\ref{fig:turb_vs_band} we compare the \ion{O}{viii} velocity limits\nwith those measured for the \ion{Fe}{xvii} and \ion{Fe}{xx} line systems.\nThe high-ionization Fe lines clearly show higher upper limits,\nwhich is confirmed by the results of the full-band fits: \nthe hotter (T1) CIE component allows for higher values of velocity broadening.\nThe hotter gas is distributed over a larger region than the cold (T2) gas \nand has a larger spatial broadening,\nwhich affects the T1-T2 results shown in this plot. The \ion{O}{viii}, \n\ion{Fe}{xvii}, and \ion{Fe}{xx} lines were fitted by subtracting the spatial broadening \nextracted exactly in their energy band, which should partly correct this systematic effect,\nbut it is difficult to estimate the systematic uncertainties\ndue to the low spatial (and spectral) resolution of the CCD data.\nTo some extent, the hotter phase may still have larger turbulence. 
\nFor clarity, we also tabulate these line-band fits in Table\,\ref{table:physical_properties}.\nWe also note that the velocity limits of low- and high-ionization iron lines\nfall on opposite sides of the Fe--\ion{O}{viii} 1:1 line, which means that the metallicity \ndistribution in the sources should not affect our broad-band, multi-ion limits \nshown in Fig.\,\ref{fig:turbulence2}.}\n\n\begin{figure}\n \subfigure{ \n \includegraphics[bb=58 54 540 730, width=6.5cm, angle=+90]{IDL_line_spatial_comparison_bands.ps}}\n \caption{{90\% upper limits on velocity broadening obtained in the 0.8' region \n for the \ion{O}{viii} lines compared with those measured for high-ionization \ion{Fe}{xx} \n (open red triangles) and for the low-ionization \ion{Fe}{xvii} (filled black triangles) line systems.\n For six sources we could also measure the limits for the hot (open green circles) \n and the cool (open magenta boxes) CIE components {(see Sect.\,\ref{sec:temperature})}.\n \label{fig:turb_vs_band}}}\n\end{figure}\n\n\n\subsection{Turbulence}\n\label{sec:turbulence}\n\nIn Fig.\,\ref{fig:turbulence2} we show the velocity broadening \nof the RGS spectra extracted in the 0.8' core region. \nWe find upper limits on the velocity broadening, with the possible exception of NGC\,507. \nThey generally range between 200 and 600\,km\,s$^{-1}$. \nFor several objects like A\,85, A\,133, M\,49, and most NGC ellipticals, \nwe found velocities below 500\,km\,s$^{-1}$, which \nsuggests low turbulence.\nThe broader 3.4' region is more affected by spatial broadening, \nas shown by the higher upper limits (still non-detections) of\nA\,4038, AWM\,7, and M\,87 (see Table\,\ref{table:velocity_results}\nand Fig.\,\ref{fig:velocities_comparison}). 
\nFor these sources it is difficult to constrain the velocity broadening \nbecause their large extent smears out the emission lines.\n\nTo understand how much energy can be stored in turbulence, we \ncompare our upper limits with the sound speeds and the temperatures of the \ndominant \textit{cie} component in these objects.\nThe sound speed is given by $c_S = \sqrt{\gamma \, k \, T \/ (\mu m_{\rm p})}$,\nwhere $\gamma$ is the adiabatic index, which is 5\/3 for an ideal monoatomic gas,\n$T$ is the RGS temperature, $\mu=0.6$ is the mean molecular weight, and $m_{\rm p}$ is the proton mass.\nThe ratio between turbulent and thermal energy is \n$\varepsilon_{\rm turb}\/\varepsilon_{\rm therm}=(\gamma\/2)\,M^2$,\nwhere $M=v_{\rm turb}\/c_S$ is the Mach number (see also \citealt{Werner2009}).\nIn Fig.\,\ref{fig:velocities_comparison} we compare our $2\sigma$ upper limits \non the velocities in the central 0.8' region\nwith the sound speed and with several fractions of turbulent energy.\nWe also show the more conservative velocity upper limits\nthat were measured with a variable spatial broadening. \nAt least for half the sample, our 90\% upper limits are below the sound speed in the system.\nIn about ten objects the turbulence contains less than 40\%\nof the thermal energy. This is similar to the previous results\nof \cite{Sanders2013}. 
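These relations translate a velocity limit directly into an energy fraction; a minimal sketch (the 4\,keV temperature and 600\,km\,s$^{-1}$ limit are illustrative inputs, not fitted values):

```python
import math

KEV_TO_J = 1.602176634e-16   # J per keV
M_PROTON = 1.67262192e-27    # proton mass in kg

def sound_speed_km_s(kT_keV, gamma=5.0 / 3.0, mu=0.6):
    """c_S = sqrt(gamma * kT / (mu * m_p)), with kT in keV, in km/s."""
    return math.sqrt(gamma * kT_keV * KEV_TO_J / (mu * M_PROTON)) / 1e3

def turb_energy_fraction(v_turb_km_s, kT_keV, gamma=5.0 / 3.0):
    """epsilon_turb / epsilon_therm = (gamma / 2) * M^2, M = v_turb / c_S."""
    mach = v_turb_km_s / sound_speed_km_s(kT_keV, gamma)
    return 0.5 * gamma * mach**2

# A 600 km/s upper limit in a 4 keV cluster:
cs = sound_speed_km_s(4.0)               # ~1030 km/s
frac = turb_energy_fraction(600.0, 4.0)  # ~0.28 of the thermal energy
```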
Apparently, the hotter objects allow for higher velocities.\n\n{We note that the spectral extraction region had a fixed angular size, and, therefore,\nthe actual physical scale over which we estimated the velocity broadening depends on the source distance.\nIn Fig.\,\ref{fig:turb_vs_temp_scaled} (\textit{left panel}), we show the Mach numbers\nfor the 90\% conservative upper limits as a function of the temperature, and we compare\nthe average upper limits on the Mach number calculated within different ranges of physical scales.\nThere is no significant trend with the temperature,\nbut the average upper limit on the Mach number is lower for narrower physical scales.\nAssuming a Kolmogorov spectrum for the turbulence in these objects, \nthe root-mean-square velocity scales as the 1\/3 power of the physical length.\nTherefore, we scaled the upper limits by ${(sc\/sc_{\rm min})}^{1\/3}$, \nwhere $sc_{\rm min}$ is the \nminimum physical scale per arcsec, $\sim0.07$\,kpc\/1\", that of NGC\,4636,\nthe nearest object in our sample.\nIn other words, we divided our upper limits by the physical scale per arcsec \nrelative to NGC\,4636, which is equivalent to normalizing by the ratio\nbetween the size of the spectral extraction region of each cluster\nand that of NGC\,4636.\nThe scaled upper limits on the Mach numbers are tabulated \nin Table\,\ref{table:physical_properties} and plotted\nin Fig.\,\ref{fig:turb_vs_temp_scaled} (\textit{right panel}).\nThey are randomly distributed around $Ma\sim0.8$ and \nno longer depend on the physical scale. \nWe coded the point size and the colors according to the values of $r_{500}$ and $K_0$ taken from 
The $r_{500}$ is the radius within which the mean over-density \nof the cluster is 500 times the critical density at the cluster redshift,\nand $K_0$ is the value of the central entropy in the same cluster.\nAll the adopted values and their references are reported in Table\\,\\ref{table:physical_properties}.\nWe do not find any significant relation between the upper limits on the Mach number and \nthese physical properties, possibly due to the limited sample.}\n\n{To understand whether dissipation of turbulence may prevent cooling\nin our sample, we computed the Mach number that is required to balance \nthe heating and cooling, according to the following equation:}\n\\begin{equation}\\label{eq:mach}\nMa_{REQ} \\approx 0.15 \\left( \\frac{n_e}{10^{-2} \\, {\\rm cm}^{-3}} \\right)^{1\/3} \n \\, \\left( \\frac{c_s}{10^3 \\, {\\rm km\\,s}^{-1}} \\right)^{-1} \n \\, \\left( \\frac{l}{10 \\, {\\rm kpc}} \\right)^{1\/3}\n\\end{equation}\n{where $n_e$ is the density at the cavity location, $c_s$ the sound speed\nthat we have estimated through the RGS temperature, \nand $l$ the characteristic eddy size, which we take as the\naverage cavity size (see \\citealt{Zhuravleva2014}).\nThe Mach numbers required to balance cooling are tabulated \nin Table\\,\\ref{table:physical_properties}.\nMost cavity sizes were taken from \\citet{Panagoulia2014b}.\nFor clusters with multiple cavities, we used an average size.\nFor the 19 sources outside of their sample, we used their $r-T$ relation \nto determine the cavity size. 
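Eq.\,(\ref{eq:mach}) is simple to evaluate; a small sketch with illustrative inputs (not values taken from Table\,\ref{table:physical_properties}):

```python
def mach_required(n_e_cm3, c_s_km_s, l_kpc):
    """Mach number needed for turbulent dissipation to balance cooling,
    following the scaling of Eq. (mach), cf. Zhuravleva et al. (2014):
    Ma_req ~ 0.15 (n_e/1e-2)^(1/3) (c_s/1e3)^(-1) (l/10)^(1/3)."""
    return (0.15
            * (n_e_cm3 / 1e-2) ** (1.0 / 3.0)
            * (c_s_km_s / 1e3) ** (-1.0)
            * (l_kpc / 10.0) ** (1.0 / 3.0))

# Reference values: n_e = 1e-2 cm^-3, c_s = 1000 km/s, 10 kpc eddies.
ma_ref = mach_required(1e-2, 1000.0, 10.0)  # 0.15
```

Comparing such values with the measured (scaled) upper limits gives the kind of ratio plotted in Fig.\,\ref{fig:Mach_vs_temp_scaled}.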
Most densities were taken from the ACCEPT\ncatalog.\n}\n\n{In Fig.\,\ref{fig:Mach_vs_temp_scaled}, we show, as a function of the RGS temperature, the ratios between \nthe conservative upper limits on the scaled Mach numbers (assuming Kolmogorov turbulence)\nand those required to balance cooling.\nFor most sources, our upper limits are larger than the Mach numbers required for balance,\nwhich means that dissipation of turbulence can provide enough heat\nto prevent the cooling of the gas in the cores.}\n\n{It is difficult to establish the main mechanism that produces turbulence\nin these objects. Our scaled upper limits are mostly below 500\,km\,s$^{-1}$,\nwhich can be produced by bubbles inflated by past AGN activity (see, e.g., \citealt{Bruggen2005}).\nFor some objects, our upper limits are consistent with velocities up to 1000\,km\,s$^{-1}$, \nwhich would correspond to Mach numbers larger than one.\nFor NGC\,507, we detect transonic motions presumably due to merging\n(see, e.g., \citealt{Ascasibar2006}).\nIn a forthcoming paper, we will analyze the resonant scattering of the \ion{Fe}{xvii} lines\nexhibited by half of our sample to place lower limits on turbulent broadening \nand to provide more insight into its origin and its role in preventing cooling.}\n\n\begin{figure}\n \subfigure{ \n \includegraphics[bb=65 85 525 686, width=6.8cm, angle=+90]{IDL_sound_speed_combined_referee_Mach_needed.ps}}\n \vspace{-0.5cm}\n \caption{{Ratios between the 90\% conservative upper limits on the Mach number (velocity \/ sound speed),\n scaled by the 1\/3 power of the spatial scale assuming Kolmogorov turbulence\n (see {Sect.\,\ref{sec:turbulence}}), and the Mach number required\n to balance heating and cooling (see Eq.\,\ref{eq:mach}).\n The point size indicates $r_{500}$, and the color is coded according to the central entropy,\n $K_0$, in units of keV cm$^{2}$.\n \label{fig:Mach_vs_temp_scaled}}}\n\end{figure}\n\n\n\subsection{Comparison 
with previous results}\n\\label{sec:comparison}\n\nOur velocity limits broadly agree with the previous results obtained by \\citet{Sanders2013} \nusing a similar method and by other authors, who use the measurements\nof resonant scattering (\\citealt{Werner2009} and \\citealt{dePlaa2012}).\nIn particular, our limits for M\\,49 (also known as \\object{NGC\\,4472}), \nNGC\\,4636, and NGC\\,5813 \nagree with the $100$\\,km\\,s$^{-1}$ upper limit obtained by \\citet{Werner2009}.\nWe also found upper limits of a few $100$s\\,km\\,s$^{-1}$ for A\\,3112, which is similar\nto the results of \\citet{Bulbul2012}.\nHowever, we measured higher limits with a variable spatial broadening\nthat agree with continuum-subtracted profiles method of \\citet{Sanders2013}.\n\nRecently, \\citet{Zhuravleva2014} used the surface brightness fluctuations \nin the \\textit{Chandra} images of the Perseus and Virgo clusters to derive turbulent \nvelocities in the range 70--210\\,km\\,s$^{-1}$ for Perseus and 43--140\\,km\\,s$^{-1}$ for Virgo,\n{where the smaller values refer to the central 1.5' region.}\n{Our upper limits in the cores of the clusters are consistent with their values,\nespecially when normalized by the physical scale factor $1.5'\/0.4'$.}\nThey show that these turbulent motions should dissipate enough energy to \noffset the cooling of the central ICM in these clusters.\n{For ten objects, the scaled Mach number can be transonic, and a major fraction of energy\ncan be stored in turbulence, which could significantly heat the gas through dissipation \n(see, e.g., \\citealt{Ruszkowski2004}).\nRecently, \\citet{Gaspari2014} noted that even if the turbulence \nin the hot gas is subsonic, it may be transonic in the cooler gas phases.\n\\citet{Zhuravleva2014} reported that dissipation of turbulence may balance cooling\neven under subsonic regime.\nOur upper limits on Mach number are larger than the values necessary to balance cooling\nand are consistent with this scenario.\nHowever, it is 
possible that other processes are dominant, \nsuch as turbulent mixing (see, e.g., \\citealt{Banerjee2014}).}\n\nThe NGC\\,507 group exhibits velocities larger than $1000$\\,km\\,s$^{-1}$ in both the 0.8' and 3.4' regions,\n{corresponding to a scaled Mach number $Ma=4.2\\pm1.7$ (1$\\sigma$)}. \nThe 15\\,{\\AA} \\ion{Fe}{xvii} line is stronger than the one at 17\\,{\\AA}, which\nwould suggest low resonant scattering (see Fig.\\,\\ref{fig:rgs_fits3})\nand, therefore, high kinematics in the galaxy group.\nThis object is known to have a disturbed shape and to host radio lobes presumably\nin a transonic expansion\/inflation (\\citealt{Kraft2004}). However, our high values\nsuggest the presence of bulk motions. \n{In Fig.\\,\\ref{fig:NGC507}, we show the velocities of the galaxies in the NGC\\,507 group\nas taken from \\cite{Zhang2011}. They are not necessarily\nlinked to those of the ICM, but there are high kinematics and hints \nof infalling clumps, which indicate a substructure extended toward the observer.\nIn this group, the galaxy velocities are generally double those observed in NGC\\,4636,\nwhere we measure lower velocity broadening (see also the different line widths in Fig.\\,\\ref{fig:rgs_fits3}).}\n\n\\begin{figure}\n \\begin{center}\n \\subfigure{ \n \\includegraphics[bb=50 115 535 590, width=7.5cm, angle=-90]{vlos_dist_leallr500.ps}}\n \\vspace{-0.4cm}\n \\caption{Line-of-sight velocity versus projected distance from the central cD galaxy \n for the member galaxies of the NGC\\,507 group.\n Optical spectroscopic redshifts are taken from \\cite{Zhang2011}.}\n \\label{fig:NGC507}\n \\end{center}\n\\end{figure}\n\n\\subsection{Toward ASTRO-H}\n\\label{sec:simulations}\n\nThe RGS gratings aboard XMM-\\textit{Newton} are currently the only instruments that can\nmeasure $100$s\\,km\\,s$^{-1}$ velocities in X-ray spectra of extended sources like clusters of galaxies. 
\nHowever, they are slitless spectrometers and, therefore, affected by spatial broadening.\nWe have partly solved this issue by using line surface brightness profiles, \nbut there are still systematic uncertainties larger than 100\\,km\\,s$^{-1}$.\nOur models will provide an important workbench once the new ASTRO-H\nX-ray satellite (\\citealt{Takahashi2010}) is launched. The spectra, as provided by its microcalorimeter (SXS),\ndo not suffer from spatial broadening as the RGS does and will revolutionize the method. \nMoreover, its constant spectral resolution\nin terms of energy increases the sensitivity at high energies, which allows us \nto use higher-ionization lines up to 6-7\\,keV (Fe-K line complex) \nnecessary to constrain the turbulence in hotter gas phases. \nThe positions of the lines unveil evidence of bulk motions.\n\nIn Fig.\\,\\ref{fig:simulations} (\\textit{left panel}), we compare the effective area of the ASTRO-H SXS\nwith that of the first order RGS 1 and 2. ASTRO-H provides clearly better results than the sum of RGS\\,1 and 2 below 14\\,{\\AA} \n(above 1\\,keV). The RGS still has a better spectral resolution than the SXS \nin the wavelength range that includes the \\ion{Fe}{xvii} lines of the cool gas,\nbut the absence of spurious line-broadening in the SXS makes it a great alternative tool.\nWe have simulated a 100\\,ks exposure with the ASTRO-H SXS for four interesting objects\nin our catalog: Perseus (500\\,km\\,s$^{-1}$), NGC\\,5846 (10\\,km\\,s$^{-1}$), NGC\\,4636 (100\\,km\\,s$^{-1}$), \nand NGC\\,507 (1000\\,km\\,s$^{-1}$, see Fig.\\,\\ref{fig:simulations} \\textit{right panel}). \nWe have used the model fitted to the full (-1.7',+1.7') RGS spectra, which are shown\nin Fig.\\,\\ref{fig:rgs_fits3}, as a template, because this extraction region is \ncomparable to the 3.05'\\,$\\times$\\,3.05' field-of-view of the microcalorimeter.\nThe spatial broadening was excluded from the model. 
\nThe simulated SXS spectra are characterized by a richness of resolved emission lines, which provides\nvelocity measurements with an accuracy of 50\\,km\\,s$^{-1}$ or better.\nThe line widths clearly increase throughout NGC\\,5846, NGC\\,4636, and NGC\\,507.\nThe hotter gas present in the Perseus cluster produces strong higher-ionization lines\nabove 1\\,keV, which constrain the turbulence in different (Fe-L and Fe-K) gas phases. \n\nWe also note that the 1' spatial resolution of ASTRO-H provides,\nfor the first time the means for a spatially-resolved high-resolution spectral analysis\nand the measurements of turbulence in different regions of the clusters.\nThe ATHENA X-ray observatory that is to be launched by the late 2020s will further \nrevolutionize our measurements due to its combined high spectral (2.5\\,eV) and spatial ($<5$'') resolution.\n\n\\section{Conclusion}\n\\label{sec:conclusion}\n\nWe have presented a set of upper limits and measurements of the velocity widths \nfor the soft X-ray emitting gas of a sample of clusters, groups of galaxies, \nand elliptical galaxies included in the CHEERS project. \nWe have subtracted the instrumental spatial broadening \nthrough the use of surface brightness profiles\nextracted in the MOS\\,1 images. 
\n\nFor most sources, we obtain upper limits ranging within 200-600\\,km\\,s$^{-1}$, \n{where the turbulence may originate in AGN feedback or sloshing of the ICM.\nHowever, for some sources, such as NGC\\,507, we find upper limits of 1000\\,km\\,s$^{-1}$ \nor larger, suggesting other origins, such as mergers and bulk motions.\nThe measurements depend on the angular scale and the temperature.\nFor a small sample producing strong high- and low-ionization\nlines, we measured significantly broader upper limits for the hot gas phase,\nwhich may be partly due to its larger spatial extent as compared\nto the cool phase.\nWhen we normalize the Mach numbers for the physical scale, assuming \nKolmogorov turbulence, we constrain upper limits ranging within\n$0.3 0$), the robot must choose any potential solution to be evaluated in the real world that is predicted to keep $\\epsilon(s) > 0$, i.e. does not enter any unsafe state.\nTo lower the risk of damage to the robot, an offset $\\beta$ can be added in the computation of $\\epsilon_s$ as an increased threshold for the minimum distance towards the border of the region of unsafe states within the state space.\n\n\\begin{equation}\n\\label{eqn:epsilon_general}\n \\epsilon(s) = \\frac{dist(s, \\Omega) - \\beta}\n {\\underset{s_i}{\\max} \\, dist(s_i, \\Omega) - \\beta}\n\\end{equation}\n\nFrom the dynamics model, we can obtain the predicted next state $s'$ after the execution of a candidate behaviour and compute $\\epsilon(s')$. $s'$ corresponds to the state $s_{t+T}$ after $T$ timesteps, where $T$ is the length of one behaviour.\nGenerally, we seek the robot to stay as close as possible to the safest point(s) in the environment, i.e. 
maintain maximal distance to the region of dangerous states ($\\epsilon(s) \\approx 1$).\n\n\\subsection{Safety Constraints}\nFor every behaviour selection performed by our policy, we first employ a safety constraint to determine the safe subset $\\mathcal{C}_{safe}$ of all available candidate behaviours with respect to the robot's current state. We can use different constraints depending on our knowledge of the environment and the intended risk aversion of our exploration. In the experiment section below, we evaluate the following constraints, all of which are based on the predicted robot state $s'$ after the execution of each imagined behaviour (given the current robot state $s$):\n\\begin{itemize}[leftmargin=*]\n\\setlength\\itemsep{1em}\n \\item As a \\textit{minimal constraint} we consider only candidate behaviours with $\\epsilon(s') > 0$ to ensure we never execute a behaviour that was already expected to be unsafe. \n\n \\item Alternatively, a \\textit{contextual constraint} carries weight only if the current robot state is near the border of the region of unsafe states ($\\epsilon(s) \\approx 0$), but enables free exploration if it is far away from potential danger ($\\epsilon(s) \\approx 1$):\n \\begin{equation}\n \\label{eqn:constraint_minimal}\n \\epsilon(s') > \\epsilon(s) \\cdot (1-\\epsilon(s))\n \\end{equation}\n \n \\item If we have access to the gradient of the epsilon function, the direction of maximal improvement of safety with respect to the next state can be computed as $\\nabla_s \\epsilon(s)$. The \\textit{gradient-minimal constraint} considers only solutions moving in the general direction of the gradient. 
Based on the dot product of the unit vectors of the gradient of the epsilon function ($\\nabla_s \\epsilon(s)$) and the projected movement in state space ($s'-s$), we formulate a lower bound for deviation from the direction of the gradient as:\n \\begin{equation}\n \\label{eqn:constraint_gradient_minimal}\n \\frac{s'-s}{||s'-s||}\\cdot\\frac{\\nabla_s \\epsilon(s)}{||\\nabla_s \\epsilon(s)||} \\geq 0\n \\end{equation}\n Geometrically, this is equal to a maximum deviation of 90\u00b0 in 2D space as visualised in Figure \\ref{fig:safety_constraint} (green semicircle).\n \n \\item Again, we can modify this into a more strict \\textit{gradient-contextual constraint} by using the value of epsilon at the current state of the robot to modulate the constraint. This way, the constraint is more relaxed towards the centre of the region of safe states but only accepts small deviations from the direction of the safety gradient close to the border of the region of unsafe states:\n \\begin{equation}\n \\label{eqn:constraint_gradient_contextual}\n \\frac{s'-s}{||s'-s||}\\cdot\\frac{\\nabla_s \\epsilon(s)}{||\\nabla_s \\epsilon(s)||}\n \\geq \\epsilon(s) \\cdot (1-\\epsilon(s))\n \\end{equation}\n Geometrically, this is equal to a deviation from the gradient proportional to $\\epsilon(s)$ (see yellow region in Figure \\ref{fig:safety_constraint} for $\\epsilon(s)=0.5$).\n \n \\item Finally, safety can also be enforced not by a hard constraint, but as a component of the prioritization measures. 
This can especially be useful as a supplement to the gradient-free constraints in complex environments.\n\\end{itemize}\n\n\\begin{figure}\n\\centering\n \\includegraphics[height=0.15\\textwidth]{figures\/constraint_grad_context_legend.png}\n \\caption{Sketch of the gradient-based safety constraints in a simple circular 2D-environment.}\n \\label{fig:safety_constraint}\n \\vspace{-4mm}\n\\end{figure}\n\n\\subsection{Prioritization Metrics}\nAfter the safe subset of candidate behaviours $\\mathcal{C}_{safe}$ has been selected based on the safety constraint, the remaining candidates are ranked according to a prioritization measure as the second step in behaviour selection. This is intended to give priority to the real-world evaluation of candidate behaviours that have the highest value for the overall QD algorithm performance, as real-world samples are expensive to collect. \nFinally, the candidate with the highest prioritization score is selected. The composition of prioritization measures can be adapted depending on the task at hand. We can either use a single prioritization measure or a (weighted) sum of multiple values. In this work, we have evaluated the following measures:\n\\begin{itemize}[leftmargin=*]\n\\setlength\\itemsep{1em}\n \\item Firstly, the robot's \\textbf{safety} can be considered again as a prioritization measure through the dynamic exploration parameter $\\epsilon(s')$ as outlined above. Generally, this approach will be used in combination with another metric to enable the behaviour policy to tolerate a possible safety violation in favor of a higher score.\n \n \\item Another key measure to score a candidate behaviour is the \\textbf{dynamics model disagreement}.\n The dynamics model used in DA-QD consists of an ensemble of models to capture the epistemic uncertainty via disagreement between the predictions of the models (see Section \\ref{sec:background} and Equation \\ref{eqn:disagreement}). 
\n The epistemic uncertainty can also be interpreted and formalised as an information theoretic measure of the expected information gain \\cite{pathak2019self, sekar2020planning}. \n Maximising the model disagreement has been used as a self-supervised intrinsic reward for exploration in the deep RL literature \\cite{pathak2019self, sekar2020planning}. \n The key idea behind this measure is to prioritise policies that are most informative based on our current knowledge, which is represented via the ensemble of dynamics models (i.e. epistemic uncertainty).\n Selecting policies with high model disagreement would mean visiting states that have been less explored than others.\n As we incrementally train the dynamics model on incoming data, policies that visit previously seen states will no longer have a large model disagreement, which allows this measure to be used for exploration throughout the entire run.\n Depending on the state of the robot in the environment, we can prioritize behaviours with either high or low model disagreement. \n \n Conversely, policies with low disagreement should be prioritized in safety-critical situations. Solutions with low expected model disagreement are likely to resemble the predicted outcome, which indicates the model's confidence.\n \n \n \\item Finally, we also consider the classical metrics used to quantify behaviours in QD. This is firstly the \\textbf{novelty} of a candidate behaviour as the distance to the $k$ nearest solutions already in the archive ($\\nu_1, ..., \\nu_k$)~\\cite{lehman2011abandoning}. Similarly, we could also consider the quality of a solution through a measure such as the QD improvement~\\cite{fontaine2020covariance} or the future value of a solution through its curiosity score~\\cite{cully2017quality}. 
However, this is left for future work.}\n\\end{itemize}\n\n\\subsection{Recovery Policy}\nAs a final safeguard to keep the robot in the safe region of the environment, we introduce a recovery policy to return the robot to safety if it ever violates any of the environment's safety constraints. These constraints can be derived from the environment in various ways, e.g. as a minimum distance to obstacles represented by 'safety regions' as in this work. Should the robot leave the safe region, the discovery of new behaviours will be halted and a greedy behaviour selection policy will be employed over the archive of behaviours that were already evaluated in the environment instead of the buffer of candidate behaviours. Here, we pick the single behaviour that is projected to effect the greatest improvement in safety.\n\n\\section{Experiments}\nWe evaluate our method with an 18 DoF hexapod robot on an adapted version of the omni-directional locomotion task~\\cite{cully2013behavioral}.\nIn this task, the robot learns behaviours to walk in every direction from an initial position.\nFor the controllers, we evolve parameters of a sinusoidal control signal that is sent to each motor. This sinusoidal signal acts as a structural prior towards periodic movement for locomotion.\nAs we focus on a reset-free setting, all evaluations of new behaviours have to be done sequentially and cannot be parallelised.\nAll simulations are performed in RobotDART, building on the Dynamics Animation and Robotics Toolkit (DART) simulator \\cite{dartsim}. \nTo simulate a practical number of trials that would be performed in the real-world experiment, the number of evaluations performed in any single run of the algorithm is limited to 10,000.\n\n\\subsection{Baseline comparison}\n\\label{sec:results_baselines}\nFirstly, we evaluate the general capability of the RF-QD method.\nFor this, we compare against \"vanilla\" QD and DA-QD \\cite{lim2021dynamics} as baselines. 
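The two-step behaviour selection described in the preceding subsections (filter candidates with a safety constraint on the predicted state $s'$, then rank the safe subset by a prioritization score) can be sketched as follows. This is a minimal illustration with our own function names, assuming a circular safe region of radius 2\,m as in the flat environment used below; it is not the authors' implementation.

```python
import numpy as np

# Minimal sketch of the behaviour selection policy: a safety constraint on
# the predicted next state s' selects the safe subset, then a prioritization
# score ranks it. The safe region is a disc of radius R around the origin,
# and epsilon follows Eq. (eqn:epsilon_general) with offset beta.
R, BETA = 2.0, 0.2

def epsilon(s):
    # dist(s, Omega) = R - |s| for a disc; the maximum over the disc is R.
    return (R - np.linalg.norm(s) - BETA) / (R - BETA)

def minimal_constraint(s, s_next):     # never pick a predicted-unsafe move
    return epsilon(s_next) > 0.0

def contextual_constraint(s, s_next):  # stricter near the border
    e = epsilon(s)
    return epsilon(s_next) > e * (1.0 - e)

def select(s, candidates, constraint, score):
    """candidates: list of (behaviour, predicted next state s')."""
    safe = [(b, sn) for b, sn in candidates if constraint(s, sn)]
    if not safe:   # no safe candidate: fall back to the recovery policy
        return None
    return max(safe, key=lambda bs: score(bs[1]))[0]

s = np.array([1.5, 0.0])               # near the border, epsilon is small
cands = [("a", np.array([1.9, 0.0])),  # drifts toward the unsafe region
         ("b", np.array([0.5, 0.0]))]  # moves back toward the centre
print(select(s, cands, contextual_constraint, epsilon))  # -> b
```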
RF-QD and both baselines use the Iso-dd~\\cite{vassiliades2018discovering} variation operator.\nWe use a simple flat environment with a circular region of safety with radius $r=2.0m$.\nFigure \\ref{fig:trajectory} shows example trajectories of the baselines compared to RF-QD. The baselines' random selection of behaviours causes the robot to trail off deeply into the dangerous region, while RF-QD performs its exploration almost entirely within the safe region. The depicted RF-QD run leaves the safe region once, but then deploys the recovery policy (blue line) to return to safety.\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=0.45\\textwidth]{figures\/figure4-rfqd.png}\n \\caption{Example trajectories of DA-QD, Vanilla-QD and RF-QD in flat environment with safe region (green) and dangerous region (red).}\n \\label{fig:trajectory}\n \\vspace{-4mm}\n\\end{figure}\n\nAs the baseline methods are not made for a reset-free environment, for all further comparisons we perform manual resets to the starting position if the robot leaves the safe region by more than 50 cm. This is similar to what is done when performing QD on a real-world robot today. For the baseline comparisons, RF-QD was run with a gradient-contextual safety constraint and encouraging maximal novelty through the prioritization strategy. This configuration has proven powerful in our evaluation of different constraints and prioritization measures. Table \\ref{tab:baselines_safety} quantifies the safety of the three algorithms averaged over 10 replications of each. 
We can see that RF-QD achieves almost perfect safety: it never once requires a safety reset as described above and only rarely takes a single step outside the safe region.\n\n\\begin{table}\n \\caption{Safety metrics for all variants, averaged over 10 runs (mean \u00b1 std).}\n \\label{tab:baselines_safety}\n \\begin{tabular}{c|ccc}\n \\toprule\n Variant &Resets &Steps outside safety &Recovery steps\\\\\n \\midrule\n Vanilla-QD & 54.0 \u00b1 4.2 & 908.0 \u00b1 74.1 & n\/a \\\\\n DA-QD & 114.0 \u00b1 17.8 & 1039.5 \u00b1 51.0 & n\/a \\\\\n RF-QD & 0.0 \u00b1 0.0 & 1.0 \u00b1 2.8 & 3.5 \u00b1 9.9 \\\\\n \\bottomrule\n\\end{tabular}\n\\end{table}\n\nAdditionally, RF-QD slightly outperforms its direct baseline DA-QD in terms of both QD-score and coverage, as shown in Figure \\ref{fig:baselines_performance}. While the gap to vanilla QD is due to DA-QD's increased sample efficiency, RF-QD's behaviour selection policy does not sacrifice performance for safety, but even improves performance through its candidate prioritization strategy (i.e. novelty in this case).\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=0.45\\textwidth]{figures\/figure5-rfqd.png}\n \\caption{QD-Score and coverage of RF-QD and baselines on the circular safe area environment. 
The graphs represent the median as a coloured bold line, while the shaded area extends to the first and the third quartiles over 10 runs.}\n \\label{fig:baselines_performance}\n \\vspace{-4mm}\n\\end{figure}\n\n\n\\subsection{Comparison of Policy Configurations}\n\\label{sec:hyperparameters}\n\\begin{figure}\n\\centering\n \\includegraphics[width=0.5\\textwidth]{figures\/figure6-rfqd.png}\n \\caption{Comparison of different Behaviour Selection Policy configurations on both performance (coverage) and safety (recovery steps) on the circular safe area environment.}\n \\label{fig:policy_comparison}\n \\vspace{-4mm}\n\\end{figure}\n\nAdditionally, we evaluated the various configurations of the Behaviour Selection Policy as introduced in Section \\ref{subsec:methods-behaviour-selection}. Figure \\ref{fig:policy_comparison} gives an overview of the different combinations of safety constraints and prioritization measures. Here, the policy configurations are evaluated by performance (represented by their final coverage) and safety (represented by the number of recovery steps), both from runs of 10,000 steps over 10 replications. In short, Figure \\ref{fig:policy_comparison} shows a strong separation between the relatively unsafe minimal and contextual constraints (both gradient-free) and all remaining constraints. The strongest performance is exhibited by variants combining the novelty or disagreement maximising prioritization measures with a gradient-contextual constraint. Out of the naive gradient-free constraints, which must be used if there is no single 'safest' direction of movement (as, e.g., in more complex environments such as the one in Section \\ref{sec:results_complex}), only the soft constraint achieves safety scores and performance comparable to the gradient-based configurations. 
Which exact configuration should be chosen will, however, always depend on the task at hand.\n\n\\begin{figure*}\n\\centering\n \\includegraphics[width=1\\textwidth]{figures\/figure7-rfqd.png}\n \\caption{Complex environments with 0, 5, 10 and 15 obstacles. Top: Example trajectories of the hexapod acting under RF-QD. Middle: Example archives by RF-QD. Bottom: Example archives by DA-QD.}\n \\label{fig:environments}\n \\vspace{-2mm}\n\\end{figure*}\n\n\\subsection{Robustness to environment complexity}\n\\label{sec:results_complex}\nTo evaluate RF-QD's performance in increasingly complex environments, we exchange the previous circular environment for a closed 4x4\\,m room with a number of column-shaped obstacles. Figure \\ref{fig:environments} shows examples of such environments, including RF-QD's trajectories in them (top row). \nWe can observe that the robot acting under RF-QD keeps its distance from the obstacles, while building archives of behaviours (middle row) that are radically less affected by the environment complexity than those created by DA-QD (bottom row).\n\nIn these complex environments, we employed RF-QD with a safety-focused configuration. This uses a minimal (hard) safety constraint combined with two equally weighted prioritization measures to select behaviours that maximise safety (through $\\epsilon$) and have low model disagreement. \nAs a benchmark for QD performance, we again add a version of DA-QD that uses safety resets, now triggered on any collision with an obstacle. \nWe also keep a 'naive' version of DA-QD that is not reset upon collision (same as RF-QD). These algorithms were compared in rooms with 0 to 15 obstacles (see Figure \\ref{fig:advanced_qd_scores}). \nWhile all algorithms perform similarly well in an empty room, the naive DA-QD variant quickly drops in performance with a growing number of obstacles through a large number of collisions (which render the corresponding evaluations invalid). 
\nAt the same time, RF-QD manages to fully keep up with the upper baseline of DA-QD (using safety resets). \nWhile a more performance-focused prioritization strategy (i.e. novelty as in Section \\ref{sec:results_baselines}) for RF-QD might have increased QD-scores slightly, this would have sacrificed the safety of the robot in more challenging environments.\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=0.5\\textwidth]{figures\/figure8-rfqd.png}\n \\caption{Increasingly complex environments: QD-Scores vs number of obstacles. The graphs represent the mean as a coloured bold line, while the shaded area extends to the standard deviations over 10 runs for each environment.}\n \\label{fig:advanced_qd_scores}\n \\vspace{-4mm}\n\\end{figure}\n\n\n\\subsection{Effect of objectives in imagination} \\label{sec:results_emitters}\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=0.5\\textwidth]{figures\/rfqd-figure9.pdf}\n \\caption{Study of different optimization objectives and prioritization metric configurations. Each panel considers a different prioritisation metric. Top: Disagreement of selected behaviours by RF-QD. The bold lines and shaded areas represent the median and interquartile range over 10 replications respectively. Middle: Progression of the archive size over the number of selected behaviours for each optimization objective. Bottom: Distribution of the total number of recovery steps for each optimization objective.}\n \\label{fig:emitter_results}\n \\vspace{-4mm}\n\\end{figure}\nWe also study the effect of the type of solutions available in the candidate buffer that the behaviour selection policy chooses from.\nTo study this, we investigate the influence of different optimisation objectives for the generation of the candidate buffer during the QD in imagination. \nWhen using Iso-DD \\cite{vassiliades2018discovering}, the solutions are relatively generic and objective-agnostic, i.e., not optimised to fulfil a specific objective. 
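For reference, the Iso-dd operator of \cite{vassiliades2018discovering} perturbs a first parent isotropically and along the direction towards a second parent. The sketch below is our own illustration with placeholder $\sigma$ values, not the settings of our experiments.

```python
import numpy as np

# Sketch of the Iso-dd (isotropic + directional) variation operator: a child
# is the first parent plus an isotropic Gaussian perturbation plus a random
# perturbation along the line towards a second parent. The sigma values here
# are placeholders, not the settings used in the experiments.

def iso_dd(x1, x2, sigma_iso=0.01, sigma_line=0.2, rng=np.random):
    return (x1
            + sigma_iso * rng.standard_normal(x1.shape)        # isotropic part
            + sigma_line * rng.standard_normal() * (x2 - x1))  # directional part

rng = np.random.default_rng(0)
x1, x2 = rng.random(8), rng.random(8)  # two parent genotypes
child = iso_dd(x1, x2, rng=rng)
print(child.shape)  # -> (8,)
```

Being objective-agnostic, this operator produces the generic candidate buffers discussed here, in contrast to the targeted emitters considered next.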
\nAlternatively, we can use different types of emitters (introduced by CMA-ME \\cite{fontaine2020covariance}) to produce solutions that maximise a specific objective.\nWe perform experiments using three different optimization objectives: maximising model disagreement, minimising model disagreement, and a random direction objective as a surrogate objective for novelty. We compare these to the standard Iso-dd variations used in all our experiments as a baseline.\nWe perform an ablation of these three different objectives with their corresponding prioritization measures used in the behaviour selection policy. We report results across 10 replications.\n\nFirst, we evaluate the effect of more targeted objectives by analysing the model disagreement associated with the individuals selected by the behaviour selection policy (Figure \\ref{fig:emitter_results}). \nThe key take-away from Figure \\ref{fig:emitter_results} (top) is that the optimisation objectives used when running QD in imagination can strongly influence the behaviours that are finally selected. \nWe can see that, regardless of the prioritization metric used by the behaviour selection policy, the same overall trends are always observed:\nthe minimising disagreement optimization objective (yellow) always results in low disagreement individuals being selected by the behaviour selection policy, regardless of the prioritization metrics.\nThe same observation applies to the maximising disagreement objective (green).\nThis corresponds to our initial hypothesis that targeted optimization objectives can skew the distribution of generated solutions towards the target objective.\nThis results in a higher probability that solutions with the desired metric are selected.\n\nGiven that biased\/specialised sets of solutions can be generated in the candidate buffer using more targeted objectives, we evaluate the effect of the composition of this candidate buffer on the performance of RF-QD. 
\nFigure \\ref{fig:emitter_results} (middle and bottom) shows that the objective-agnostic Iso-DD operator outperforms all the targeted optimization objectives in terms of both coverage and safety (number of resets), across all prioritization measures used by the behaviour selection policy.\nThis is an interesting result, as one could expect the variants with aligned prioritization measures and optimization objectives to perform better.\nWe hypothesize that the buffer of candidate solutions generated by targeted objectives becomes too specialised, while the objective-agnostic Iso-DD can generate a diverse buffer of solutions to choose from.\nThis is not such a surprising observation, as Multi-Emitter MAP-Elites \\cite{cully2020multi} had previously shown that, when multiple emitter types are used simultaneously, the random emitter (based on Iso-dd) remains the most fruitful throughout the entire process compared to other objective-driven emitters.\n\n\\section{Discussion} \\label{label:discussion}\nIn this paper, we have presented RF-QD, a method to learn behavioural repertoires autonomously without resets in realistic environments. We demonstrate how an intelligent behaviour selection policy can be used with QD in imagination to learn safely and efficiently. We first test RF-QD to learn while remaining within a designated area and show that the behaviour selection policy is necessary to prevent the need for resets and to stay within the safe training area. \nWe then show how RF-QD can also operate in more complex environments with many obstacles and minimal room for error.\nOur results also show that we can acquire full repertoires despite increasing environment complexity, while the performance of the DA-QD and Vanilla QD baselines deteriorates with the increase in complexity. 
\nLastly, we conduct an ablation to investigate the effect of the type of solutions present in the candidate buffer on the performance of RF-QD.\nWe demonstrate that using targeted optimization objectives when performing QD in imagination can bias the distribution of solutions presented to the behaviour selection policy. \nOur results show that it is important to keep the diverse types of solutions in the candidate buffer over just specialised solutions biased towards a single metric.\n\nFor future work, we also hope to show RF-QD learning directly on a real world system, with no dependence on simulators. Additionally, this paper only considers safety and danger in the form of obstacle avoidance. We leave other forms dangerous scenarios and work on safety detection for future work.\n\n\n\n\n\n\n\\section{Research Methods}\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:Intro}\n\nGabor systems are fundamental objects in time-frequency analysis. Given a set $\\Lambda \\subset \\mathbb{R}^{2l}$ and a function $g \\in L^2(\\mathbb{R}^l)$, the Gabor system $G(g, \\Lambda)$ is defined as \n\\begin{eqnarray*} G(g,\\Lambda) &=& \\{ g(x-m) e^{2\\pi i n\\cdot x}\\}_{(m,n) \\in \\Lambda}. \\end{eqnarray*}\nWhen $\\Lambda$ is taken to be $\\mathbb{Z}^{2l}$, $G(g)=G(g,\\mathbb{Z}^{2l})$ is referred to as the \\emph{integer lattice Gabor system} generated by $g$. The Balian-Low theorem (BLT) and its generalizations are uncertainty principles concerning the generator $g$ of such a system in the case that $G(g,\\Lambda)$ forms a Riesz basis. \n\\begin{theorem}[BLTs]\\label{thm:BLT}\n\tLet $g\\in L^2(\\mathbb{R})$ and suppose that the Gabor system $G(g)=G(g,\\mathbb{Z}^2)$ is a Riesz basis for $L^2(\\mathbb{R})$. 
\n\t\\begin{enumerate}\n\t\t\\item[(i)] If $10$, depending only on $A$ and $B$, such that for any $N\\ge 2$ and for any $b\\in \\ell_2^d$ which generates an $A,B$-Gabor Riesz basis for $\\ell_2^d$, \n\t\\begin{eqnarray*} c_{AB}\\log(N) &\\le& \\|N\\Delta b\\|_{\\ell_2^d}^2 + \\|N \\Delta \\mathcal{F}_d(b)\\|_{\\ell_2^d}^2. \\end{eqnarray*}\n\tConversely, there exists a constant $C_{AB}$ such that for any $N\\ge2$, there exists $b\\in \\ell_2^d$ which generates and $A,B$-Gabor Riesz basis for $\\ell_2^d$ such that \n\t\\begin{eqnarray*} \\|N\\Delta b\\|_{\\ell_2^d}^2 + \\|N \\Delta \\mathcal{F}_d(b)\\|_{\\ell_2^d}^2 &\\le& C_{AB}\\log(N). \\end{eqnarray*}\n\\end{theorem}\n\n\nNitzan and Olsen also show that the continuous BLT, Theorem \\ref{thm:BLT}, follows from this discrete version and that the following \\emph{Finite Quantitative BLT} also holds.\n\\begin{theorem}[Theorem 5.3, \\cite{Nitzan}]\\label{thm:FiniteQuantBLT}\n\tLet $A,B>0$. There exists a constant $C_{AB}>0$ such that the following holds. Let $N \\ge 200\\sqrt{B\/A}$ and let $b \\in \\ell_2^d$ generate an $A,B$-Gabor Riesz basis. Then, for all positive integers $1\\le Q, R\\le (N\/16)\\sqrt{A\/B}$, we have \n\t\\begin{eqnarray*} \\frac{1}{N} \\sum_{j=NQ}^{d-1} |b(j)|^2 + \\frac{1}{N} \\sum_{k=NR}^{d-1} |\\mathcal{F}_d b(k)|^2 &\\ge& \\frac{C_{AB}}{QR}. \\end{eqnarray*}\n\\end{theorem}\n\n\n\\subsection{Extension to Several Variables}\nThe first goal of this paper is to extend Theorem \\ref{thm:FiniteBLT} and \\ref{thm:FiniteQuantBLT} to several variables, which we state below in Theorems \\ref{thm:FiniteBLTHD} and \\ref{thm:FiniteQuantBLTHD}. \n\nWe consider complex-valued sequences on $\\mathbb{Z}_d^l=\\mathbb{Z}_d \\times \\cdots \\times \\mathbb{Z}_d$ for $l\\ge 1$, and we denote the set of all such sequences as $\\ell_2^{d,l}$. 
The view of these sequences as samples of a continuous $g \\in L^2([-\\frac{N}{2},\\frac{N}{2}]^l)$, where $b(\\v{j})=g(\\v{j}\/N)$ for $\\v{j}=(j_1,...,j_l)\\in I_d^l$ leads to the normalization \n\\begin{eqnarray*} \\|b\\|_{\\ell_2^{d,l}}^2 \\ =\\ \\frac{1}{N^l} \\sum_{\\v{j} \\in \\mathbb{Z}_d^l} |b(\\v{j})|^2 \\ =\\ \\frac{1}{N^l} \\sum_{\\v{j} \\in I_d^l} |b(\\v{j})|^2. \\end{eqnarray*}\nThe discrete Fourier transform, $\\mathcal{F}_{d,l}$, on $\\ell_2^{d,l}$, is given by \n\\begin{eqnarray*} \\mathcal{F}_{d,l}(b)(\\v{k}) &=& \\frac{1}{N^l} \\sum_{\\v{j}\\in \\mathbb{Z}_d^l} b(\\v{j}) e^{-2\\pi i \\frac{\\v{j}\\cdot \\v{k}}{d}}. \\end{eqnarray*}\nUnder this normalization, $\\mathcal{F}_{d,l}$ is an isometry on $\\ell_2^{d,l}$. The Gabor system generated by $b$, $G_{d,l}(b)$ is given by \n\n\\begin{eqnarray*} G_{d,l}(b) \\ =\\ \\{ b(\\v{j}-N\\v{n}) e^{2\\pi i \\frac{\\v{j}\\cdot \\v{m}}{N}}\\}_{(\\v{n},\\v{m})\\in \\{0,...,N-1\\}^{2l}}\\ =\\ \\{ b(\\v{j}-\\v{n})e^{2\\pi i \\frac{\\v{j}\\cdot \\v{m}}{d}}\\}_{(\\v{n},\\v{m})\\in (N\\mathbb{Z}_d)^{2l}}. \\end{eqnarray*} \n\nFor any $k \\in \\{1,...,l\\}$, let $\\Delta_k: \\ell_2^{d,l}\\rightarrow \\ell_2^{d,l}$ be defined by \n\\begin{eqnarray*} \\Delta_k b (\\v{j}) &=& b(\\v{j}+\\v{e}_k)-b(\\v{j}), \\end{eqnarray*} \nwhere $\\{\\v{e}_k\\}_{k \\in \\{1,...,l\\}}$ is the standard orthonormal basis for $\\mathbb{R}^l$. Then $N \\Delta_k b$ approximates the partial derivative $\\frac{\\partial g}{\\partial x_k}$. \n\n\n\nWe have the following generalization of Theorem \\ref{thm:FiniteBLT}. \n\\begin{theorem} \\label{thm:FiniteBLTHD}\n\tFix constants $00$, we let $\\{ |j_k|\\ge t\\}$ denote the set $\\{ \\v{j} \\in I_d^l: |j_k|\\ge t\\}$. \n\n\n\n\\begin{theorem}\\label{thm:FiniteQuantBLTHD}\n\tLet $A, B>0$ and $l \\in \\mathbb{N}$. There exists a constant $C>0$ depending only on $A$, $B$, and $l$, such that the following holds. 
Let $N\ge 200\sqrt{B\/A}$ and let $b \in \ell_2^{d,l}$ generate an $A,B$-Gabor Riesz basis for $\ell_2^{d,l}$. Then, for any $1 \le k \le l$ and all integers $1\le Q, R\le (N\/16) \sqrt{A\/B}$, we have \n\t\begin{eqnarray*} \frac{1}{N^l} \sum_{|j_k|\ge \frac{NR}{2}} |b(\v{j})|^2 + \frac{1}{N^l} \sum_{|j_k|\ge \frac{NQ}{2}} |\mathcal{F}_{d,l}b (\v{j})|^2 &\ge& \frac{C}{QR}. \end{eqnarray*}\n\end{theorem}\n\n\n\n\n\subsection{Finite Nonsymmetric BLTs}\nIn Section \ref{FQBLTHDapps}, we prove nonsymmetric versions of the finite BLT. In the process, we show that both the symmetric and nonsymmetric versions of the finite BLT follow as corollaries of the finite quantitative BLT (Theorem \ref{thm:FiniteQuantBLTHD}), as long as $N$ is sufficiently large.\n\n\n\n\begin{theorem}[Nonsymmetric Finite BLT]\label{thm:NonsymFiniteBLT}\n\tLet $A,B>0$ and let $1<p,q<\infty$ satisfy $\frac{1}{p}+\frac{1}{q}=1$. There exists a constant $C>0$, depending only on $A, B, p$ and $q$, such that the following holds. Let $N \ge 200\sqrt{B\/A}$. Then, for any $1\le k \le l$ and any $b\in \ell_2^{d,l}$ which generates an $A,B$-Gabor Riesz basis for $\ell_2^{d,l}$, \n\t\begin{eqnarray*} C\log(N) &\le& \frac{1}{N^{l}} \sum_{\v{j} \in I_{d}^l} \left|\frac{j_k}{N}\right|^p |b(\v{j})|^2+ \frac{1}{N^{l}} \sum_{\v{j} \in I_{d}^l} \left|\frac{j_k}{N}\right|^q |\mathcal{F}_{d,l}b(\v{j})|^2. \end{eqnarray*} \n\end{theorem}\n\n\begin{remark}\n\tTheorem \ref{thm:NonsymFiniteBLT} gives a finite dimensional version of the nonsymmetric BLT for parameters satisfying $11$, then \n\t\t\begin{eqnarray*} \frac{C}{\tau-1} &\le& \int_{\mathbb{R}^l} |x_k|^p |g(x)|^2 dx + \int_{\mathbb{R}^l} |\xi_k|^q |\widehat{g}(\xi)|^2d\xi.\end{eqnarray*}\n\t\end{enumerate}\n\end{theorem}\n\nWhen the bound $2\le T <\infty$ is replaced by $1<\gamma \le T<\infty$, the bound $\frac{C(1-2^{\tau-1})}{(1-\tau)} T^{1-\tau}$ in part \textit{(i)} can be replaced by $\frac{C(1-\gamma^{\tau-1})}{(1-\tau)} T^{1-\tau}$.
In Section \\ref{FQBLTHDapps} we extend this theorem to the case where either $p=\\infty$ or $q=\\infty$.\n\nThe first and second inequalities in Theorem \\ref{thm:QuantBLTCorollaryNonsymmetric} quantify the growth of `localization' quantities in terms of cutoff weights of the form $\\min(|x_k|^p, T)$. The $\\log$ term in the second inequality shows a connection between the continuous BLT and its finite dimensional versions. The last inequality, on the other hand, shows that generators of Gabor Riesz bases must satisfy a Heisenberg type uncertainty principle for every $0< p \\le 2$. A similar inequality is known to hold for arbitrary $L^2(\\mathbb{R})$ functions by a result of Cowling and Price \\cite{CP}. However, for generators of Gabor Riesz bases, we have explicit estimates on the dependence of the constant on $\\tau$ and the result here is stated for higher dimensions.\n\n\n\n\n\n\n\n\n\\section{Preliminaries: The Zak Transform and Quasiperiodic Functions} \\label{xcom2}\n\nThe Zak transform is an essential tool for studying lattice Gabor systems. The discrete Zak transform $Z_{d,l}$ of $b\\in \\ell_2^{d,l}$ for $(\\v{m},\\v{n}) \\in \\mathbb{Z}_d^{2l}$ is given by\n\n\\begin{align*} Z_{d,l}(b)(\\v{m}, \\v{n}) &= \\sum_{\\v{j} \\in \\{0,...,N-1\\}^l} b(\\v{m}-N\\v{j}) e^{2\\pi i \\frac{\\v{n}\\cdot \\v{j}}{N}}=\\sum_{\\v{j} \\in N\\mathbb{Z}_d^l} b(\\v{m}-\\v{j})e^{2\\pi i \\frac{\\v{n} \\cdot \\v{j}}{d}}. \\end{align*} \n\n\n\nThe following properties show that $Z_{d,l}(b)$ encodes basis properties of $G_{d,l}(b)$, while retaining information about `smoothness' (see the remark following Proposition \\ref{prop:ZakProperties}) of $b$ and $\\mathcal{F}_{d,l}(b)$. Note that $Z_{d,l}(b)(\\v{m},\\v{n})$ is defined for $(\\v{m},\\v{n})\\in \\mathbb{Z}_d^{2l}$ and is $d$-periodic in each of its $2l$ variables. However, the Zak transform satisfies even stronger periodicity conditions. 
\nIn fact, $Z_{d,l}(b)$ is \\emph{$N$-quasiperiodic on $\\mathbb{Z}_d^{2l}$}, that is\n\\begin{eqnarray}\nZ_{d,l}(b) (\\v{m}+N\\v{e}_k, \\v{n})&=& e^{2\\pi i \\frac{n_k}{N}} Z_{d,l}(b)(\\v{m},\\v{n}), \\label{eqn:quasiperiodic}\\\\\nZ_{d,l}(b) (\\v{m}, \\v{n}+N\\v{e}_k)&=& Z_{d,l}(b)(\\v{m},\\v{n}). \\nonumber\n\\end{eqnarray}\nLet $S_N=\\{0,...,N-1\\}$. Then, the quasi-periodicity conditions above show that $Z_{d,l}(b)$ is completely determined by its values on $S_N^{2l}$. \n\nWe will use the notation $\\ell_2(S_N^{2l})$ to denote the set of sequences $W(\\v{m},\\v{n})$ defined on $S_N^{2l}$ with norm given by \n\\[ \\|W\\|_{\\ell_2(S_N^{2l})}^2 \\ =\\ \\frac{1}{N^{2l}} \\sum_{(\\v{m}, \\v{n}) \\in S_N^{2l}} |W(\\v{m},\\v{n})|^2,\\]\nwhere here we keep the variables $\\v{m}$ and $\\v{n}$ separate due to the connection with the Zak transform. The normalization is chosen so that if $W$ is a sampling of a function $h(\\v{x}, \\v{y})$ on $[0,1]^{2l}$, then $\\|W\\|_{\\ell_2(S_N^{2l})}$ approximates the $L^2([0,1]^{2l})$ norm of $h$. \n\nThe Zak transform has many other important properties, some of which we collect in the next proposition. Arguments for these facts are standard and presented in \\cite{AGT} and \\cite{Nitzan}, for instance.\n\\begin{proposition}\\label{prop:ZakProperties}\n\tLet $b \\in \\ell_2^{d,l}$. \n\t\\begin{enumerate}\n\t\t\\item[(i)] $Z_{d,l}$ is a unitary mapping from $\\ell_2^{d,l}$ onto $\\ell_2(S_N^{2l})$. \n\t\t\\item[(ii)] A sequence $b\\in \\ell_2^{d,l}$ generates an $A,B$-Gabor Riesz basis for $\\ell_2^{d,l}$ if and only if $Z_{d,l}(b)$ satisfies \n\t\t\\begin{eqnarray*} A \\ \\le\\ |Z_{d,l}(b)(\\v{m}, \\v{n}) |^2 \\ \\le\\ B, \\text{ for } (\\v{m}, \\v{n}) \\in \\mathbb{Z}_d^{2l}. \\end{eqnarray*}\n\t\t\\item[(iii)] Let $\\widehat{b}\\ =\\ \\mathcal{F}_{d,l}(b)$. Then,\n\t\t\\begin{eqnarray*} Z_{d,l}(\\widehat{b})(\\v{m},\\v{n})\\ =\\ e^{2\\pi i \\frac{\\v{m}\\cdot \\v{n}}{d}} Z_{d,l}(b)(-\\v{n},\\v{m}). 
\\end{eqnarray*}\n\t\t\\item[(iv)] For $a,b \\in \\ell_2^{d,l}$ define $(a \\ast b)(\\v{k})= \\frac{1}{N^l} \\sum_{j \\in \\mathbb{Z}_d^{l}} a(\\v{k}-\\v{j})b(\\v{j})$. Then, \n\t\t\\begin{eqnarray*} Z_{d,l}(a\\ast b)(\\v{m}, \\v{n})\\ =\\ \\frac{1}{N^l} \\sum_{\\v{j} \\in \\mathbb{Z}_d^l} b(\\v{j}) Z_{d,l}(a)(\\v{m}-\\v{j},\\v{n}) \\ =\\ (Z_{d,l}(a)\\ast_1 b) (\\v{m},\\v{n}), \\end{eqnarray*}\n\t\twhere $\\ast_1$ denotes convolution of $b$ with respect to the first set of variables of $Z_{d,l}(a)$, $\\v{m}$, keeping the second set, $\\v{n}$, fixed.\n\t\t\n\t\\end{enumerate}\n\\end{proposition}\n\n\\begin{remark} We will be interested in the `smoothness' of $b$ and $Z_{d,l}(b)$ for $b \\in \\ell_2^{d,l}$. Since these are functions on discrete sets, smoothness is not well defined, but we use the term in relation to the size of norms of certain difference operators defined on $\\ell_2^{d,l}$ and $\\ell_2(S_N^{2l})$, which mimic norms of partial derivatives of differentiable functions. \\end{remark}\n\nFor $1\\le k \\le l$ and any $N$-quasiperiodic function on $\\mathbb{Z}_d^l$, let $\\Delta_k,\\Gamma_k$ be defined as follows:\n\\begin{eqnarray*}\n\t\\Delta_k W(\\v{m},\\v{n}) &=& W(\\v{m}+\\v{e_k},\\v{n})-W(\\v{m},\\v{n}),\\\\\n\t\\Gamma_k W(\\v{m},\\v{n}) &=& W(\\v{m},\\v{n}+\\v{e_k})-W(\\v{m},\\v{n}).\n\\end{eqnarray*}\nFor $b \\in \\ell_2^{d,l}$ define $\\alpha_k(b)$ and $\\beta_k(b)$ by\n\n\\begin{eqnarray*}\n\t\\alpha_k(b)&=& \\|N\\Delta_k b\\|_{\\ell_2^{d,l}}^2 + \\|N \\Delta_k \\mathcal{F}_{d,l}(b)\\|_{\\ell_2^{d,l}}^2,\\\\\n\t\\beta_k(b)&=& \\frac{1}{N^{2l}} \\sum_{(\\v{m},\\v{n})\\in S_N^{2l}} |N\\Delta_k Z_{d,l}(b)(\\v{m},\\v{n})|^2+\\frac{1}{N^{2l}} \\sum_{(\\v{m},\\v{n})\\in S_N^{2l}} |N\\Gamma_k Z_{d,l}(b)(\\v{m},\\v{n})|^2.\n\\end{eqnarray*}\n\nThe following proposition shows that $\\alpha_k(b)$ and $\\beta_k(b)$ are essentially equivalently sized. 
Proposition 4.1 in \\cite{Nitzan} proves this for the case $l=k=1$, and it is readily checked that the proof carries over directly to the $l>1$ setting. \n\\begin{proposition}\\label{prop:alphabeta}\n\tLet $B>0$ and let $b\\in \\ell_2^{d,l}$ be such that $|Z_{d,l}(b)(\\v{m},\\v{n})|^2 \\le B$ for all $(\\v{m},\\v{n})\\in \\mathbb{Z}_d^{2l}$. Then, for all integers $N\\ge 2$ and any $1 \\le k \\le l$, we have \n\t\\begin{eqnarray*} \\frac{1}{2} \\beta_{k}(b) -8\\pi^2 B\\ \\le\\ \\alpha_{k}(b) \\ \\le\\ 2 \\beta_{k}(b) + 8 \\pi^2 B. \\end{eqnarray*}\n\\end{proposition}\nWe thus see that in order to bound $\\alpha_k(b)$ as in Theorem \\ref{thm:FiniteBLTHD}, it is sufficient to bound $\\beta_k(b)$. For $b \\in \\ell_2^d= \\ell_2^{d,1}$, let $\\beta(b)= \\beta_{1}(b)$, and let \n\\begin{eqnarray*}\n\t\\beta_{A,B}(N)=\\inf\\{ \\beta(b)\\},\n\\end{eqnarray*}\nwhere the infimum is taken over all $b \\in \\ell_2^d$ such that $b$ generates an $A,B$-Gabor Riesz basis. \n\\begin{theorem}[Theorem 4.2, \\cite{Nitzan}]\\label{thm:betabound1d}\n\tThere exist constants $00$ such that for any $N\\ge 2$, there exists a $b \\in \\ell_2^d$ such that $G_d(b)$ is an orthonormal basis for $\\ell_2^d$ and \n\t\\[ \\beta(b)\\ =\\ \\sum_{(m,n)\\in S_N^2} \\left|\\Delta Z_{d,1}(b)(m,n) \\right|^2+\\sum_{(m,n)\\in S_N^2} \\left|\\Gamma Z_{d,1}(b)(m,n) \\right|^2 \\le C \\log (N).\\]\n\t\n\tFor $\\v{j} \\in \\mathbb{Z}_d^l$, let $b_l(\\v{j})= b(j_1) b(j_2)\\cdots b(j_l)$. Then,\n\t\\[ Z_{d,l}(b_l)(\\v{m},\\v{n})\\ =\\ Z_{d,1}(b)(m_1, n_1) \\cdots Z_{d,1}(b)(m_l,n_l).\\]\n\tSince $G_d(b)$ is an orthonormal basis for $\\ell_2^d$, $Z_{d,l}(b_{l})$ is unimodular, and therefore, $G_{d,l}(b_l)$ is an orthonormal basis for $\\ell_2^{d,l}$ by Proposition \\ref{prop:ZakProperties}. 
We have, $\\beta_{1}(b_l)$ is equal to\n\t\n\t\\begin{gather*} \\frac{1}{N^{2(l-1)}} \\sum_{(\\v{m}',\\v{n}')\\in \\mathbb{Z}_N^{2(l-1)}} \\left[\\sum_{(m_1,n_1)\\in S_N^2} \\left|\\Delta Z_{d,1}(b)(m_1,n_1) \\right| ^2 + \\sum_{(m_1,n_1)\\in S_N^2} \\left|\\Gamma Z_{d,1}(b)(m_1,n_1) \\right| ^2\\right] \\\\\n \\le C\\log(N). \\end{gather*}\n\\end{proof}\n\n\nTheorem \\ref{thm:FiniteBLTHD} follows by combining Theorem \\ref{thm:betaboundhd} with Proposition \\ref{prop:alphabeta}.\n\n\n\n\n\\section{Proof of Theorem \\ref{thm:FiniteQuantBLTHD}} \\label{sec:proofFiniteQuantBLTHD}\n\n\nIn establishing a Finite Quantitative BLT for several variables, we follow a similar argument used to prove the one variable version (from \\cite{Nitzan}), but there are some necessary updates to certain parts of the proof. We include the details here for completeness. \n\nWe start with a straightforward bound on the `smoothness' of $Z_{d,l}(b\\ast \\phi)$. This observation is analogous to Lemma 2.6 of \\cite{Nitzan}. Let $\\|\\phi\\|_{\\ell_1^{d,l}} = \\frac{1}{N^l} \\sum_{\\v{j}\\in \\mathbb{Z}_d^l} |\\phi(\\v{j})|$, and for $a, b \\in \\ell_2^{d,l}$, recall that $(a \\ast b)(\\v{k})= \\frac{1}{N^l} \\sum_{\\v{j} \\in \\mathbb{Z}_d^l} a(\\v{k}-\\v{j})b(\\v{j})$. \n\\begin{lemma}\\label{lem:convboundzd}\n\tSuppose $b, \\phi \\in \\ell_2^{d,l}$ are such that $|Z_{d,l}(b)|^2 \\le B$ everywhere. 
Then, for any integer $t$, \n\t\\begin{eqnarray*}\n\t\t|Z_{d,l}(b\\ast \\phi) (\\v{m}+t\\v{e}_k,\\v{n})-Z_{d,l}(b\\ast \\phi) (\\v{m}, \\v{n})| \\le \\frac{\\sqrt{B} |t|}{N} \\|N \\Delta_k \\phi\\|_{\\ell_1^{d,l}}.\n\t\\end{eqnarray*}\n\\end{lemma}\n\n\\begin{proof}\n\tFrom Proposition \\ref{prop:ZakProperties}, we have \n\t\\begin{eqnarray*}\n\t\tZ_{d,l}(b\\ast \\phi)(\\v{m}, \\v{n}) = \\frac{1}{N^l} \\sum_{\\v{j} \\in \\mathbb{Z}_d^l} \\phi(\\v{j}) Z_{d,l}(b)(\\v{m}-\\v{j}, \\v{n})= Z_{d,l}(b) \\ast_1 \\phi(\\v{m},\\v{n}).\n\t\\end{eqnarray*}\n\tTherefore, we have \n\t\\begin{eqnarray*}\n\t\t& & |Z_{d,l}(b\\ast \\phi) (\\v{m}+t\\v{e}_k,\\v{n})-Z_{d,l}(b\\ast \\phi) (\\v{m}, \\v{n})|\\\\\n\t\t&\\le& \\sum_{s=0}^{t-1} |Z_{d,l}(b\\ast \\phi) (\\v{m}+(s+1)\\v{e}_k,\\v{n})-Z_{d,l}(b\\ast \\phi) (\\v{m}+s\\v{e}_k, \\v{n})|\\\\\n\t\t&=& \\sum_{s=0}^{t-1} \\left| \\frac{1}{N^l} \\sum_{\\v{j}\\in \\mathbb{Z}_d^l} Z_{d,l}(b)(\\v{j}, \\v{n}) [ \\phi (\\v{m}+(s+1)\\v{e}_k -\\v{j}) -\\phi (\\v{m}+s\\v{e}_k -\\v{j})] \\right|\\\\\n\t\t&\\le& \\sum_{s=0}^{t-1} \\frac{\\sqrt{B}}{N^l} \\sum_{\\v{j}\\in \\mathbb{Z}_d^l} |\\Delta_k \\phi (\\v{m}+s\\v{e}_k - \\v{j})|\\ =\\ \\frac{\\sqrt{B}}{N} t \\|N\\Delta_k \\phi\\|_{\\ell_1^{d,l}}.\n\t\\end{eqnarray*} \n\\end{proof}\n\nNext we extend the following Lemma 5.2 of \\cite{Nitzan} to higher dimensions. The adjustments to this lemma for the higher dimensional setting are minimal, however we state the one-dimensional and multi-variable versions separately for comparison.\n\n\\begin{lemma}[Lemma 5.2, \\cite{Nitzan}]\\label{conv-lemma}\nLet $A, B>0$ and $N\\geq 200 \\sqrt{B\/A}$. There exist positive constants $\\delta=\\delta(A)$ and $C=C(A,B)$ such that the following holds (with $d=N^2$). 
Let\n\\begin{itemize}\n\\item[(i)] \\quad $Q, R \\in \\mathbb{Z}$ such that $1\\leq Q,R \\leq (N\/16) \\cdot \\sqrt{A\/B}$,\n\\item[(ii)] \\quad $\\phi,\\psi \\in \\ell_2^d$ such that $\\sum_n|\\Delta\\phi(n)|\\leq 10 R$ and $\\sum_n|\\Delta\\psi(n)|\\leq 10 Q$,\n\\item[(iii)] \\quad $b\\in \\ell_2^d$ such that $A \\leq |Z_d (b)|^2 \\leq B$.\n\\end{itemize}\nThen, there exists a set $S\\subset ([0,N-1]\\cap\\mathbb{Z})^2$ of size $|S|\\geq C N^2\/ QR$\n such that all $(u,v)\\in S$ satisfy either\n\\begin{align}\\label{conv-ineq-1}\n|Z_d(b)(u,v)-Z_d(b\\ast \\phi)(u,v)|\\geq\\delta, \\qquad \\text{or} \\\\[2mm]\n\\label{conv-ineq-2}\n|Z_d( \\mathcal{F}_d b)(u,v)-Z_d(( \\mathcal{F}_d b)\\ast \\psi)(u,v)|\\geq\\delta.\n\\end{align}\n\\end{lemma}\n\n\n\\begin{lemma}\\label{lem:generatorConvSmooth}\n\tLet $A,B>0$, $1\\le k \\le l$, and $N\\ge 200\\sqrt{B\/A}$. There exist positive constants $\\delta=\\delta(A)$ and $C=C(A, B)$, such that the following holds. Let \n\t\\begin{enumerate}\n\t\t\\item[(i)] $Q, R \\in \\mathbb{Z}$ be such that $1\\le Q, R \\le \\frac{N}{16} \\sqrt{\\frac{A}{B}}$\n\t\t\\item[(ii)] $\\phi, \\psi \\in \\ell_2^{d,l}$ be such that $\\|N\\Delta_k \\phi\\|_{\\ell_1^{d,l}} \\le 10 R$ and $\\|N\\Delta_k \\psi\\|_{\\ell_1^{d,l}} \\le 10 Q$\n\t\t\\item[(iii)] $b \\in \\ell_2^{d,l}$ be such that $A\\le |Z_{d,l}(b)|^2 \\le B$. \n\t\\end{enumerate}\n\tThen, there exists a set $S\\subset ([0,N-1]\\cap \\mathbb{Z})^{2l}$ of size $|S|\\ge CN^{2l}\/Q R$ such that all $(\\v{u}, \\v{v}) \\in S$ satisfy either \n\t\\begin{eqnarray}\n\t|Z_{d,l}(b) (\\v{u}, \\v{v}) - Z_{d,l}(b\\ast \\phi) (\\v{u}, \\v{v}) |&\\ge& \\delta, \\text{ or}\\label{eqn:first} \\\\\n\t|Z_{d,l}(\\mathcal{F}_{d,l}b) (\\v{u}, \\v{v}) - Z_{d,l}((\\mathcal{F}_{d,l}b)\\ast \\psi) (\\v{u}, \\v{v}) |&\\ge& \\delta. \\label{eqn:second}\n\t\\end{eqnarray}\n\\end{lemma}\n\n\n\n\\begin{proof}\t\t\n\tWithout loss of generality, we prove this for $k=1$. 
\n\t\n\tAs in Lemma 5.2 of \\cite{Nitzan}, let $\\delta_1= 2\\sqrt{A} \\sin(\\pi ( \\frac{1}{4}-\\frac{1}{200}))$. Also, choose $K$ and $L$ to be the smallest integers satisfying \n\t\\[ \\frac{200 \\sqrt{B} R}{9\\delta_1}\\le K \\le N\\ \\ \\ \\ \\text{and}\\ \\ \\ \\ \\frac{\\sqrt{B}}{\\delta_1} \\max\\left\\{ \\frac{200 Q}{9}, 80 \\pi\\right\\} \\le L \\le N.\\]\n\t\n\tFor $s, t \\in \\mathbb{Z}$, let \n\t\\[ \\sigma_s=\\left[ \\frac{sN}{K}\\right],\\ \\ \\ \\ \\ \\text{and} \\ \\ \\ \\ \\ \\omega_t=\\left[ \\frac{tN}{L}\\right], \\]\n\tand let $\\Sigma=\\inf_s \\{ \\sigma_{s+1}-\\sigma_s\\}\\ge \\left[\\frac{N}{K}\\right] \\ge \\frac{N}{K}$, $\\Omega = \\inf_t\\{ \\omega_{t+1}-\\omega_t\\}\\ge \\frac{N}{L}$. Then, we have \n\t\\[ \\Sigma \\Omega \\ge C_1 \\frac{N^2}{QR},\\]\n\twhere $C_1$ can be chosen to be \n\t\\[ C_1= \\left[(\\frac{200 \\sqrt{B}}{9\\delta_1}+1)(\\frac{\\sqrt{B}}{\\delta_1} \\max(\\frac{200}{9},80\\pi) +1)\\right]^{-1}.\\]\n\t\n\tWe recall the following definition from \\cite{Nitzan}. For $(u,v) \\in ([0,\\Sigma-1] \\cap \\mathbb{Z}) \\times ([0,\\Omega-1]\\cap \\mathbb{Z})$, let \n\t\\[ \\text{Lat}(u,v)=\\{(u+\\sigma_s,v+\\omega_t):(s,t)\\in ([0,K-1]\\cap \\mathbb{Z})\\times ([0,L-1]\\cap \\mathbb{Z})\\},\\]\n\tand \n\t\\[ \\text{Lat}^*(u,v)=\\{(N-v-\\omega_t,u+\\sigma_s):(s,t)\\in ([0,K-1]\\cap \\mathbb{Z})\\times ([0,L-1]\\cap \\mathbb{Z})\\}.\\]\n\tNote that $\\text{Lat}(u,v)$ and $\\text{Lat}(u',v')$ are disjoint for distinct $(u,v)$ and $(u',v')$, and similarly for $\\text{Lat}^*(u,v)$. 
However, it is possible that $\\text{Lat}(u,v) \\cap \\text{Lat}^*(u',v') \\neq \\emptyset$ for some $(u,v)$ and $(u',v')$.\n\t\n\tNow similarly, for any $(\\v{m}', \\v{n}') \\in ([0,N-1]\\cap \\mathbb{Z})^{2(l-1)}$, let\n\t\\[ \n\t\\text{Lat}_{(\\v{m}',\\v{n}')}(u,v)= \\{ ((m_1,\\v{m}'),(n_1,\\v{n}')): (m_1,n_1)\\in \\text{Lat}(u,v)\\},\\] \n\tand\n\t\\begin{eqnarray*}\n\t\t\\text{Lat}^*_{(\\v{m}',\\v{n}')}(u,v)= \\{ ((n_1,N-\\v{n}'),(m_1,\\v{m}')):(n_1,m_1)\\in \\text{Lat}^*(u,v) \\}.\n\t\\end{eqnarray*}\n\tHere, by $N-\\v{n'}$ we mean $(N-n'_1, N-n'_2,..., N-n'_{l-1})$. We have that $\\text{Lat}_{(\\v{m}',\\v{n}')}(u,v) \\cap \\text{Lat}_{(\\v{m}'',\\v{n}'')}(u',v')=\\emptyset$ unless it holds that $((u,\\v{m}'),(v,\\v{n}'))=$ $ ((u',\\v{m}''),(v',\\v{n}''))$, and similar properties for $\\text{Lat}^*_{(\\v{m}',\\v{n}')}(u,v)$.\n\t\n\tNow, fix $(\\v{m}', \\v{n}') \\in ([0,N-1]\\cap \\mathbb{Z})^{2(l-1)}$, and consider \\[T(m_1, n_1)=T_{\\v{m}',\\v{n}'}(m_1,n_1)=Z_{d,l}(b)( (m_1,\\v{m}'), (n_1, \\v{n}')),\\] for $(m_1, n_1) \\in \\mathbb{Z}_d^2$. Note that $T$ is $N$-quasiperiodic on $\\mathbb{Z}_d^2$, and satisfies $A\\le |T|^2 \\le B$. \n\t\n\tFor each $(u,v)\\in([0,\\Sigma-1] \\cap \\mathbb{Z}) \\times ([0,\\Omega-1]\\cap \\mathbb{Z})$, Corollary 3.6 of \\cite{Nitzan} guarantees at least one point $(s,t) \\in ([0,K-1] \\cap \\mathbb{Z}) \\times ([0,L-1]\\cap \\mathbb{Z})$ so that either \n\t\\begin{eqnarray}\n\t&|T(u+\\sigma_{s+1}, v+\\omega_t) - T(u+\\sigma_s,v+\\omega_t)| \\ge \\delta_1, \\text{ or} \\label{eqn:third}\\\\\n\t&|T(u+\\sigma_s, v+\\omega_{t+1}) - T(u+\\sigma_s,v+\\omega_t)| \\ge \\delta_1. 
\label{eqn:fourth}\n\t\end{eqnarray}\n\nWe now make a claim which will furnish the last part of the proof of the lemma.\n\t\n\t\begin{claim}\n\t\tFor $u$, $v$, $\sigma_s$, $\omega_t$, $\v{m}'$ and $\v{n}'$ as above, \n\t\t\begin{enumerate}\n\t\t\t\item[(i)] If \eqref{eqn:third} is satisfied, then there exists $(\v{a},\v{b})\in \text{Lat}_{(\v{m}',\v{n}')}(u,v)$ so that \eqref{eqn:first} is satisfied for $\delta=\frac{\delta_1}{20}$. \n\t\t\t\item[(ii)] If \eqref{eqn:fourth} is satisfied, then there exists $(\v{a},\v{b})\in \text{Lat}^*_{(\v{m}',\v{n}')}(u,v)$ so that \eqref{eqn:second} is satisfied for $\delta= \frac{\delta_1}{40}$.\n\t\t\end{enumerate}\n\t\end{claim}\n\t\n\tBefore proving this claim, we show how to complete the proof of the lemma. For a fixed $(\v{m}', \v{n}')$ there are $\Sigma \Omega \ge C_1 \frac{N^2}{QR}$ distinct choices of $(u,v)$ to consider, and each of them falls into part (\textit{i}) or (\textit{ii}) of the claim. Let $S^1_{(\v{m}',\v{n}')}$ be the set of $(u,v)$ points which fall into category (\textit{i}), and similarly let $S^2_{(\v{m}', \v{n}')}$ be the set of $(u,v)$ points which fall into category (\textit{ii}). Then, for at least one of $i=1,2$, we must have \n\t\begin{equation}\label{eqn:Sbound} \n\t|S^i_{(\v{m}', \v{n}')} |\ \ge\ \frac{C_1N^2}{2QR}.\n\t\end{equation}\n\t\n\tNow, there are $N^{2(l-1)}$ possible choices of $(\v{m}', \v{n}')$. Let $S_1$ be the set of all $(\v{m}', \v{n}')$ such that \eqref{eqn:Sbound} is satisfied with $i=1$, and let $S_2$ be the set of all $(\v{m}', \v{n}')$ such that \eqref{eqn:Sbound} is satisfied with $i=2$. So at least one of $S_1$ or $S_2$ must contain $N^{2(l-1)}\/2$ elements.
\n\t\n\tIn the case that $S_1$ contains this many elements (the $S_2$ case is nearly identical and left to the reader), since the sets $\text{Lat}_{(\v{m}', \v{n}')}(u,v)$ are disjoint for distinct $((u,\v{m}'),(v,\v{n}'))$, we find at least $\frac{C_1N^{2l}}{4QR}= C\frac{N^{2l}}{QR}$ distinct points, all of which satisfy \eqref{eqn:first}. The lemma is thus proved, conditional on the claim above, which we now establish. \\\n\n\t\n\t\textit{Proof of Claim.}\n\tFor both parts we use properties of the Zak transform detailed in Proposition \ref{prop:ZakProperties}. First we show part \textit{i)}. Let $H(u,v)=Z_{d,l}(b\ast \phi)((u,\v{m}'),(v,\v{n}'))$. Note that Lemma \ref{lem:convboundzd} and the assumptions on $R$ and $\|N \Delta_1 \phi\|_{\ell_1^{d,l}}$ imply that for any integer $t$ satisfying $|t|\le \frac{2N}{K}$, \n\t\begin{eqnarray}\n\t|H(u+t,v)-H(u,v)|&\le& \frac{2\sqrt{B}}{K} \|N \Delta_1 \phi\|_{\ell_1^{d,l}}\ \le\ \frac{20\sqrt{B} R}{K}\ \le\ \frac{9\delta_1}{10}. \label{eqn:Hbound}\n\t\end{eqnarray}\n\tSo, if \eqref{eqn:third} is satisfied, using \eqref{eqn:Hbound}, we have\n\t\begin{eqnarray*}\n\t\t\delta_1 &\le& |T(u+\sigma_{s+1}, v+\omega_t) - T(u+\sigma_s,v+\omega_t)|\\\n\t\t&\le& |T(u+\sigma_{s+1}, v+\omega_t) - H(u+\sigma_{s+1}, v+\omega_t)| \\\n\t\t& &\ \ \ \ \ + \frac{9\delta_1}{10}+|T(u+\sigma_{s}, v+\omega_t) - H(u+\sigma_{s}, v+\omega_t)|.\n\t\end{eqnarray*}\n\tUpon rearranging terms, we find \n\t\begin{eqnarray*}\n\t\t\frac{\delta_1}{10} &\le& |T(u+\sigma_{s+1}, v+\omega_t) - H(u+\sigma_{s+1}, v+\omega_t)|+|T(u+\sigma_{s}, v+\omega_t) - H(u+\sigma_{s}, v+\omega_t)|,\n\t\end{eqnarray*}\n\twhich shows that \eqref{eqn:first} is satisfied for $\delta= \frac{\delta_1}{20}$, and for either $((u+\sigma_{s+1},\v{m}'),(v+\omega_t,\v{n}'))$ or $((u+\sigma_{s},\v{m}'),(v+\omega_t,\v{n}'))$.
If $(u+\\sigma_{s+1}, v+\\omega_t)$ is not in $\\text{Lat}(u,v)$, by the N-quasiperiodicity of $T$, we may find another point in $\\text{Lat}(u,v)$ which satisfies the same bound. \n\t\n\tNow we prove part \\textit{ii)}. Letting $\\hat{b}= \\mathcal{F}_{d,l}(b)$, we have,\n\t\\begin{eqnarray*}\n\t\t\\delta_1&\\le& |T(u+\\sigma_s, v+\\omega_{t+1}) - T(u+\\sigma_s,v+\\omega_t)| \\\\\n\t\t&=& |Z_{d,l}(b)((u+\\sigma_s, \\v{m}'),(v+\\omega_{t+1},\\v{n}')) - Z_{d,l}(b)((u+\\sigma_s, \\v{m}'),(v+\\omega_{t},\\v{n}'))|\\\\\n\t\t&=& |Z_{d,l}(\\hat{b})((-v-\\omega_{t+1},-\\v{n}'),(u+\\sigma_s, \\v{m}'))\\\\\n\t\t&\\ &\\ - e^{-2\\pi i (\\omega_{t+1}-\\omega_t)(u+\\sigma_s) \/d} Z_{d,l}(\\hat{b})((-v-\\omega_{t},-\\v{n}'),(u+\\sigma_s, \\v{m}'))|\\\\\n\t\t&=& |Z_{d,l}(\\hat{b})((N-v-\\omega_{t+1},N-\\v{n}'),(u+\\sigma_s, \\v{m}'))\\\\\n\t\t&\\ &\\ - e^{-2\\pi i (\\omega_{t+1}-\\omega_t)(u+\\sigma_s) \/d} Z_{d,l}(\\hat{b})((N-v-\\omega_{t},N-\\v{n}'),(u+\\sigma_s, \\v{m}'))|,\n\t\\end{eqnarray*}\n\twhere we have used that $Z_{d,l}(b)(\\v{m},\\v{n})= e^{2\\pi i \\v{m}\\cdot \\v{n}\/d} Z_{d,l}(\\widehat{b})(-\\v{n},\\v{m})$ in the second step, and for the last step we have used $N$-quasiperiodicity. \n\t\n\tLet $\\widetilde{T}(v,u)=Z_{d,l}(\\hat{b})((v,N-\\v{n}'),(u, \\v{m}'))$, \n\tand $\\widetilde{H}(v,u)= Z_{d,l}(\\hat{b}\\ast \\psi)((v,N-\\v{n}'),(u, \\v{m}'))$.\n\tThen, \n\t\\begin{eqnarray*}\n\t\t\\delta_1 & \\le & |\\widetilde{T}(N-v-\\omega_{t+1},u+\\sigma_s)- e^{-2\\pi i (\\omega_{t+1}-\\omega_t)(u+\\sigma_s) \/d}\\widetilde{T}(N-v-\\omega_{t},u+\\sigma_s)|\\\\\n\t\t& \\le & | \\widetilde{T}(N-v-\\omega_{t+1},u+\\sigma_s)- \\widetilde{T}(N-v-\\omega_{t},u+\\sigma_s)| +\\frac{\\delta_1}{20}. \n\t\\end{eqnarray*}\n\tCombining these, we see that \n\t\\begin{eqnarray*}\n\t\t\\frac{19}{20} \\delta_1 &\\le& | \\widetilde{T}(N-v-\\omega_{t+1},u+\\sigma_s)- \\widetilde{T}(N-v-\\omega_{t},u+\\sigma_s)|. 
\n\t\end{eqnarray*}\n\tArguing as in the first case above, and replacing $H$ by $\widetilde{H}$ and $T$ by $\widetilde{T}$, we find that either $((N-v-\omega_{t+1},N-\v{n}'),(u, \v{m}'))$ or $((N-v-\omega_{t},N-\v{n}'),(u, \v{m}'))$ satisfies \eqref{eqn:second}, with $\delta= \frac{\delta_1}{40}$. Again, using quasi-periodicity, we can guarantee that there is a point in $\text{Lat}^*_{(\v{m}',\v{n}')}(u,v)$ satisfying \eqref{eqn:second}. \n\t\n\end{proof} \vspace{2 mm}\n\n\n\nFinally, we follow the construction of \cite{Nitzan} to create the functions $\phi$ and $\psi$ appearing in the previous lemma (Lemma \ref{lem:generatorConvSmooth}), which in turn are used to prove Theorem \ref{thm:FiniteQuantBLTHD}. Let $\rho: \mathbb{R}\rightarrow \mathbb{R}$ be the inverse Fourier transform of \n\[\hat{\rho}(\xi)= \begin{dcases} \hspace{15 mm} 1,\hspace{10 mm} |\xi|\leq 1\/2 \\ 2(1-\xi\, \mathrm{sgn}(\xi)),\hspace{5 mm} 1\/2\leq |\xi|\leq 1 \\ \hspace{15 mm} 0,\hspace{10 mm} |\xi|\geq 1\end{dcases}. \] \n\nFor $f \in L^2(\mathbb{R})$ satisfying $\sup_{t \in \mathbb{R}} |t^2 f(t)|<\infty$ and $\sup_{\xi \in \mathbb{R}} |\xi^2 \widehat{f}(\xi)|<\infty$, let \n\[ P_N f(t)\ =\ \sum_{k=-\infty}^\infty f(t+kN)\]\nand for an $N$-periodic continuous function $h$, let \n\[ S_N h\ =\ \{ h(j\/N)\}_{j=0}^{d-1}.\]\n\n\nLet $\rho_R(t)=R\rho(Rt)$. Fix $1\le k \le l$, and for $\v{j} \in I_d^l$ define the vector $\v{j}'=(j_1,..., j_{k-1},j_{k+1},...,j_l) \in I_{d}^{l-1}$, and let \n\[ \phi_{R,k}(\v{j}) \ =\ N^{l-1} \delta_{\v{j}', \v{0}} \left(S_N P_N \rho_{R} (j_k) \right).\]\nNow $\phi_{R,k} (\v{j})$ is equal to $ \left(S_N P_N \rho_{R} (j_k) \right)$ when $j_i=0$ for each $i \neq k$, and is zero otherwise. \n\begin{lemma}\n\tLet $\phi_{R,k}$ be as above for a positive integer $R$.
Then,\n\t\[ \|N \Delta_k \phi_{R,k}\|_{\ell_1^{d,l}} \ \le\ 10 R.\]\n\end{lemma}\n\begin{proof}\n\tWe have \n\t\begin{eqnarray*}\n\t\t\|N \Delta_k \phi_{R,k}\|_{\ell_1^{d,l}}&=& \frac{1}{N^l} \sum_{\v{j} \in I_d^l} N| \Delta_k \phi_{R,k} (\v{j})| \ =\ \sum_{j_k \in I_d} |\Delta S_N P_N \rho_{R}(j_k)|.\n\t\end{eqnarray*}\n\tLemma 2.10 and Lemma 5.1 of \cite{Nitzan} show that the right-hand side is bounded by $10R$. \n\end{proof}\n\nWe now have sufficient tools to prove the Finite Quantitative BLT, Theorem \ref{thm:FiniteQuantBLTHD}.\n\n\begin{proof}[Theorem \ref{thm:FiniteQuantBLTHD}]\n\tFor simplicity we show the result for $k=1$. \n\tLet $R$ and $Q$ be integers such that $1 \le R, Q \le (N\/16)\sqrt{A\/B}$. Let $\phi= \phi_{R,1}$ and $\psi=\phi_{Q,1}$, and note that the preceding lemma shows that \n\t\[ \|N\Delta_1\phi \|_{\ell_1^{d,l}} \le 10 R, \text{ and } \|N\Delta_1\psi \|_{\ell_1^{d,l}} \le 10 Q.\]\n\tProposition 2.8 of \cite{Nitzan}, and the fact that $\mathcal{F}_d(N\delta_{j,0})(k)=1$ for all $k \in I_d$, show that \n\t\begin{align}\n\t\mathcal{F}_{d,l} (\phi)(\v{k}) &= \mathcal{F}_d(S_N P_N \rho_{R})(k_1) \nonumber \\\n\t\t& = (S_N P_N \mathcal{F}(\rho_{R})) (k_1) = (S_N P_N \widehat{\rho}(\cdot\/R)) (k_1),\label{eqn:phifunction}\n\t\end{align}\n\tand since $R1$}. Finally, in this case\n\n\t\begin{eqnarray*}\n\t\t\frac{C}{\tau-1}&=&C\int_1^\infty S^{-\tau} dS\\\n\t\t&\le& \int_{\mathbb{R}^{l-1}} \int_0^\infty \int\limits_{|x_1|\ge S^{1\/p}} |g(x_1,x')|^2 dx_1 dS dx'+ \int_{\mathbb{R}^{l-1}} \int_0^\infty \int\limits_{|\xi_1|\ge S^{1\/q}} |\widehat{g}(\xi_1,\xi')|^2 d\xi_1 dS d\xi'\\\n\t\t&=& \int_{\mathbb{R}^{l}} |x_1|^p |g(x)|^2 dx+ \int_{\mathbb{R}^{l}} |\xi_1|^q |\widehat{g}(\xi)|^2 d\xi.\n\t\end{eqnarray*}\n\n\end{proof}\n\nThe following result generalizes part (ii) of Theorem \ref{thm:BLT}.
\n\begin{theorem}\label{thm:compactFunctionQuantCorr}\n\tSuppose $1\le p < \infty$, and $g \in L^2(\mathbb{R}^l)$ is such that $G(g)= \{e^{2\pi i n \cdot x} g(x-m)\}_{(m,n) \in \mathbb{Z}^{2l}}$ is a Riesz basis for $L^2(\mathbb{R}^l)$ and $g$ is supported in $(-M,M)^l$. Then, there exists a constant $C$ depending only on the Riesz basis bounds of $G(g)$ such that for any $1\le k \le l$ and any $2 \le T \le \infty$ each of the following holds.\n\t\begin{enumerate}\n\t\t\item[(i)] If $p>1$, then \n\t\t\[ \frac{C(1-2^{1\/p-1})}{M(1-1\/p)}\ \le\ \int_{\mathbb{R}^l} \min(|\xi_k|^p, T) |\widehat{g}(\xi)|^2 d\xi.\]\n\t\t\item[(ii)] If $p = 1$, then \n\t\t\[ \frac{C\log(T)}{M}\ \le\ \int_{\mathbb{R}^l} \min(|\xi_k|, T) |\widehat{g}(\xi)|^2 d\xi.\]\n\t\t\item[(iii)] If $p<1$, then \n\t\t\[ \frac{C}{M(1\/p-1)}\ \le\ \int_{\mathbb{R}^l} |\xi_k|^p |\widehat{g}(\xi)|^2 d\xi.\]\n\t\end{enumerate}\n\tThis result also holds when $g$ and $\widehat{g}$ are interchanged.\n\end{theorem}\nThe proof is nearly identical to that of Theorem \ref{thm:QuantBLTCorollaryNonsymmetric}, after noticing that by applying the quantitative BLT with $R=M$, the integral related to $|g(x)|^2$ is zero due to the support assumption. Note that letting $T \rightarrow \infty$ in part (ii) gives part (ii) of Theorem \ref{thm:BLTHD}.\n\n\nFinally, we focus on the finite nonsymmetric BLT.
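Before turning to the finite nonsymmetric BLT, the discrete Zak transform of Section \ref{xcom2} can also be checked numerically; the following informal Python sketch (illustrative only, with hypothetical function names, and not part of the paper's formal development) verifies the quasiperiodicity relations \eqref{eqn:quasiperiodic} and the norm identity of Proposition \ref{prop:ZakProperties}(i) for $l=1$.

```python
import numpy as np

# Informal check of the discrete Zak transform for l = 1 (d = N^2):
#   Z_d(b)(m, n) = sum_{j=0}^{N-1} b(m - N j) e^{2 pi i n j / N},
# which is N-quasiperiodic in m, N-periodic in n, and norm-preserving from
# \ell_2^d (norm (1/N) sum |b|^2) onto \ell_2(S_N^2)
# (norm (1/N^2) sum over S_N^2 of |Z|^2).

def zak(b, N):
    d = N * N
    j = np.arange(N)
    Z = np.empty((d, d), dtype=complex)
    for m in range(d):
        samples = b[(m - N * j) % d]  # b is treated as d-periodic
        for n in range(d):
            Z[m, n] = np.sum(samples * np.exp(2j * np.pi * n * j / N))
    return Z

N = 4
d = N * N
rng = np.random.default_rng(1)
b = rng.standard_normal(d) + 1j * rng.standard_normal(d)
Z = zak(b, N)

m, n = 3, 5
# Quasiperiodicity: Z(m + N, n) = e^{2 pi i n / N} Z(m, n)
assert np.isclose(Z[(m + N) % d, n], np.exp(2j * np.pi * n / N) * Z[m, n])
# Periodicity in n: Z(m, n + N) = Z(m, n)
assert np.isclose(Z[m, (n + N) % d], Z[m, n])
# Norm identity: (1/N^2) sum over S_N^2 of |Z|^2 = (1/N) sum |b|^2
assert np.isclose(np.sum(np.abs(Z[:N, :N]) ** 2) / N**2,
                  np.sum(np.abs(b) ** 2) / N)
```

The norm identity reflects the fact that, for each fixed $m$, $n\mapsto Z_{d,1}(b)(m,n)$ on $S_N$ is an $N$-point inverse DFT of the samples $\{b(m-Nj)\}_{j=0}^{N-1}$.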
For $1\le p,q<\infty$ and $b \in \ell_2^{d,l}$, let \n\[ \alpha_k^{p,q}(b)\ =\ \frac{1}{N^{l}} \sum_{\v{j} \in I_{d}^l} \left|\frac{j_k}{N}\right|^p |b(\v{j})|^2+ \frac{1}{N^{l}} \sum_{\v{j} \in I_{d}^l} \left|\frac{j_k}{N}\right|^q |\mathcal{F}_{d,l}b(\v{j})|^2.\] To give a finite dimensional analog of part (ii) of Theorem \ref{thm:BLT}, it will be convenient to define $\alpha^{p,\infty}_{k}(b)$ and $\alpha^{\infty, q}_k(b)$ as\n\[\alpha^{p,\infty}_{k}(b)\ =\ \frac{1}{N^{l}} \sum_{\v{j} \in I_{d}^l} \left|\frac{j_k}{N}\right|^p |b(\v{j})|^2, \ \ \alpha^{\infty,q}_k(b)\ =\ \frac{1}{N^{l}} \sum_{\v{j} \in I_{d}^{l}} \left|\frac{j_k}{N}\right|^q |\mathcal{F}_{d,l}b(\v{j})|^2.\]\n\n\n\begin{theorem}\label{thm:NonsymFiniteBLTwithInfinity}\n\tLet $A,B>0$, let $1\le p,q< \infty$, and let $\tau=\frac{1}{p}+\frac{1}{q}$. Assume $b\in \ell_2^{d,l}$ generates an $A, B$-Gabor Riesz basis for $\ell_2^{d,l}$. There exists a constant $C>0$, depending only on $A, B, p$ and $q$, such that the following holds. Let $N \ge 200\sqrt{B\/A}$ and let $1\le k \le l$. \n\t\begin{enumerate}\n\t\t\item[(i)] If $\tau=\frac{1}{p}+\frac{1}{q}<1$,\n\t\t\begin{eqnarray*} C \frac{N^{1-\tau}}{1-\tau} &\le& \alpha^{p,q}_{k}(b).\end{eqnarray*}\n\t\t\item[(ii)] If $\tau=\frac{1}{p}+\frac{1}{q}=1$, \n\t\t\begin{eqnarray*} C \log(N) &\le& \alpha^{p,q}_{k}(b).\end{eqnarray*}\n\t\t\item[(iii)] If $\tau=\frac{1}{p}+\frac{1}{q}>1$,\n\t\t\begin{eqnarray*} C\frac{1-(200\/16)^{1-\tau}}{\tau-1} &\le& \alpha^{p,q}_{k}(b).\end{eqnarray*}\n\t\end{enumerate}\n\tAlso, if $\mathcal{F}_{d,l}(b)$ is supported in the set $(-\gamma_N N\/2,\gamma_N N\/2)\cap \mathbb{Z}$ where $\gamma_N= \lfloor (N\/16)\sqrt{A\/B}\rfloor$, then parts (i), (ii), and (iii) hold with $\tau=\frac{1}{p}$ and $\alpha^{p,q}_k(b)$ replaced by $\alpha^{p,\infty}_k(b)$.
Similarly, if $b$ is supported in the set $(-\\gamma_N N\/2,\\gamma_N N\/2)\\cap \\mathbb{Z}$ then parts (i), (ii), and (iii) hold with $\\tau=\\frac{1}{q}$ and $\\alpha^{p,q}_k(b)$ replaced by $\\alpha^{\\infty, q}_k(b)$. \n\\end{theorem}\n\n\nWe start with a lemma giving a bound on a typical sum arising in the proof that follows. Similar to above, $\\{ b>|j_k| \\ge a\\}$ will be used to denote $\\{ \\v{j} \\in I_d^l: b>|j_k| \\ge a\\}$. \n\\begin{lemma}\\label{lem:BasicSumBounds}\n\tLet $1\\le \\nu<\\infty$, $N>200 \\nu$, $c=1\/(16\\nu)$, and $\\gamma_N=\\lfloor c N\\rfloor$. If $0<\\alpha\\le 1$, then for any $b \\in \\ell_2^{d,l}$, we have\n\t\\[ \\sum_{S=1}^{\\gamma_N} \\sum_{|j_k|\\ge NS^\\alpha\/2} |b(\\v{j})|^2 \\ \\le\\ 2^{1\/\\alpha} \\sum_{\\v{j} \\in I_d^l} \\left|\\frac{j_k}{N}\\right|^{1\/\\alpha} |b(\\v{j})|^2.\\]\n\\end{lemma}\nNote that we will apply this lemma with $\\nu= \\sqrt{B\/A}$ where $A$ and $B$ are Riesz basis bounds of $G_{d,l}(b)$ for some $b \\in \\ell_2^{d,l}$. However, this lemma holds regardless of whether $G_{d,l}(b)$ is a basis for $\\ell_2^{d,l}$. \n\\begin{proof}\n\tRearranging terms, we have\n\n\t\\begin{eqnarray}\n\t\\sum_{S=1}^{\\gamma_N} \\sum_{|j_k|\\ge NS^\\alpha\/2} |b(\\v{j})|^2 = \\sum_{m=1}^{\\gamma_N-1} m \\sum_{\\frac{N(m+1)^\\alpha}{2}> |j_k|\\ge \\frac{Nm^\\alpha}{2}} |b(\\v{j})|^2 + \\gamma_N \\sum_{|j_k|\\ge \\frac{N \\cdot \\gamma_N^\\alpha}{2}} |b(\\v{j})|^2. 
\\label{eqn:CantComeUpWithGoodName}\n\t\\end{eqnarray}\n\n\tNote that for some $m$, if $j_k$ satisfies $|j_k|\\ge \\frac{Nm^\\alpha}{2}$, then $m \\le 2^{1\/\\alpha} \\left| \\frac{j_k}{N}\\right|^{1\/\\alpha}$.\n\tThen, from \\eqref{eqn:CantComeUpWithGoodName}, we find\n\t\\begin{eqnarray*}\n\t\t\\sum_{S=1}^{\\gamma_N} \\sum_{|j_k|\\ge NS^\\alpha\/2} |b(\\v{j})|^2 &\\le& 2^{1\/\\alpha}\\sum_{m=1}^{\\gamma_N-1} \\sum_{\\frac{N(m+1)^\\alpha}{2}> |j_k|\\ge \\frac{Nm^\\alpha}{2}} \\left| \\frac{j_k}{N}\\right|^{1\/\\alpha}|b(\\v{j})|^2 \\\\ &\\ +& 2^{1\/\\alpha} \\sum_{|j_k|\\ge \\frac{N \\gamma_N^\\alpha}{2}} \\left| \\frac{j_k}{N}\\right|^{1\/\\alpha}|b(\\v{j})|^2\\\\\n\t\t&\\le& 2^{1\/\\alpha}\\sum_{\\v{j} \\in I_d^l} \\left |\\frac{j_k}{N}\\right|^{1\/\\alpha} |b(\\v{j})|^2.\n\t\\end{eqnarray*} \n\\end{proof}\n\n\n\n\n\n\\begin{proof}[Theorem \\ref{thm:NonsymFiniteBLTwithInfinity}]\n\tWe prove the result for $k=1$. We treat the case where $p$ and $q$ are both finite and the case where one of these is infinite separately. Below, we take $\\tau=\\frac{1}{p}+\\frac{1}{q}$. \n\t\n\t\\textbf{Case 1: $1\\le p, q <\\infty$}. Let $S$ be an integer satisfying $1\\le S \\le \\gamma_N$ where $\\gamma_N = \\lfloor (N\/16)\\sqrt{A\/B}\\rfloor$, and $R= \\lceil S^{1\/p}\\rceil$, $Q=\\lceil S^{1\/q}\\rceil$ if $11\n\t\\end{cases},\n\t\\end{equation}\n\twhere the constants $C_{\\tau,A,B}$ depend only on $\\tau$, $A$, and $B$. \n\t\n\t\n\t\n\t\\textbf{Case 2: One of $p$ or $q$ is $\\infty$}. We can assume without loss of generality that $q=\\infty$ and $1\\le p<\\infty$. With this in mind, assume $b$ generates an $A,B$-Gabor Riesz basis for $\\ell_2^{d,l}$, and further suppose $\\mathcal{F}_{d,l}(b)$ is supported in the set $(-\\gamma_N N\/2, \\gamma_N N\/2)\\cap \\mathbb{Z}$. 
Then, Theorem \\ref{thm:FiniteQuantBLTHD} applied with $Q=\\gamma_N$, gives \n\t\\[ \\frac{C}{R\\gamma_N} \\ \\le\\ \\frac{1}{N^l} \\sum_{|j_k|\\ge \\frac{NR}{2}} |b(\\v{j})|^2,\\]\n\twhere the second sum does not appear due to the support condition on $\\mathcal{F}_{d,l}(b)$. As in part (i), let $1 \\le S \\le \\gamma_N$ and $R=\\lceil S^{1\/\\alpha}\\rceil$ if $1 0$, $b_1 > 0$, and $W_t\\sim i.i.d.~N(0,1)$. In other words, a potentially qualified transformation related to the GARCH(1,1) or ARCH($\\infty$) model can be exhibited as:\n\\begin{equation}\n W_t = \\frac{Y_t}{\\sqrt{a + a_1Y_{t-1}^2 + b_1\\sigma_{t-1}^2}} \\label{eq:3.2}\n\\end{equation}\nHowever, recall the core insight of the NoVaS method is connecting the original data with the transformed data by a qualified transformation function. A primary problem is desired to be solved is that the right-hand side of \\cref{eq:3.2} contains other terms rather than only $\\{Y_t\\}$ terms. Thus, more manipulations are required to build the GA-NoVaS method. 
Taking \\cref{4e1} as the starting point, we first write out the expressions for $\\sigma_{t-1}^2,\\sigma_{t-2}^2,\\cdots$ as follows:\n\\begin{equation}\n \\begin{split}\n \\sigma_{t-1}^2 &= a + a_1Y_{t-2}^2 + b_1\\sigma_{t-2}^2\\\\\n \\sigma_{t-2}^2 &= a + a_1Y_{t-3}^2 + b_1\\sigma_{t-3}^2\\\\\n \\vdots& \\label{4e2}\n \\end{split}\n\\end{equation}\nPlugging the expressions in \\cref{4e2} into \\cref{4e1}, we obtain the following sequence of equations:\n\\begin{equation}\n \\begin{split}\n Y_t &= W_t\\sqrt{a + a_1Y_{t-1}^2 + b_1\\sigma_{t-1}^2}\\\\ \n &= W_t\\sqrt{a + a_1Y_{t-1}^2 + b_1(a + a_1Y_{t-2}^2 + b_1\\sigma_{t-2}^2)}\\\\\n &= W_t\\sqrt{a + a_1Y_{t-1}^2 + b_1a + b_1a_1Y_{t-2}^2 + b_1^2(a + a_1Y_{t-3}^2 + b_1\\sigma_{t-3}^2)}\\\\\n &\\vdots \\label{4e3}\n \\end{split}\n\\end{equation}\nIterating the process in \\cref{4e3}, under the stationarity requirement $a_1+b_1<1$, the limiting form of $Y_t$ can be written as \\cref{4e4}:\n\\begin{equation}\n Y_t =W_t\\sqrt{ \\sum_{i = 1}^{\\infty}a_1b_1^{i-1}Y_{t-i}^2 + \\sum_{j=0}^{\\infty}ab_1^j} = W_t\\sqrt{ \\sum_{i = 1}^{\\infty}a_1b_1^{i-1}Y_{t-i}^2 + \\frac{a}{1-b_1}} \\label{4e4}\n\\end{equation}\nWe can rewrite \\cref{4e4} to get a potential transformation function $H_n$ corresponding to the GA-NoVaS method:\n\\begin{equation}\n W_t = \\frac{Y_t}{\\sqrt{ \\sum_{i = 1}^{\\infty}a_1b_1^{i-1}Y_{t-i}^2 + \\frac{a}{1-b_1}}} \\label{4e5}\n\\end{equation}\nRecalling the adjustment made in the existing GE-NoVaS method, the difference between \\cref{3.2e2,3.2e3} amounts to the term $a$ being replaced by $\\alpha s_{t-1}^2 + \\beta Y_t^2$. 
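The ARCH($\infty$) expansion above can be checked numerically. The following sketch (illustrative parameter values, not taken from the paper) simulates a GARCH(1,1) path and verifies that the recursive conditional variance agrees with the truncated expansion $a/(1-b_1) + \sum_{i=1}^{q} a_1 b_1^{i-1} Y_{t-i}^2$, whose truncation error decays like $b_1^q$:

```python
import numpy as np

rng = np.random.default_rng(0)
a, a1, b1 = 0.1, 0.1, 0.8      # illustrative GARCH(1,1) coefficients with a1 + b1 < 1
n, q = 2000, 200               # sample size and truncation lag
W = rng.standard_normal(n)

# Simulate Y_t = W_t * sigma_t with sigma_t^2 = a + a1 * Y_{t-1}^2 + b1 * sigma_{t-1}^2.
Y = np.zeros(n)
sig2 = np.zeros(n)
sig2[0] = a / (1 - a1 - b1)    # start at the unconditional variance
Y[0] = W[0] * np.sqrt(sig2[0])
for t in range(1, n):
    sig2[t] = a + a1 * Y[t - 1] ** 2 + b1 * sig2[t - 1]
    Y[t] = W[t] * np.sqrt(sig2[t])

# Truncated ARCH(infinity) form of sigma_t^2; the truncation error is O(b1^q).
t = n - 1
arch_inf = a / (1 - b1) + sum(a1 * b1 ** (i - 1) * Y[t - i] ** 2 for i in range(1, q + 1))
assert abs(arch_inf - sig2[t]) < 1e-8
```

With $b_1 = 0.8$ and $q = 200$ lags, the discarded tail is of order $0.8^{200}$, so the two variance expressions coincide to machine precision.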
Applying the same adjustment to \\cref{4e5} changes the equation to the following form:\n\\begin{equation}\n W_t = \\frac{Y_t}{\\sqrt{ \\frac{\\beta Y_t^2 + \\alpha s_{t-1}^2}{1-b_1}+ \\sum_{i = 1}^{\\infty}a_1b_1^{i-1}Y_{t-i}^2 }} = \\frac{Y_t}{\\sqrt{ \\frac{\\beta Y_t^2}{1-b_1}+ \\frac{\\alpha s_{t-1}^2}{1-b_1} + \\sum_{i = 1}^{\\infty}a_1b_1^{i-1}Y_{t-i}^2 }} \\label{4e6}\n\\end{equation}\n In \\cref{4e6}, since $\\alpha\/(1-b_1)$ is also required to take a small positive value, this term can be seen as an $\\Tilde{\\alpha}$ ($\\Tilde{\\alpha} \\geq 0$) that plays the same role as $\\alpha$ in the existing GE-NoVaS method. Thus, we can simplify $\\alpha s_{t-1}^2\/(1-b_1)$ to $\\Tilde{\\alpha} s_{t-1}^2$. To keep the same notation as the GE-NoVaS method, we use $\\alpha s_{t-1}^2$ to represent $\\alpha s_{t-1}^2\/(1-b_1)$. Then \\cref{4e6} can be represented as:\n \\begin{equation}\n W_t = \\frac{Y_t}{\\sqrt{ \\frac{\\beta Y_t^2}{1-b_1}+ \\alpha s_{t-1}^2 + \\sum_{i = 1}^{\\infty}a_1b_1^{i-1}Y_{t-i}^2 }} \\label{4e7}\n\\end{equation}\n To obtain a qualified GA-NoVaS transformation, we still need to make the transformation function \\cref{4e7} satisfy the requirements of the Model-free Prediction Principle. Recall that in the existing GE-NoVaS method, $\\alpha + \\beta + \\sum_{i=1}^pa_i$ in \\cref{3.2e3} is restricted to be 1 to meet the variance-stabilizing requirement, and the optimal combination of $\\alpha,\\beta, a_1,\\cdots,a_p$ is selected to make the empirical distribution of $\\{W_t\\}$ as close to the standard normal distribution as possible (i.e., minimizing $\\abs{KURT(W_t)-3}$). 
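The normalization criterion just described, minimizing $\abs{KURT(W_t)-3}$, can be sketched as a simple objective function. Using the plain fourth standardized central moment as the sample kurtosis is an implementation choice on our part; the excerpt does not fix a particular estimator:

```python
import numpy as np

def kurt(w):
    """Sample kurtosis: the fourth standardized central moment (Gaussian value is 3)."""
    w = np.asarray(w, dtype=float)
    d = w - w.mean()
    return (d ** 4).mean() / (d ** 2).mean() ** 2

def novas_objective(W):
    # The quantity |KURT(W_t) - 3| that the NoVaS coefficient search minimizes.
    return abs(kurt(W) - 3.0)

rng = np.random.default_rng(0)
assert novas_objective(rng.standard_normal(200_000)) < 0.1  # near zero for Gaussian data
```

Coefficient combinations would then be ranked by this objective: the closer a candidate transformation pushes the empirical kurtosis of $\{W_t\}$ to 3, the better.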
Similarly, for getting a qualified $H_n$ from \\cref{4e7}, we require:\n \\begin{equation}\n \\frac{\\beta}{1-b_1} +\\alpha + \\sum_{i=1}^{\\infty}a_1b_1^{i-1} = 1 \\label{4e8}\n \\end{equation}\n Under this requirement, since $a_1$ and $b_1$ are both less than 1, $a_1b_1^{i-1}$ will converge to 0 as $i$ converges to $\\infty$, i.e., $a_1b_1^{i-1}$ is neglectable when $i$ takes large values. So it is reasonable to replace $\\sum_{i=1}^{\\infty}a_1b_1^{i-1}$ in \\cref{4e8} by $\\sum_{i=1}^{q}a_1b_1^{i-1}$, where $q$ takes a large value. Then a truncated form of \\cref{4e7} can be written as \\cref{4e9}:\n\\begin{equation}\n W_t = \\frac{Y_t}{\\sqrt{ \\frac{\\beta Y_t^2}{1-b_1}+ \\alpha s_{t-1}^2 + \\sum_{i = 1}^{q}a_1b_1^{i-1}Y_{t-i}^2 }}~;~\\text{for}~ t=q+1,\\cdots,n. \\label{4e9}\n\\end{equation}\nNow, we take \\cref{4e9} as a potential function $H_n$. Then, the requirement of variance-stabilizing is changed to:\n\\begin{equation}\n \\frac{\\beta}{1-b_1} +\\alpha + \\sum_{i=1}^{q}a_1b_1^{i-1} = 1\\label{4e10}\n\\end{equation}\n\\\\\nAkin to \\cref{3.2e6}, we scale $\\{\\frac{\\beta}{1-b_1},a_1,a_1b_1$ $,a_1b_1^{2},$ $\\cdots,a_1b_1^{q-1} \\}$ of \\cref{4e10} by timing a scalar $\\frac{1-\\alpha}{\\frac{\\beta}{1-b_1} + \\sum_{i=1}^{q}a_1b_1^{i-1}}$, and then search optimal coefficients. For presenting \\cref{4e9} with scaling coefficients in a concise form, we use $\\{c_0,c_1,\\cdots,c_q\\}$ to represent $\\{\\frac{\\beta}{1-b_1},a_1,a_1b_1$ $,a_1b_1^{2},$ $\\cdots,a_1b_1^{q-1} \\}$ after scaling, which implies that we can rewrite \\cref{4e9} as:\n\\begin{equation}\n W_t = \\frac{Y_t}{\\sqrt{ c_0Y_t^2+ \\alpha s_{t-1}^2 + \\sum_{i = 1}^{q}c_iY_{t-i}^2 }}~;~\\text{for}~ t=q+1,\\cdots,n. \\label{4e9v}\n\\end{equation}\n\\begin{remark}(The difference between GA-NoVaS and GE-NoVaS methods)\nCompared with the existing GE-NoVaS method, we should notice that the GA-NoVaS method possesses a totally different transformation structure. 
Recall that all coefficients of the GE-NoVaS method except $\\alpha$ are expressed as $\\beta = c', a_i = c'e^{-ci}~$ $\\text{for all}~1\\leq$ $i\\leq p$, $c' = \\frac{1-\\alpha}{\\sum_{j=0}^pe^{-cj}}$. There are only two free parameters $c$ and $\\alpha$. However, there are four free parameters $\\beta, a_1, b_1$ and $\\alpha$ in \\cref{4e9}. For example, the coefficient of $Y_t^2$ of the GE-NoVaS method is $(1-\\alpha)\/(\\sum_{j=0}^pe^{-cj})$. On the other hand, the corresponding coefficient in the GA-NoVaS structure is $c_0 = \\beta(1-\\alpha)\/(\\beta+(1-b_1)\\sum_{i=1}^{q}a_1b_1^{i-1})$. Thus the coefficients of the GA-NoVaS method enjoy more freedom than those of the GE-NoVaS method. At the same time, the structure of the GA-NoVaS method is built directly from the GARCH(1,1) model without imposing any prior assumption on the coefficients. We believe this is the reason why our GA-NoVaS method shows better prediction performance in \\cref{sec:simu,sec:real data}. \n\\end{remark}\n\nFurthermore, to achieve normalization, we still fix $\\alpha$ to be one specific value from $\\{0.1,0.2,\\cdots,0.8\\}$, and then search for the optimal combination of $\\beta,a_1,b_1$ from three grids of possible values to minimize $\\abs{KURT(W_t)-3}$. After obtaining a qualified $H_n$, its inverse $H_n^{-1}$ follows immediately:\n\\begin{equation}\n Y_t = \\sqrt{\\frac{W_t^2}{1-c_0W_t^2}(\\alpha s_{t-1}^2+\\sum_{i=1}^qc_iY_{t-i}^2)}~;~\\text{for}~ t=q+1,\\cdots,n. 
\\label{4e11}\n\\end{equation}\nBased on \\cref{4e11}, $Y_{n+1}$ can be expressed as the equation follows:\n\\begin{equation}\n Y_{n+1} = \\sqrt{\\frac{W_{n+1}^2}{1-c_0W_{n+1}^2}(\\alpha s_{n}^2+\\sum_{i=1}^qc_iY_{n+1-i}^2)} \\label{4e12}\n\\end{equation}\nAlso, it is not hard to express $Y_{n+h}$ as a function of $W_{n+1},\\cdots, W_{n+h} $ and $\\mathscr{F}_{n}$ with GA-NoVaS method like we did in \\cref{ssec:genovas}:\n\\begin{equation}\n Y_{n+h} = f_{GA}(W_{n+1},\\cdots,W_{n+h};\\mathscr{F}_{n})~;~\\text{for any}~h\\geq 1. \\label{4e13}\n\\end{equation}\n\\par\n\\noindent Once the expression of $Y_{n+h}$ is figured out, we can apply the same procedure with the GE-NoVaS method to get the optimal predictor of $Y_{n+h}$ under $L_1$ or $L_2$ risk criterion. To deal with $\\alpha$, we still adopt the same strategy used in the GE-NoVaS method, i.e., select the optimal $\\alpha$ from a grid of possible values based on prediction performance. One thing should be noticed is that the value of $\\alpha$ is invariant during the process of optimization once we fix it as a specific value. More details about the algorithm of this new method can be found in \\cref{ssc:algorithm}.\n\n\\subsection{Parsimonious variant of the GA-NoVaS method}\\label{subsec:parsimoniousvariant}\nAccording to the $\\beta$-removing idea, we can continue proposing the GA-NoVaS-without-$\\beta$ method which is a parsimonious variant of the GA-NoVaS method. From \\cite{wu2021boosting}, functions $H_n$ and $H_n^{-1}$ corresponding to the GE-NoVaS-without-$\\beta$ method can be presented as follow:\n\\begin{equation}\n W_{t}=\\frac{Y_t}{\\sqrt{\\alpha s_{t-1}^2+\\sum_{i=1}^pa_iY_{t-i}^2}}~;~Y_t=\\sqrt{W_{t}^2(\\alpha s_{t-1}^2+\\sum_{i=1}^pa_iY_{t-i}^2)}~;~\\text{for}~ t=p+1,\\cdots,n. \\label{eq:3e15} \n\\end{equation}\n\\cref{eq:3e15} still need to satisfy the requirement of normalizing and variance-stabilizing transformation. 
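As a quick check that the two maps displayed in \cref{eq:3e15} are mutual inverses, the sketch below applies the forward transform and then the inverse, recovering $|Y_t|$ (the pair determines $Y_t$ only up to sign, which is immaterial for predicting squared returns). Taking $s_{t-1}^2$ to be the sample variance of $Y_1,\dots,Y_{t-1}$ and using equal weights $a_i$ are our assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
Y = rng.standard_normal(100)
p, alpha = 5, 0.2
a = np.full(p, (1 - alpha) / p)   # illustrative weights satisfying alpha + sum(a_i) = 1

def denom(t):
    # alpha * s_{t-1}^2 + sum_i a_i * Y_{t-i}^2, with s_{t-1}^2 the sample variance of Y_1..Y_{t-1}
    return alpha * Y[:t].var() + sum(a[i - 1] * Y[t - i] ** 2 for i in range(1, p + 1))

W = np.array([Y[t] / np.sqrt(denom(t)) for t in range(p, len(Y))])                 # forward map
Y_back = np.array([np.sqrt(W[t - p] ** 2 * denom(t)) for t in range(p, len(Y))])   # inverse map
assert np.allclose(Y_back, np.abs(Y[p:]))  # Y_t is recovered up to sign
```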
Therefore, we restrict $\\alpha + \\sum_{i=1}^pa_i = 1$ and still select the optimal combination of $ a_1,\\cdots,a_p$ by minimizing $\\abs{KURT(W_t)-3}$. Then, $Y_{n+1}$ can be expressed by \\cref{eq:3e16}:\n\\begin{equation}\n Y_{n+1}=\\sqrt{W_{n+1}^2(\\alpha s_{n}^2+\\sum_{i=1}^pa_iY_{n+1-i}^2)} \\label{eq:3e16}\n\\end{equation}\n\\begin{remark}Even though we do not include the effect of $Y_t$ when we build $H_n$, the expression of $Y_{n+1}$ still contains the current value $Y_n$. It means the GE-NoVaS-without-$\\beta$ method does not disobey the rule of causal prediction.\n\\end{remark}\n\nSimilarly, our proposed GA-NoVaS method can also be offered in a variant without $\\beta$ term. \\cref{4e9v,4e11} without $\\beta$ term can be represented by following equations:\n\\begin{equation}\n W_t = \\frac{Y_t}{\\sqrt{ \\alpha s_{t-1}^2 + \\sum_{i = 1}^{q}\\Tilde{c}_iY_{t-i}^2 }}~;~Y_t = \\sqrt{W_t^2(\\alpha s_{t-1}^2+\\sum_{i=1}^q\\Tilde{c}_iY_{t-i}^2)} \\label{4e21}\n\\end{equation}\nOne thing should be mentioned here is that $\\{\\Tilde{c}_1,\\cdots,\\Tilde{c}_q\\}$ represents $\\{a_1,a_1b_1$ $,a_1b_1^{2},$ $\\cdots,a_1b_1^{q-1} \\}$ scaled by timing a scalar $\\frac{1-\\alpha}{\\sum_{j=1}^{q}a_1b_1^{j-1}}$. Besides, $\\alpha + \\sum_{i=1}^{q}\\Tilde{c}_i = 1$ is required to satisfy the variance-stabilizing requirement and the optimal combination of $a_1,b_1$ is selected by minimizing $\\abs{KURT(W_t)-3}$ to satisfy the normalizing requirement. For GE-NoVaS- and GA-NoVaS-without-$\\beta$ methods, we can still express $Y_{n+h}$ as a function of $\\{W_{n+1},\\cdots,W_{n+h}\\}$ and repeat the aforementioned procedure to get $L_1$ and $L_2$ predictors. For example, we can derive the expression of $Y_{n+h}$ using the GA-NoVaS-without-$\\beta$ method:\n\n\\begin{equation}\n Y_{n+h} = f_{\\text{GA-without-$\\beta$}}(W_{n+1},\\cdots,W_{n+h};\\mathscr{F}_{n})~;~\\text{for any}~h\\geq 1. 
\\label{4e22}\n\\end{equation}\n\n\\begin{remark}[Slight computational efficiency of removing $\\beta$]\\label{remark3.2}\nNote that removing $\\beta$ can also reduce the time complexity of the existing GE-NoVaS and newly proposed GA-NoVaS methods. The reason is simple: recall that $1\/\\sqrt{\\beta}$ is required to be at least 3 so that $\\{W_t\\}$ has a sufficiently large range, i.e., $\\beta$ is required to be at most 0.111. However, the optimal combination of NoVaS coefficients may not render a suitable $\\beta$. In this situation, we need to increase the time-series order ($p$ or $q$) and repeat the normalizing and variance-stabilizing process until the $\\beta$ in the optimal combination of coefficients is appropriate. This repetition increases the computational workload.\n\\end{remark}\n\n\\subsection{Connection of two parsimonious methods}\\label{ssec:connection}\nIn this subsection, we reveal that the GE-NoVaS-without-$\\beta$ and GA-NoVaS-without-$\\beta$ methods actually share the same structure. The difference between these two methods lies in the region of the free parameters. 
To see this, consider the scaled coefficients of the GA-NoVaS-without-$\\beta$ method other than $\\alpha$:\n\\begin{equation}\n\\left\\{ \\frac{(1-\\alpha)b_1^{i-1}}{\\sum_{j=1}^{q}b_1^{j-1}}\\right\\}_{i=1}^{q} =\\left\\{ \\frac{(1-\\alpha)b_1^{i}}{\\sum_{j=1}^{q}b_1^{j}}\\right\\}_{i=1}^{q} \\label{eq:3.19}\n\\end{equation}\nRecall that the parameters of the GE-NoVaS-without-$\\beta$ method other than $\\alpha$, implied by \\cref{3.2e6}, are:\n\\begin{equation}\n \\left\\{ \\frac{(1-\\alpha)e^{-ci}}{\\sum_{j=1}^pe^{-cj}} \\right\\}_{i=1}^{p} \\label{eq:3.20}\n\\end{equation}\n\nComparing the two displays above, \\cref{eq:3.19} and \\cref{eq:3.20} are equivalent if we set $b_1$ equal to $e^{-c}$; nevertheless, the two methods are still slightly different, since the regions of $b_1$ and $c$ play a role in the optimization. The complete region of $c$ could be $(0,\\infty)$. However, \\cite{politis2015modelfreepredictionprinciple} pointed out that $c$ cannot take a large value\\footnote{When $c$ is large, $a_i \\approx 0$ for all $i > 0$. It is then hard to make the kurtosis of the transformed series equal 3.} and the region of $c$ should be an interval of the type $(0,m)$ for some $m$. In other words, a formidable search problem for finding the optimal $c$ is avoided by choosing such a trimmed interval. On the other hand, $b_1$ is explicitly searched over $(0,1)$, which corresponds to $c$ taking values in $(0,\\infty)$. Likewise, applying the GA-NoVaS-without-$\\beta$ method, the aforementioned burdensome search problem is also eliminated. Moreover, we can build a transformation based on the whole available region of the unknown parameter. 
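The claimed equivalence under the substitution $b_1 = e^{-c}$ is easy to confirm numerically (the values of $\alpha$, $c$ and $q=p$ below are illustrative):

```python
import numpy as np

alpha, c, q = 0.3, 0.5, 10     # illustrative values with q = p
b1 = np.exp(-c)                # the substitution b1 = e^{-c}

i = np.arange(1, q + 1)
ga_weights = (1 - alpha) * b1 ** (i - 1) / np.sum(b1 ** (i - 1))   # eq. (3.19)
ge_weights = (1 - alpha) * np.exp(-c * i) / np.sum(np.exp(-c * i)) # eq. (3.20)
assert np.allclose(ga_weights, ge_weights)        # identical weight sequences
assert np.isclose(alpha + ga_weights.sum(), 1.0)  # variance-stabilizing constraint
```

Both sequences are geometric with the same ratio, and the common factor $e^{c}$ cancels between numerator and denominator, so the two parameterizations differ only in the region over which $b_1$ (equivalently $c$) is searched.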
In spite of the fact that GE-NoVaS-without-$\\beta$ and GA-NoVaS-without-$\\beta$ methods have indistinguishable prediction performance for most of data analysis cases, we argue that the GA-NoVaS-without-$\\beta$ method is more stable and reasonable than the GE-NoVaS-without-$\\beta$ method since it is a more complete technique viewing the available region of parameter. Moreover, GA-NoVaS-without-$\\beta$ method achieves significantly better prediction performance for some cases, see more details from \\hyperref[appendix:a]{Appendix A}.\n\n\\subsection{Algorithms of new methods}\\label{ssc:algorithm}\n\\noindent In \\cref{ssc:garchnovas,subsec:parsimoniousvariant}, we exhibit the GA-NoVaS method and its parsimonious variant. In this section, we provide algorithms of these two methods. For the GA-NoVaS method, unknown parameters $\\beta, a_1, b_1$ are selected from three grids of possible values to normalize $\\{W_t;~t = q+1,\\cdots,n\\}$ in \\cref{4e9}. If our goal is the $h$-step ahead prediction of $g(Y_{n+h})$ using past $\\{Y_t;~t=1,\\cdots,n\\}$, the algorithm of the GA-NoVaS method can be summarized in \\cref{algori1}.\n\n\\begin{algorithm}[htbp]\n\\caption{the $h$-step ahead prediction for the GA-NoVaS method}\n\\centering\n\\label{algori1}\n \\centering\n \\begin{tabular} {p{29pt}p{280pt}} \n Step 1 & Define a grid of possible $\\alpha$ values, $\\{\\alpha_k;~ k = 1,\\cdots,K\\}$, three grids of possible $\\beta$, $a_1$, $b_1$ values. Fix $\\alpha = \\alpha_k$, then calculate the optimal combination of $\\beta,a_1,b_1$ of the GA-NoVaS method.\\\\\n Step 2 & Derive the analytic form of \\cref{4e13} using $\\{\\beta, a_1, b_1, \\alpha_k\\}$ from the first step.\\\\\n Step 3 & Generate $\\{W_{n+1,m},\\cdots, W_{n+h,m}\\}_{m=1}^{M}$ from a trimmed standard normal distribution or empirical distribution $\\hat{F}_w$. 
Plug $\\{W_{n+1,m},\\cdots, W_{n+h,m}\\}_{m=1}^{M}$ into the analytic form of \\cref{4e13} to obtain $M$ pseudo-values $\\{Y_{n+h,m}\\}_{m=1}^{M}$.\\\\\n Step 4 & Calculate the optimal predictor $g(\\hat{Y}_{n+h})$ by taking the sample mean (under $L_2$ risk criterion) or sample median (under $L_1$ risk criterion) of the set $\\{g(Y_{n+h,1}),\\cdots,g(Y_{n+h,M})\\}$.\\\\\n Step 5 & Repeat above steps with different $\\alpha$ values from $\\{\\alpha_k;~ k = 1,\\cdots,K\\}$ to get $K$ prediction results. \\\\ \n \\end{tabular}\n\\end{algorithm}\nIf we want to apply the GA-NoVaS-without-$\\beta$ method, we just need to change \\cref{algori1} a little bit. The difference between \\cref{algori1,algori2} is the optimization of $\\beta$ term being removed. The optimal combination of $a_1,b_1$ is still selected based on the normalizing and variance-stabilizing purpose. In our experiment setting, we choose regions of $\\beta,a_1,b_1$ being $(0,1)$ and set a 0.02 grid interval to find all parameters. Besides, for the GA-NoVaS method, we also make sure that the sum of $\\beta,a_1,b_1$ is less than 1 and the coefficient of $Y_t^{2}$ is the largest one. \\\\ \n\\begin{algorithm}[H]\n\\centering\n\\caption{the $h$-step ahead prediction for the GA-NoVaS-without-$\\beta$}\n\\label{algori2}\n\\hspace{0.5cm}\n \\centering\n \\begin{tabular} {p{29pt}p{280pt}} \n \\centering Step 1 & Define a grid of possible $\\alpha$ values, $\\{\\alpha_k;~ k = 1,\\cdots,K\\}$, two grids of possible $a_1$, $b_1$ values. 
Fix $\\alpha = \\alpha_k$, then calculate the optimal combination of $a_1,b_1$ of the GA-NoVaS-without-$\\beta$ method.\\\\\n \\centering Steps 2-5 & Same as \\cref{algori1}, but $\\{W_{n+1,m},\\cdots, W_{n+h,m}\\}_{m=1}^{M}$ are plugged into the analytic form of \\cref{4e22} and the standard normal distribution does not need to be truncated.\n \\\\\n \\end{tabular}\n\\end{algorithm}\n\n\n\\section{Simulation}\\label{sec:simu}\n\n\\noindent In simulation studies, for controlling the dependence of prediction performance on the length of the dataset, 16 datasets (2 from each settings) are generated from different GARCH(1,1)-type models separately and the size of each dataset is 250 (short data mimics 1-year of econometric data) or 500 (large data mimics 2-years of econometric data).\n\\\\\n\\\\\n\\textbf{Model 1:} Time-varying GARCH(1,1) with Gaussian errors\\\\\n$X_t = \\sigma_t\\epsilon_t,~\\sigma_t^2 = \\omega_{0,t} + \\beta_{1,t}\\sigma_{t-1}^2+\\alpha_{1,t}X_{t-1}^2,~\\{\\epsilon_t\\}\\sim i.i.d.~N(0,1)$\\\\\n$g_t = t\/n; \\omega_{0,t}= -4sin(0.5\\pi g_t)+5; \\alpha_{1,t} = -1(g_t-0.3)^2 + 0.5; \\beta_{1,t} = 0.2sin(0.5\\pi g_t)+0.2,~n = 250~\\text{or}~500$\\\\\n\\textbf{Model 2:} Another time-varying GARCH(1,1) with Gaussian errors\\\\\n$X_t = \\sigma_t\\epsilon_t,~\\sigma_t^2 = 0.00001 + \\beta_{1,t}\\sigma_{t-1}^2+\\alpha_{1,t}X_{t-1}^2,~\\{\\epsilon_t\\}\\sim i.i.d.~N(0,1)$\\\\\n$g_t = t\/n$; $\\alpha_{1,t} = 0.1 - 0.05g_t$; $\\beta_{1,t} = 0.73 + 0.2g_t,~n = 250~\\text{or}~500$\\\\\n\\textbf{Model 3:} Standard GARCH(1,1) with Gaussian errors\\\\\n$X_t = \\sigma_t\\epsilon_t,~\\sigma_t^2 = 0.00001 + 0.73\\sigma_{t-1}^2+0.1X_{t-1}^2,~\\{\\epsilon_t\\}\\sim i.i.d.~N(0,1)$\\\\\n\\textbf{Model 4:} Standard GARCH(1,1) with Gaussian errors\\\\\n$X_t = \\sigma_t\\epsilon_t,~\\sigma_t^2 = 0.00001 + 0.8895\\sigma_{t-1}^2+0.1X_{t-1}^2,~\\{\\epsilon_t\\}\\sim i.i.d.~N(0,1)$\\\\\n\\textbf{Model 5:} Standard GARCH(1,1) with Student-$t$ errors\\\\\n$X_t = 
\\sigma_t\\epsilon_t,$ $~\\sigma_t^2 = 0.00001 + 0.73\\sigma_{t-1}^2+0.1X_{t-1}^2,$\\\\ $~\\{\\epsilon_t\\}\\sim i.i.d.~t$ $\\text{distribution with five degrees of freedom}$\\\\\n\\textbf{Model 6:} Exponential GARCH(1,1) with Gaussian errors\\\\\n$X_t = \\sigma_t\\epsilon_t,~\\log(\\sigma_t^2) = 0.00001 + 0.8895\\log(\\sigma^2_{t-1})+0.1\\epsilon_{t-1}+0.3(\\abs{\\epsilon_{t-1}}-E\\abs{\\epsilon_{t-1}}),$\\\\$~\\{\\epsilon_t\\}\\sim i.i.d.~N(0,1)$\\\\\n\\textbf{Model 7:} GJR-GARCH(1,1) with Gaussian errors\\\\\n$X_t = \\sigma_t\\epsilon_t,~\\sigma_t^2 = 0.00001 + 0.5\\sigma^2_{t-1}+0.5X_{t-1}^2-0.5I_{t-1}X_{t-1}^2,~\\{\\epsilon_t\\}\\sim i.i.d.~N(0,1)\\\\\nI_{t} = 1~\\text{if}~ X_t \\leq 0; I_{t} = 0~ \\text{otherwise}$\\\\\n\\textbf{Model 8:} Another GJR-GARCH(1,1) with Gaussian errors\\\\\n$X_t = \\sigma_t\\epsilon_t,~\\sigma_t^2 = 0.00001 + 0.73\\sigma^2_{t-1}+0.1X_{t-1}^2+0.3I_{t-1}X_{t-1}^2,~\\{\\epsilon_t\\}\\sim i.i.d.~N(0,1)\\\\\nI_{t} = 1~\\text{if}~ X_t \\leq 0; I_{t} = 0~ \\text{otherwise}$\\\\\n\n\\textit{Model description:} Models 1 and 2 present a time-varying GARCH model where coefficients $a_0, a_1, b_1$ change over time slowly. They differ significantly in the intercept term of $\\sigma_t^2$ as we intentionally keep it low in the second setting. Models 3 and 4 are from a standard GARCH where in Model 4 we wanted to explore a scenario that $\\alpha_1+\\beta_1$ is very close to 1 and thus mimic what would happen for the iGARCH situation. Model 5 allows for the error distribution to come from a student-$t$ distribution instead of the Gaussian distribution. Note that, for a fair competition, we chose Models 2 to 5 same as simulation settings of \\citep{chen2019optimal}. Models 6, 7 and 8 present different types of GARCH models. These settings allow us to check robustness of our methods against model misspecification. 
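As an illustration of the data-generating processes above, a minimal sketch for Model 3 follows; initializing $\sigma_0^2$ at the unconditional variance is our choice, since the text does not specify a burn-in:

```python
import numpy as np

def simulate_model3(n, omega=1e-5, beta1=0.73, alpha1=0.1, seed=0):
    """Model 3: X_t = sigma_t * eps_t, sigma_t^2 = omega + beta1*sigma_{t-1}^2 + alpha1*X_{t-1}^2."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n)
    X = np.zeros(n)
    sig2 = np.zeros(n)
    sig2[0] = omega / (1 - alpha1 - beta1)   # unconditional variance (our initialization choice)
    X[0] = np.sqrt(sig2[0]) * eps[0]
    for t in range(1, n):
        sig2[t] = omega + beta1 * sig2[t - 1] + alpha1 * X[t - 1] ** 2
        X[t] = np.sqrt(sig2[t]) * eps[t]
    return X, sig2

X, sig2 = simulate_model3(500)   # the 2-year sample size used in the study
assert X.shape == (500,) and (sig2 > 0).all()
```

The other models differ only in the variance recursion (time-varying coefficients, Student-$t$ errors, the EGARCH log-variance, or the GJR indicator term), so the same simulation skeleton applies.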
In the real world, it is hard to say convincingly whether the data obey one particular type of GARCH model, so we pursue this exercise to see if our methods are satisfactory no matter what the underlying distribution and GARCH-type model are. This approach to testing the performance of a method under model misspecification is quite standard: \\cite{olubusoye2016misspecification} used data generated from one true model to estimate other GARCH-type models and test their forecasting performance, and \\cite{bellini2008misspecification} investigated the impact of misspecified innovations in fitting GARCH models.\n\n\\textit{Window size:} Using these datasets, we perform 1-step, 5-steps and 30-steps ahead time-aggregated POOS predictions. For measuring different methods' prediction performance on larger datasets (i.e., data size 500), we use a rolling window of 250 observations and roll this window through the whole dataset. For evaluating different methods' performance on smaller datasets (i.e., data size 250), we use a rolling window of 100 observations. \n\nNote that log-returns can be calculated from the equation below:\n\\begin{equation}\n Y_t = 100\\times \\log(X_{t+1}\/X_t) ~;~\\text{for}~ t = 1,\\cdots,499~\\text{or}~t = 1,\\cdots,249. \\label{Eq:4.1}\n\\end{equation}\nHere, $\\{X_t\\}_{t = 1}^{250}$ and $\\{X_t\\}_{t = 1}^{500}$ are the 1-year and 2-year price series, respectively. 
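The log-return formula translates directly into code; the price path below is only a placeholder used to fix the shapes:

```python
import numpy as np

rng = np.random.default_rng(0)
X = 100 * np.exp(np.cumsum(0.01 * rng.standard_normal(500)))  # placeholder 2-year price series
Y = 100 * np.log(X[1:] / X[:-1])                              # log-returns Y_t, t = 1,...,499
assert Y.shape == (499,)
```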
Next, we can define time-aggregated predictions of squared log-returns as:\n\\begin{equation}\n\\begin{split}\n \\bar{Y}_{k,1}^2 = \\hat{Y}_{k+1}^2,~k=250,\\cdots,498 ~\\text{or}~k=100,\\cdots,248\\\\\n \\bar{Y}_{i,5}^2 = \\frac{1}{5}\\sum_{m=1}^5\\hat{Y}^2_{i+m},~i = 250,\\cdots,494~\\text{or}~i=100,\\cdots,244\\\\\n \\bar{Y}_{j,30}^2 = \\frac{1}{30}\\sum_{m=1}^{30}\\hat{Y}^2_{j+m},~j = 250,\\cdots,469~\\text{or}~j=100,\\cdots,219 \\label{4e17}\n\\end{split}\n\\end{equation}\nIn \\cref{4e17}, $\\hat{Y}_{k+1}^2,\\hat{Y}_{i+m}^2,\\hat{Y}_{j+m}^2$ are single point predictions of realized squared log-returns by NoVaS-type methods or benchmark method; $\\bar{Y}_{k,1}^2$, $\\bar{Y}_{i,5}^2$ and $\\bar{Y}_{j,30}^2$ represent 1-step, 5-steps and 30-steps ahead aggregated predictions, respectively. More specifically, for exploring the performance of three different prediction lengths with large data size, we roll the 250 data points window through the whole dataset, i.e., use $\\{Y_1,\\cdots,Y_{250}\\}$ to predict $Y_{251}^2,\\{Y_{251}^2,\\cdots,Y_{255}^2\\}$ and $\\{Y_{251}^2,\\cdots,Y_{280}^2\\}$; then use $\\{Y_2,\\cdots,Y_{251}\\}$ to predict $Y_{252}^2,\\{Y_{252}^2,\\cdots,Y_{256}^2\\}$ and $\\{Y_{252}^2,\\cdots,Y_{281}^2\\}$, for 1-step, 5-steps and 30-steps aggregated predictions respectively, and so on. For exploring the performance of three different prediction lengths with small data size, we roll the 100 data points window through the whole dataset, i.e., use $\\{Y_1,\\cdots,Y_{100}\\}$ to predict $Y_{101}^2,\\{Y_{101}^2,\\cdots,Y_{105}^2\\}$ and $\\{Y_{101}^2,\\cdots,Y_{130}^2\\}$; then use $\\{Y_2,\\cdots,Y_{101}\\}$ to predict $Y_{102}^2,\\{Y_{102}^2,\\cdots,Y_{106}^2\\}$ and $\\{Y_{102}^2,\\cdots,Y_{131}^2\\}$, for 1-step, 5-steps and 30-steps aggregated predictions respectively, and so on. For example, with window size being 30, we perform time-aggregated predictions on a large dataset 220 times. 
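The realized counterparts of these rolling time-aggregated quantities can be sketched as follows; the window counts (249, 245 and 220 for the 2-year case) match the index ranges given above:

```python
import numpy as np

rng = np.random.default_rng(1)
y2 = rng.standard_normal(499) ** 2   # placeholder squared log-returns, 2-year case

def rolling_targets(y2, window, h):
    # One value per rolling window: the mean of the next h squared returns.
    return np.array([y2[k:k + h].mean() for k in range(window, len(y2) - h + 1)])

assert len(rolling_targets(y2, 250, 1)) == 249
assert len(rolling_targets(y2, 250, 5)) == 245
assert len(rolling_targets(y2, 250, 30)) == 220
```

The predicted aggregates $\bar{Y}_{l,h}^2$ are built the same way from the single-point forecasts, so predictions and realized values line up window by window.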
Taking this strategy, we can exhaust the information contained in the dataset and investigate the forecasting performance continuously. \n\nTo measure different methods' forecasting performance, we compare predictions with realized values based on \\cref{eq:4.1}. \n\\begin{equation}\n P = \\sum_{l}(\\bar{Y}_{l,h}^2-\\sum_{m=1}^h(Y_{l+m}^2\/h))^2~;~l \\in \\{k,i,j\\}\\label{eq:4.1}\n\\end{equation}\nIn \\cref{eq:4.1}, setting $l = k,i,j$ means we consider 1-step, 5-steps and 30-steps ahead time-aggregated predictions respectively; $\\bar{Y}_{l,h}^2$ is the $h$-step ($h\\in\\{1,5,30\\}$) ahead time-aggregated volatility prediction, defined in \\cref{4e17}; $\\sum_{m=1}^h(Y_{l+m}^2\/h)$ is the corresponding true aggregated value calculated from realized squared log-returns. For comparing various Model-free methods with the traditional method, we set the benchmark method as fitting one GARCH(1,1) model directly (GARCH-direct).\n\n\\textit{Different variants of methods:} Note that we can perform GE-NoVaS-type and GA-NoVaS-type methods to predict $Y_{n+h}$ by generating $\\{W_{n+1,m},\\cdots, W_{n+h,m}\\}_{m=1}^{M}$ from a standard normal distribution or the empirical distribution of $\\{W_t\\}$ series, then we can calculate the optimal predictor based on $L_1$ or $L_2$ risk criterion. It means each NoVaS-type method possesses four variants. \n\nWhen we perform POOS forecasting, we do not know which $\\alpha$ is optimal. Thus, we perform every NoVaS variants using $\\alpha$ from eight potential values $\\{0.1, 0.2, \\cdots,0.8\\}$ and then pick the optimal result. For simplifying the presentation, we further select the final prediction from optimal results of four variants of a NoVaS method and use this result to be the best prediction to which each NoVaS method can reach. Applying this procedure means we take a computationally heavy approach to compare different methods' potentially best performance. 
However, it also means we challenge the newly proposed methods to the maximum extent, to see if they can beat even the best-performing scenario of the current GE-NoVaS method. \n\n\\subsection{Simulation results}\\label{ssec:simuresults}\n\\noindent In this subsection, we compare the performance of our new methods (GA-NoVaS and GA-NoVaS-without-$\\beta$) with the GARCH-direct and existing GE-NoVaS methods on forecasting simulated datasets of sizes 250 and 500. Results are tabulated in \\cref{5t1}.\n\n\\subsubsection{Simulation results of Models 1 to 5}\\label{sssec:simuresultsmoeld1-5}\n\\noindent From \\cref{5t1}, we clearly find that NoVaS-type methods outperform the GARCH-direct method. In particular, when the 500-point Model-1 data are used for 30-steps ahead aggregated prediction, the GARCH-direct method performs very poorly: NoVaS-type methods are almost 30 times better. This means that the standard prediction method may be spoiled by the error-accumulation problem when long-term predictions are required. On the other hand, Model-free methods can avoid this problem.\n\nIn addition to the overall advantage of NoVaS-type methods over the GARCH-direct method, we find the GA-NoVaS method is generally better than the GE-NoVaS method for both small and large data sizes. This conclusion is two-fold: (1) the GA-NoVaS method is the best-performing method more often than the GE-NoVaS method; (2) since we want to compare the forecasting ability of the GE-NoVaS and GA-NoVaS methods, we use the $*$ symbol to mark cases where the GA-NoVaS method works at least 10$\\%$ better than the GE-NoVaS method or, inversely, the GE-NoVaS method is at least 10$\\%$ better. There is no case in which the GE-NoVaS method beats the GA-NoVaS method by at least 10$\\%$. On the other hand, the GA-NoVaS method achieves significant improvement when long-term predictions are required. 
Moreover, the GA-NoVaS-without-$\\beta$ method dominates the other two NoVaS-type methods.\n\n\\subsubsection{Models 6 to 8: Different GARCH specifications}\\label{sssc:simudifferent}\n\\noindent Since the main appeal of Model-free methods is the robustness of such non-parametric methods to the underlying data-generating process, here we explore other GARCH-type data generations. The GA-NoVaS method is based on the GARCH model, so it is interesting to explore whether these methods can sustain a different type of true underlying data generation and still, in general, outperform existing methods. Results for Models 6 to 8 are tabulated in \\cref{5t1}.\n\nIn general, NoVaS-type methods still outperform the GARCH-direct method for these cases. Although the forecasting abilities of the GE-NoVaS and GA-NoVaS methods are indistinguishable for large data, the GA-NoVaS method is clearly better for short data sizes. For example, the GA-NoVaS method brings around a 20$\\%$ improvement over the GE-NoVaS method for 30-steps ahead aggregated prediction of 250 Model-6 simulated data. Producing better predictions from shorter past data is always a significant challenge, so it is valuable to find that the GA-NoVaS method has superior performance in this scenario. Not surprisingly, the GA-NoVaS-without-$\\beta$ method still performs strongly.\n\n\\subsection{Simulation summary}\\label{ssc:simusmall}\n\\noindent Through the simulation analysis, we find that GA-NoVaS-type methods sustain strong performance under short data and model misspecification. Overall, our new methods outperform the GE-NoVaS method and can render notable improvement in some cases when long-term predictions are desired. 
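To make the error-accumulation remark about long-horizon GARCH-direct forecasts concrete, the sketch below iterates the textbook GARCH(1,1) multi-step variance recursion and then time-aggregates it, assuming the benchmark averages the standard $k$-step-ahead variance forecasts; the function name and the parameter values in the test are hypothetical, not taken from the paper.

```python
def garch_direct_aggregated_forecast(omega, alpha, beta, sigma2_next, h):
    """Sketch of an h-step time-aggregated variance forecast from a fitted
    GARCH(1,1): iterate E[sigma^2_{t+k}] = omega + (alpha + beta) * E[sigma^2_{t+k-1}]
    for k = 2..h, starting from the 1-step-ahead variance, then average the
    h values.  Any error in the estimated persistence alpha + beta is
    compounded at every step, which is one way long-horizon direct GARCH
    forecasts can accumulate error."""
    path = [sigma2_next]                       # 1-step-ahead conditional variance
    for _ in range(h - 1):
        path.append(omega + (alpha + beta) * path[-1])
    return sum(path) / h                       # time-aggregated forecast
```

With persistence below one, the iterated forecasts revert toward the unconditional variance omega / (1 - alpha - beta) as the horizon grows.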
\n\n\\begin{table}[htbp]\n\\caption{Comparison results of using 500 and 250 simulated data}\n\\label{5t1}\n\\begin{adjustbox}{width=1\\textwidth}\n\\small\n\\begin{tabular}{lcccclcccc}\n \\toprule\n \\textbf{500} size & \\thead{\\small GE} & \\thead{\\small GA} & \\thead{\\small P-GA} & \\thead{\\small GARCH} & \\textbf{250} size & \\thead{\\small GE} & \\thead{\\small GA} & \\thead{\\small P-GA} & \\thead{\\small GARCH}\\\\ \n \\midrule\n\n M1-1step & 0.89258 & 0.88735 & \\textbf{0.84138} & 1.00000 & M1-1step & 0.91538 & 0.9112 & \\textbf{0.83034} & 1.00000 \\\\ [2pt]\n M1-5steps & 0.40603 & 0.40296 & \\textbf{0.40137} & 1.00000 & M1-5steps & 0.49169 & 0.48479 & \\textbf{0.43247} & 1.00000 \\\\[2pt]\n M1-30steps & 0.03368 & 0.03294 & \\textbf{0.03290} & 1.00000 & M1-30steps & 0.25009 & 0.24752 & \\textbf{0.23035} & 1.00000 \\\\[2pt]\n M2-1step & \\textbf{0.95689} & 0.96069 & 0.99658 & 1.00000 & M2-1step & 0.91369 & 0.91574 & \\textbf{0.87614} & 1.00000 \\\\[2pt]\n M2-5steps & 0.89981 & \\textbf{0.89739} & 0.9198 & 1.00000 & M2-5steps & 0.61001 & 0.61094 & \\textbf{0.51712} & 1.00000 \\\\[2pt]\n M2-30steps & 0.63126 & 0.64042 & \\textbf{0.48396} & 1.00000 & M2-30steps & 0.7725 & \\textbf{0.74083} & 0.75251 & 1.00000 \\\\[2pt]\n M3-1step & 0.99938 & 1.00150 & \\textbf{0.98407} & 1.00000 & M3-1step & 0.97796 & 0.96632 & \\textbf{0.93693} & 1.00000 \\\\[2pt]\n M3-5steps & 0.98206 & 0.96088 & \\textbf{0.94073} & 1.00000 & M3-5steps & 0.98127 & \\textbf{0.97897} & 0.99977 & 1.00000 \\\\[2pt]\n M3-30steps & 1.10509 & 1.03683 & \\textbf{0.90855} & 1.00000 & M3-30steps & 1.38353 & \\textbf{0.89001*} & 0.99818 & 1.00000 \\\\[2pt]\n M4-1step & 0.98713 & \\textbf{0.98466} & 0.9964 & 1.00000 & M4-1step & 0.99183 & 0.95698 & \\textbf{0.92811} & 1.00000 \\\\[2pt]\n M4-5steps & 0.95382 & 0.95362 & \\textbf{0.95338} & 1.00000 & M4-5steps & 0.77088 & 0.72882 & \\textbf{0.67894} & 1.00000 \\\\[2pt]\n M4-30steps & 0.75811 & 0.69208 & \\textbf{0.67594} & 1.00000 & M4-30steps & 0.79672 
& \\textbf{0.6095*} & 0.81115 & 1.00000 \\\\[2pt]\n M5-1step & 0.96940 & \\textbf{0.94066} & 0.97151 & 1.00000 & M5-1step & 0.83631 & 0.84134 & \\textbf{0.79075} & 1.00000 \\\\[2pt]\n M5-5steps & 0.84751 & \\textbf{0.72806*} & 0.82747 & 1.00000 & M5-5steps & 0.38296 & 0.38034 & \\textbf{0.35155} & 1.00000 \\\\[2pt]\n M5-30steps & 0.49669 & \\textbf{0.24318*} & 0.47311 & 1.00000 & M5-30steps & 0.00199 & 0.002 & \\textbf{0.00194} & 1.00000 \\\\[2pt]\n M6-1step & 1.00175 & 1.00514 & \\textbf{0.93509} & 1.00000 & M6-1step & 0.95939 & 0.96499 & \\textbf{0.93863} & 1.00000 \\\\[2pt]\n M6-5steps & 0.93796 & 0.94249 & \\textbf{0.80311} & 1.00000 & M6-5steps & 0.93594 & 0.97101 & \\textbf{0.85851} & 1.00000 \\\\[2pt]\n M6-30steps & 0.50740 & 0.51350 & \\textbf{0.41112} & 1.00000 & M6-30steps & 0.84401 & \\textbf{0.67272*} & 0.7042 & 1.00000 \\\\[2pt]\n M7-1step & 0.98857 & 0.98737 & \\textbf{0.95932} & 1.00000 & M7-1step & 0.84813 & 0.83628 & \\textbf{0.83216} & 1.00000 \\\\[2pt]\n M7-5steps & 0.85539 & 0.85371 & \\textbf{0.85127} & 1.00000 & M7-5steps & 0.50849 & 0.50126 & \\textbf{0.4802} & 1.00000 \\\\[2pt]\n M7-30steps & \\textbf{0.68202} & 0.68314 & 0.71391 & 1.00000 & M7-30steps & 0.06832 & 0.06817 & \\textbf{0.06507} & 1.00000 \\\\[2pt]\n M8-1step & 0.96001 & 0.96463 & \\textbf{0.93452} & 1.00000 & M8-1step & \\textbf{0.79561} & 0.79994 & 0.8334 & 1.00000 \\\\[2pt]\n M8-5steps & 0.97019 & 0.98184 & \\textbf{0.93178} & 1.00000 & M8-5steps & 0.48028 & 0.47244 & \\textbf{0.45665} & 1.00000 \\\\[2pt]\n M8-30steps & \\textbf{0.30593} & 0.31813 & 0.33853 & 1.00000 & M8-30steps & 0.00977 & \\textbf{0.00942} & 0.00983 & 1.00000 \\\\[2pt]\n \\bottomrule\n\\end{tabular}\n\\end{adjustbox}\n\\\\\n\\tiny \\textit{Note:} Column names ``GE'' and ``GA'' represent the GE-NoVaS and GA-NoVaS methods, respectively; ``GARCH'' means the GARCH-direct method; ``P-GA'' means the GA-NoVaS-without-$\\beta$ method. 
The benchmark is the GARCH-direct method, so numerical values in the table corresponding to the GARCH-direct method are 1. Other numerical values are relative values compared to the GARCH-direct method. ``$Mi\\text{-}j$steps'' denotes using data generated from Model $i$ to do $j$-steps ahead time-aggregated predictions. A bold value means that the corresponding method is the optimal choice for this data case. A cell with $*$ means the GA-NoVaS method is at least 10$\\%$ better than the GE-NoVaS method or, inversely, the GE-NoVaS method is at least 10$\\%$ better.\n\\end{table} \n\n\n\\section{Real-world data analysis}\\label{sec:real data}\n\\noindent From \\cref{sec:simu}, we have found that NoVaS-type methods perform well on different simulated datasets. However, no methodological proposal is complete unless one verifies it on several real-world datasets. This section is devoted to exploring, in the context of real-dataset forecasting, whether NoVaS-type methods can provide good long-term time-aggregated forecasting ability and how our new methods compare to the existing Model-free method.\n\nTo perform an extensive analysis and reach a convincing conclusion, we use three types of data--stock, index and currency data--to do predictions. Moreover, as done in the simulation studies, we apply this exercise to two different lengths of data. To build large datasets (2-years period data), we separately take new data from Jan. 2018 to Dec. 2019 and old data from around 20 years ago. The dynamics of these econometric datasets have changed a lot in the past 20 years and thus we wanted to explore whether our methods are good enough for both old and new data. Subsequently, we challenge our methods using short (1-year period) real-life data. Finally, we also do forecasting using volatile data, i.e., data from Nov. 2019 to Oct. 2020. 
Note that economies across the world went through a recession due to the COVID-19 pandemic and then slowly recovered during this time-period; these sorts of situations typically introduce systematic perturbations in the dynamics of econometric datasets. We wanted to see if our methods can sustain such perturbations or abrupt changes. \n\n\\subsection{Old and new 2-years data}\\label{ssec:realdatanormalperiod2years}\n\\noindent To mimic the 2-years period data, we adopt several stock datasets of size 500 for forecasting. In summary, we still compare different methods' performance on 1-step, 5-steps and 30-steps ahead POOS time-aggregated predictions. Following the same procedure as in \\cref{sec:simu}, all results are shown in \\cref{6t1}. We can clearly find that NoVaS-type methods still outperform the GARCH-direct method. Additionally, although the GE-NoVaS method is indistinguishable from the GA-NoVaS method, our new method is more robust than the GE-NoVaS method; see the 30-steps ahead predictions of the old 2-years BAC and MSFT cases. We can also notice that the GA-NoVaS-without-$\\beta$ method is more robust than the other two NoVaS methods. The $\\beta$-removing idea proposed by \\cite{wu2021boosting} is substantiated again. \n\nSince the main goal of this article is to offer a new type of NoVaS method with better performance than the GE-NoVaS method for dealing with short and volatile data, we provide more extensive data analysis to support our new methods in the next sections. 
\n\\begin{table}[htbp]\n \\caption{Comparison results of using old and new 2-years data}\n \\label{6t1}\n\\begin{adjustbox}{width=1\\textwidth}\n\\centering\n\\small\n\\begin{tabular}{lcccclcccc}\n \\toprule\n \\thead{Old \\\\2-years} & \\thead{\\small GE} & \\thead{\\small GA} & \\thead{\\small P-GA} & \\thead{\\small GARCH} & \\thead{New \\\\2-years}& \\thead{\\small GE} & \\thead{\\small GA} & \\thead{\\small P-GA} & \\thead{\\small GARCH} \\\\ \n \\midrule\n\n AAPL-1step & 0.99795 & 0.99236 & \\textbf{0.97836} & 1.00000 & AAPL-1step & 0.80150 & \\textbf{0.79899} & 0.79915 & 1.00000 \\\\[2pt]\n AAPL-5steps & 1.04919 & 1.04800 & \\textbf{0.96999} & 1.00000 & AAPL-5steps & 0.41405 & 0.42338 & \\textbf{0.40427} & 1.00000 \\\\[2pt]\n AAPL-30steps & 1.12563 & 1.21986 & \\textbf{0.96174} & 1.00000 & AAPL-30steps & \\textbf{0.13207} & 0.14046 & 0.14543 & 1.00000 \\\\[2pt]\n BAC-1step & \\textbf{0.99889} & 1.00396 & 1.02780 & 1.00000 & BAC-1step & 0.98393 & 0.99164 & \\textbf{0.96542} & 1.00000 \\\\[2pt]\n BAC-5steps & 1.04424 & 1.02185 & \\textbf{0.99399} & 1.00000 & BAC-5steps & 0.98885 & 1.01480 & \\textbf{0.91857} & 1.00000 \\\\[2pt]\n BAC-30steps & 1.32452 & 1.13887\\textbf{*} & 1.00363 & \\textbf{1.00000} & BAC-30steps & 1.14111 & 1.03657 & \\textbf{0.88596} & 1.00000 \\\\[2pt]\n MSFT-1step & 0.98785 & 0.98598 & \\textbf{0.96185} & 1.00000 & MSFT-1step & 0.98405 & 0.98630 & \\textbf{0.96374} & 1.00000 \\\\[2pt]\n MSFT-5steps & 1.00236 & 1.00096 & \\textbf{0.95271} & 1.00000 & MSFT-5steps & 0.65027 & 0.67005 & \\textbf{0.64278} & 1.00000 \\\\[2pt]\n MSFT-30steps & 1.25272 & 1.09881\\textbf{*} & \\textbf{0.88515} & 1.00000 & MSFT-30steps & \\textbf{0.19767} & 0.20060 & 0.21473 & 1.00000 \\\\[2pt]\n MCD-1step & 1.01845 & 1.00789 & \\textbf{0.99005} & 1.00000 & MCD-1step & 0.99631 & 0.99539 & \\textbf{0.98035} & 1.00000 \\\\[2pt]\n MCD-5steps & 1.11249 & 1.07748 & \\textbf{0.97777} & 1.00000 & MCD-5steps & 0.95403 & 0.95327 & \\textbf{0.91317} & 1.00000 \\\\[2pt]\n 
MCD-30steps & 1.76385 & 1.69757 & \\textbf{0.99418} & 1.00000 & MCD-30steps & 0.75730 & 0.75361 & \\textbf{0.74557} & 1.00000 \\\\[2pt]\n \\bottomrule\n \\end{tabular}\n \\end{adjustbox}\n \\\\\n \n \\tiny \\textit{Note:} Column names ``GE'' and ``GA'' represent the GE-NoVaS and GA-NoVaS methods, respectively; ``GARCH'' means the GARCH-direct method; ``P-GA'' means the GA-NoVaS-without-$\\beta$ method. The benchmark is the GARCH-direct method, so numerical values in the table corresponding to the GARCH-direct method are 1. Other numerical values are relative values compared to the GARCH-direct method. A bold value means that the corresponding method is the optimal choice for this data case. A cell with $*$ means the GA-NoVaS method is at least 10$\\%$ better than the GE-NoVaS method or, inversely, the GE-NoVaS method is at least 10$\\%$ better.\n\\end{table}\n\n\\subsection{2018 and 2019 1-year data}\\label{ssec:realdatanormalperiod1year}\n\\noindent To challenge our new methods against other methods on small real-life datasets, we split every new 2-years period dataset in \\cref{ssec:realdatanormalperiod2years} into two 1-year period datasets, i.e., we split the four new stock datasets into eight samples. We believe evaluating the prediction performance using shorter data is a more important problem and thus we wanted to make our analysis very comprehensive. Therefore, for this exercise, we add 7 index datasets (Nasdaq, NYSE, Small Cap, Dow Jones, S$\\&$P 500, BSE and BIST) and two stock datasets (Tesla and Bitcoin) into our analysis. \n\nFrom \\cref{6t2}, which presents the prediction results of different methods on the 2018 and 2019 stock data, we still observe that NoVaS-type methods outperform the GARCH-direct method in almost all cases. Among the different NoVaS methods, it is clear that our new methods are superior to the existing GE-NoVaS method. 
For the 30-steps ahead predictions of the 2018-BAC data, the 2019-MCD and Tesla data, etc., the existing NoVaS method is even worse than the GARCH-direct method. On the other hand, the GA-NoVaS method is more stable than the GE-NoVaS method, e.g., a 30$\\%$ improvement is achieved for the 30-steps ahead prediction of 2018-BAC data. After applying the $\\beta$-removing idea, the GA-NoVaS-without-$\\beta$ method significantly beats the other methods in almost all cases.\n\nFrom \\cref{6t3}, which presents the prediction results of different methods on the 2018 and 2019 index data, we reach exactly the same conclusion as before. NoVaS-type methods are far superior to the GARCH-direct method, and our new NoVaS methods outperform the existing GE-NoVaS method. Interestingly, the GE-NoVaS method is again beaten by the GARCH-direct method in some cases, such as 2019-Nasdaq, Smallcap and BIST. On the other hand, the new methods still show more stable performance. Compared to the existing GE-NoVaS method, the GA-NoVaS-without-$\\beta$ method achieves around a 60$\\%$ improvement on the 30-steps ahead prediction of 2019-BIST data. In addition, the GA-NoVaS method shows more than a 10$\\%$ improvement for all 2018-BSE cases.\n\nCombining the results presented in \\cref{6t1,6t2,6t3}, our new methods perform better than the existing GE-NoVaS and GARCH-direct methods on both small and large real-life data. 
The improvement generated by the new methods is more significant for the shorter sample size (1-year data) than for the larger sample size (2-years data).\n\n\n\\begin{table}[H]\n \\caption{Comparison results of using 2018 and 2019 stock data}\n \\label{6t2}\n \\begin{adjustbox}{width=1\\textwidth}\n\\small\n\\begin{tabular}{lcccclcccc}\n \\toprule \n 2018& \\thead{GE} & \\thead{GA} & \\thead{P-GA} & \\thead{GARCH} & 2019 & \\thead{GE} & \\thead{GA} & \\thead{P-GA} &\\thead{GARCH} \\\\\n \\midrule\n\n MCD-1step & 0.98514 & 0.97887 & \\textbf{0.94412} & 1.00000 & MCD-1step & 0.95959 & 0.96348 & \\textbf{0.94559} & 1.00000 \\\\[2pt]\n MCD-5steps & 1.0272 & 1.02519 & \\textbf{0.88151} & 1.00000 & MCD-5steps & 1.00723 & 1.01169 & \\textbf{0.90602} & 1.00000 \\\\[2pt]\n MCD-30steps & 0.62614 & 0.63992 & \\textbf{0.61153} & 1.00000 & MCD-30steps & 1.05239 & 0.95714 & \\textbf{0.77976} & 1.00000 \\\\[2pt]\n AAPL-1step & 0.92014 & 0.92317 & \\textbf{0.89283} & 1.00000 & AAPL-1step & 0.84533 & \\textbf{0.81326} & 0.81872 & 1.00000 \\\\[2pt]\n AAPL-5steps & 0.84798 & 0.73461\\textbf{*} & \\textbf{0.71233} & 1.00000 & AAPL-5steps & 0.85401 & 0.79254 & \\textbf{0.68792} & 1.00000 \\\\[2pt]\n AAPL-30steps & 0.38612 & \\textbf{0.36324} & 0.37081 & 1.00000 & AAPL-30steps & 0.99043 & 0.99286 & \\textbf{0.72892} & 1.00000 \\\\[2pt]\n BAC-1step & 0.94952 & 0.93842 & \\textbf{0.92619} & 1.00000 & BAC-1step & 1.04272 & 1.04722 & \\textbf{0.98605} & 1.00000 \\\\[2pt]\n BAC-5steps & 0.83395 & 0.79158 & \\textbf{0.72512} & 1.00000 & BAC-5steps & 1.22761 & 1.20195 & \\textbf{0.95436} & 1.00000 \\\\[2pt]\n BAC-30steps & 1.34367 & 0.90675\\textbf{*} & \\textbf{0.8763} & 1.00000 & BAC-30steps & 1.4502 & 1.41788 & 1.03482 & \\textbf{1.00000} \\\\[2pt]\n MSFT-1step & 0.91705 & \\textbf{0.90936} & 0.95921 & 1.00000 & MSFT-1step & 1.03308 & 1.00101 & \\textbf{0.95347} & 1.00000 \\\\[2pt]\n MSFT-5steps & 0.74553 & 0.74267 & \\textbf{0.74237} & 1.00000 & MSFT-5steps & 1.2234 & 1.18205 & \\textbf{0.95417} 
& 1.00000 \\\\[2pt]\n MSFT-30steps & 0.6699 & 0.6477 & \\textbf{0.64717} & 1.00000 & MSFT-30steps & 1.2302 & 1.21337 & \\textbf{0.98476} & 1.00000 \\\\[2pt]\n Tesla-1step & 1.00181 & 0.96074 & \\textbf{0.86238} & 1.00000 & Tesla-1step & 1.00428 & 1.01934 & \\textbf{0.98955} & 1.00000 \\\\[2pt]\n Tesla-5steps & 1.20383 & 1.13335 & 1.0156 & \\textbf{1.00000} & Tesla-5steps & 1.0661 & 1.07506 & \\textbf{0.96107} & 1.00000 \\\\[2pt]\n Tesla-30steps & 1.97328 & 1.84871 & 1.25005 & \\textbf{1.00000} & Tesla-30steps & 2.00623 & 1.71782\\textbf{*} & \\textbf{0.84366} & 1.00000 \\\\[2pt]\n Bitcoin-1step & 0.99636 & 1.01731 & \\textbf{0.97734} & 1.00000 & Bitcoin-1step & 0.89929 & 0.88914 & \\textbf{0.87256} & 1.00000 \\\\[2pt]\n Bitcoin-5steps & 1.02021 & 1.1188 & \\textbf{0.93826} & 1.00000 & Bitcoin-5steps & 0.62312 & 0.63075 & \\textbf{0.56789} & 1.00000 \\\\[2pt]\n Bitcoin-30steps & \\textbf{0.86649} & 0.95506 & 0.91364 & 1.00000 & Bitcoin-30steps & 0.00733 & 0.00749 & \\textbf{0.00631} & 1.00000 \\\\[2pt]\n\n \\bottomrule\n \\end{tabular}\n \\end{adjustbox}\n \\\\\n \\tiny \\textit{Note:} Column names ``GE'' and ``GA'' represent the GE-NoVaS and GA-NoVaS methods, respectively; ``GARCH'' means the GARCH-direct method; ``P-GA'' means the GA-NoVaS-without-$\\beta$ method. The benchmark is the GARCH-direct method, so numerical values in the table corresponding to the GARCH-direct method are 1. Other numerical values are relative values compared to the GARCH-direct method. A bold value means that the corresponding method is the optimal choice for this data case. 
A cell with $*$ means the GA-NoVaS method is at least 10$\\%$ better than the GE-NoVaS method or, inversely, the GE-NoVaS method is at least 10$\\%$ better.\n\\end{table}\n\n\\begin{table}[H]\n \\caption{Comparison results of using 2018 and 2019 index data}\n \\setlength{\\abovecaptionskip}{0pt}\n \\label{6t3}\n \\begin{adjustbox}{width=1\\textwidth}\n\\small\n\\begin{tabular}{lcccclcccc}\n \\toprule \n 2018 & \\thead{GE} & \\thead{GA} & \\thead{P-GA} & \\thead{GARCH} & 2019& \\thead{GE} & \\thead{GA} & \\thead{P-GA} & \\thead{GARCH} \\\\ \n \\midrule\n Nasdaq-1step & \\textbf{0.91309} & 0.92303 & 0.92421 & 1.00000 & Nasdaq-1step & 0.99960 & 0.98950 & \\textbf{0.93843} & 1.00000 \\\\[2pt]\n Nasdaq-5steps & \\textbf{0.76419} & 0.79718 & 0.78823 & 1.00000 & Nasdaq-5steps & 1.15282 & 1.09176 & \\textbf{0.84051} & 1.00000 \\\\[2pt]\n Nasdaq-30steps & 0.66520 & \\textbf{0.65489} & 0.67389 & 1.00000 & Nasdaq-30steps & 0.68994 & 0.69846 & \\textbf{0.59218} & 1.00000 \\\\[2pt]\n NYSE-1step & 0.93509 & \\textbf{0.93401} & 0.96619 & 1.00000 & NYSE-1step & 0.92486 & \\textbf{0.91118} & 0.92193 & 1.00000 \\\\[2pt]\n NYSE-5steps & 0.83725 & 0.79330 & \\textbf{0.75822} & 1.00000 & NYSE-5steps & 0.86249 & 0.82114 & \\textbf{0.71038} & 1.00000 \\\\[2pt]\n NYSE-30steps & 0.75053 & \\textbf{0.61443*} & 0.61830 & 1.00000 & NYSE-30steps & 0.22122 & 0.22173 & \\textbf{0.18116} & 1.00000 \\\\[2pt]\n Smallcap-1step & \\textbf{0.90546} & 0.91346 & 0.91101 & 1.00000 & Smallcap-1step & 1.02041 & 1.00626 & \\textbf{0.98482} & 1.00000 \\\\[2pt]\n Smallcap-5steps & \\textbf{0.72627} & 0.73955 & 0.73223 & 1.00000 & Smallcap-5steps & 1.15868 & 1.08929 & \\textbf{0.85490} & 1.00000 \\\\[2pt]\n Smallcap-30steps & 0.50005 & 0.46482 & \\textbf{0.46312} & 1.00000 & Smallcap-30steps & 1.30467 & 1.28949 & \\textbf{0.90360} & 1.00000 \\\\[2pt]\n Djones-1step & 0.90932 & \\textbf{0.90707} & 0.91192 & 1.00000 & Djones-1step & 0.96752 & \\textbf{0.96433} & 0.96977 & 1.00000 \\\\[2pt]\n Djones-5steps & 0.82480 
& 0.79965 & \\textbf{0.76226} & 1.00000 & Djones-5steps & 0.98725 & 0.93315 & \\textbf{0.91238} & 1.00000 \\\\[2pt]\n Djones-30steps & 0.72547 & \\textbf{0.53021*} & 0.56854 & 1.00000 & Djones-30steps & 0.86333 & 0.85006 & \\textbf{0.81803} & 1.00000 \\\\[2pt]\n SP500-1step & 0.91860 & 0.91256 & \\textbf{0.88405} & 1.00000 & SP500-1step & 0.96978 & 0.96526 & \\textbf{0.93162} & 1.00000 \\\\[2pt]\n SP500-5steps & 0.85108 & 0.77305 & \\textbf{0.75646} & 1.00000 & SP500-5steps & 0.96704 & 0.94028 & \\textbf{0.77434} & 1.00000 \\\\[2pt]\n SP500-30steps & 0.88917 & \\textbf{0.68156*} & 0.72104 & 1.00000 & SP500-30steps & 0.34389 & 0.34537 & \\textbf{0.30127} & 1.00000 \\\\[2pt]\n BSE-1step & 0.99942 & \\textbf{0.88322*} & 0.92568 & 1.00000 & BSE-1step & 0.70667 & 0.70194 & \\textbf{0.66667} & 1.00000 \\\\[2pt]\n BSE-5steps & 0.92061 & \\textbf{0.78484*} & 0.84408 & 1.00000 & BSE-5steps & 0.25675 & 0.25897 & \\textbf{0.23603} & 1.00000 \\\\[2pt]\n BSE-30steps & 0.52431 & \\textbf{0.41010*} & 0.44092 & 1.00000 & BSE-30steps & 0.03764 & 0.03951 & \\textbf{0.02888} & 1.00000 \\\\[2pt]\n BIST-1step & 0.93221 & \\textbf{0.92215} & 0.94138 & 1.00000 & BIST-1step & \\textbf{0.96807} & 0.97209 & 0.98234 & 1.00000 \\\\[2pt]\n BIST-5steps & 0.82149 & \\textbf{0.79664} & 0.81417 & 1.00000 & BIST-5steps & 0.98944 & 1.03903 & \\textbf{0.85370} & 1.00000 \\\\[2pt]\n BIST-30steps & 1.34581 & 1.42233 & 1.09900 & \\textbf{1.00000} & BIST-30steps & 2.21996 & 2.10562 & \\textbf{0.85743} & 1.00000 \\\\[2pt]\n\n \\bottomrule\n \\end{tabular}\n \\end{adjustbox}\n \\\\\n \\tiny \\textit{Note:} Column names ``GE'' and ``GA'' represent the GE-NoVaS and GA-NoVaS methods, respectively; ``GARCH'' means the GARCH-direct method; ``P-GA'' means the GA-NoVaS-without-$\\beta$ method. The benchmark is the GARCH-direct method, so numerical values in the table corresponding to the GARCH-direct method are 1. Other numerical values are relative values compared to the GARCH-direct method. 
A bold value means that the corresponding method is the optimal choice for this data case. A cell with $*$ means the GA-NoVaS method is at least 10$\\%$ better than the GE-NoVaS method or, inversely, the GE-NoVaS method is at least 10$\\%$ better.\n\\end{table}\n\n\\subsection{Volatile 1-year data}\\label{ssec: realdatavolatileperiod}\n\\noindent In this subsection, we perform POOS forecasting using volatile 1-year data (i.e., data from Nov. 2019 to Oct. 2020). We deliberately choose this period to challenge our new methods, checking whether they can self-adapt to the structural incoherence between the pre- and post-pandemic periods, and we also want to compare our new methods with the existing GE-NoVaS method. To observe the effects of the pandemic, we can take the price of the SP500 index as an example. From \\cref{6f1}, it is clear that the price grew slowly during the normal period from Jan. 2017 to Dec. 2017. However, during the most recent one year, the price fluctuated severely due to the pandemic. \n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=9cm,height=6cm]{contrastofsp500.eps}\n\\caption{The left subfigure depicts the price of the SP500 from Jan. 2017 to Dec. 2017, which shows slow growth; the right subfigure depicts the price of the SP500 from Nov. 2019 to Oct. 2020.}\n\\label{6f1}\n\\end{figure}\n\nSimilarly, we focus on evaluating the performance of NoVaS-type methods on handling volatile data by comparing them with the GARCH-direct method. To execute a comprehensive analysis, we again investigate different methods' performance on stock, index and currency data. \n\n\\subsubsection{Stock data}\\label{sssec:stock}\n\\noindent The POOS forecasting results of the volatile 1-year stock datasets are presented in \\cref{6t4}. NoVaS-type methods dominate the GARCH-direct method. The performance of the GARCH-direct method is especially poor in the Bitcoin case. 
Apart from this overall advantage of NoVaS-type methods, there is no doubt that the GA-NoVaS method delivers better prediction results than the GE-NoVaS method, since it occupies 13 out of 27 optimal choices and achieves at least a 10$\\%$ improvement in 5 cases. The parsimonious GA-NoVaS-without-$\\beta$ method also shows better results than the GE-NoVaS method. This phenomenon lends strong evidence to support our postulation that the GA-NoVaS method is more appropriate for handling volatile data. \n\n\\begin{table}[htbp]\n \\caption{Comparison results of using volatile 1-year stock data}\n \\label{6t4}\n \n \\centering\n\\scriptsize\n\\begin{tabular}{lcccccccc}\n \\toprule \n & \\thead{\\scriptsize GE-NoVaS} & \\thead{\\scriptsize GA-NoVaS} & \\thead{\\scriptsize GA-NoVaS-without-$\\beta$} & \\thead{\\scriptsize GARCH-direct} \\\\ \n \\midrule\n NKE-1step & 0.63568 & \\textbf{0.63209} & 0.65594 & 1.00000 \\\\ [1.2pt]\n NKE-5steps & 0.20171 & \\textbf{0.19089} & 0.22226 & 1.00000 \\\\ [1.2pt]\n NKE-30steps & 0.00411 & \\textbf{0.00278*} & 0.00340 & 1.00000\\\\[1.2pt]\n AMZN-1step & 0.97099 & 0.96719 & \\textbf{0.90487} & 1.00000 \\\\[1.2pt]\n AMZN-5steps & 0.88705 & 0.88274 & \\textbf{0.72850} & 1.00000\\\\[1.2pt]\n AMZN-30steps & 0.58124 & 0.62863 & \\textbf{0.53310} & 1.00000\\\\[1.2pt]\n IBM-1step & 0.80222 & 0.79823 & \\textbf{0.79509} & 1.00000 \\\\[1.2pt]\n IBM-5steps & 0.38933 & \\textbf{0.37346} & 0.38413 & 1.00000 \\\\[1.2pt]\n IBM-30steps & 0.01143 & 0.00996\\textbf{*} & \\textbf{0.00879} & 1.00000 \\\\[1.2pt]\n MSFT-1step & 0.80133 & \\textbf{0.79528} & 0.81582 & 1.00000 \\\\[1.2pt]\n MSFT-5steps & 0.35567 & \\textbf{0.33419} & 0.38022 & 1.00000 \\\\[1.2pt]\n MSFT-30steps & 0.01342 & 0.01031\\textbf{*} & \\textbf{0.00784} & 1.00000 \\\\[1.2pt]\n SBUX-1step & 0.68206 & 0.67067 & \\textbf{0.66743} & 1.00000 \\\\[1.2pt]\n SBUX-5steps & 0.24255 & \\textbf{0.23072} & 0.26856 & 1.00000 \\\\[1.2pt]\n SBUX-30steps & 0.00499 & 0.00337\\textbf{*} & \\textbf{0.00236} & 1.00000 
\\\\[1.2pt]\n KO-1step & 0.77906 & \\textbf{0.75389} & 0.77035 & 1.00000 \\\\[1.2pt]\n KO-5steps & 0.34941 & \\textbf{0.32459} & 0.33405 & 1.00000 \\\\[1.2pt]\n KO-30steps & 0.01820 & 0.01848 & \\textbf{0.01582} & 1.00000 \\\\[1.2pt]\n MCD-1step & 0.51755 & \\textbf{0.51351} & 0.56414 & 1.00000 \\\\[1.2pt]\n MCD-5steps & 0.10725 & \\textbf{0.09714} & 0.17439 & 1.00000 \\\\[1.2pt]\n MCD-30steps & 3.32E-05 & 2.97E-05\\textbf{*} & \\textbf{7.62E-06} & 1.00000 \\\\[1.2pt]\n Tesla-1step & 0.90712 & 0.90250 & \\textbf{0.88782} & 1.00000 \\\\[1.2pt]\n Tesla-5steps & 0.68450 & 0.67935 & \\textbf{0.66937} & 1.00000 \\\\[1.2pt]\n Tesla-30steps & \\textbf{0.21643} & 0.21718 & 0.22395 & 1.00000 \\\\[1.2pt]\n Bitcoin-1step & 0.36323 & \\textbf{0.36260} & 0.36326 & 1.00000 \\\\[1.2pt]\n Bitcoin-5steps & \\textbf{0.01319} & 0.01321 & 0.01322 & 1.00000 \\\\[1.2pt]\n Bitcoin-30steps & 7.75E-17 & \\textbf{7.65E-17} & 7.75E-17 & 1.00000 \\\\[1.2pt]\n \\bottomrule\n \\end{tabular}\n \n \\\\\n \\raggedright\n \\tiny \\textit{Note:} The benchmark is the GARCH-direct method, so numerical values in the table corresponding to the GARCH-direct method are 1. Other numerical values are relative values compared to the GARCH-direct method. A bold value means that the corresponding method is the optimal choice for this data case. A cell with $*$ means the GA-NoVaS method is at least 10$\\%$ better than the GE-NoVaS method or, inversely, the GE-NoVaS method is at least 10$\\%$ better.\n\\end{table}%\n\n\\subsubsection{Currency data}\\label{sssec:currency}\n\\noindent The POOS forecasting results of the most recent 1-year currency datasets are presented in \\cref{6t5}. One thing to notice is that \\citet{fryzlewicz2008normalized} suggested the ARCH framework seems to be a superior methodology for dealing with currency exchange data. Therefore, we should not anticipate that GA-NoVaS-type methods will attain much improvement for this data class. 
However, the GA-NoVaS method still achieves around 26$\\%$ and 37$\\%$ improvements for the 30-steps ahead predictions of CADJPY and CNYJPY, respectively. Besides, the GA-NoVaS-without-$\\beta$ method also retains strong performance. This surprising result can be seen as evidence that GA-NoVaS-type methods are robust to model misspecification.\n\n\\begin{table}[htbp]\n \\caption{Comparison results of using volatile 1-year currency data}\n \\label{6t5}\n \\centering\n \n\\scriptsize\n\\begin{tabular}{lcccccccc}\n \\toprule \n & \\thead{\\scriptsize GE-NoVaS} & \\thead{\\scriptsize GA-NoVaS} & \\thead{\\scriptsize GA-NoVaS-without-$\\beta$} & \\thead{\\scriptsize GARCH-direct} \\\\ \n \\midrule\n CADJPY-1step & 0.46940 & \\textbf{0.46382} & 0.48367 & 1.00000 \\\\[1.2pt]\n CADJPY-5steps & 0.11678 & \\textbf{0.11620} & 0.14376 & 1.00000 \\\\[1.2pt]\n CADJPY-30steps & 0.00584 & \\textbf{0.00430*} & 0.00482 & 1.00000 \\\\[1.2pt]\n EURJPY-1step & 0.95093 & \\textbf{0.94682} & 0.95133 & 1.00000 \\\\[1.2pt]\n EURJPY-5steps & 0.76182 & 0.77091 & \\textbf{0.75636} & 1.00000 \\\\[1.2pt]\n EURJPY-30steps & \\textbf{0.16202} & 0.17956 & 0.18189 & 1.00000 \\\\[1.2pt]\n USDCNY-1step & 0.98905 & 0.97861 & \\textbf{0.95757} & 1.00000 \\\\[1.2pt]\n USDCNY-5steps & 0.93182 & 0.92614 & \\textbf{0.83523} & 1.00000 \\\\[1.2pt]\n USDCNY-30steps & 0.57171 & \\textbf{0.57100} & 0.60131 & 1.00000 \\\\[1.2pt]\n GBPJPY-1step & 0.86971 & \\textbf{0.86474} & 0.87160 & 1.00000 \\\\[1.2pt]\n GBPJPY-5steps & 0.49749 & 0.49612 & \\textbf{0.48842} & 1.00000 \\\\[1.2pt]\n GBPJPY-30steps & 0.17058 & \\textbf{0.16987} & 0.17262 & 1.00000 \\\\[1.2pt]\n USDINR-1step & 0.97289 & 0.96829 & \\textbf{0.93140} & 1.00000 \\\\[1.2pt]\n USDINR-5steps & 0.80866 & 0.78008 & \\textbf{0.75693} & 1.00000 \\\\[1.2pt]\n USDINR-30steps & \\textbf{0.09725} & 0.09889 & 0.11380 & 1.00000 \\\\[1.2pt]\n CNYJPY-1step & 0.77812 & 0.77983 & \\textbf{0.74586} & 1.00000 \\\\[1.2pt]\n CNYJPY-5steps & 0.38875 & 0.38407 & 
\\textbf{0.34839} & 1.00000 \\\\[1.2pt]\n CNYJPY-30steps & 0.08398 & \\textbf{0.05240*} & 0.05444 & 1.00000 \\\\[1.2pt]\n \\bottomrule\n \\end{tabular}\n \n\\end{table}%\n\n\n\\subsubsection{Index data}\\label{sssec:index}\n\\noindent The POOS forecasting results of the most recent 1-year index datasets are presented in \\cref{6t6}. Consistent with the conclusions for the previous two classes of data, NoVaS-type methods still perform clearly better than the GARCH-direct method. Beyond this advantage of NoVaS methods, the new methods still dominate the existing GE-NoVaS method. In addition to these expected results, we find the GE-NoVaS method is even 14$\\%$ worse than the GARCH-direct method for the 1-step USDX future case. On the other hand, GA-NoVaS-type methods still keep great performance. This phenomenon also appears in \\cref{sssec:simuresultsmoeld1-5,sssc:simudifferent,ssc:simusmall,ssec:realdatanormalperiod2years,ssec:realdatanormalperiod1year}. Beyond this, there are 12 cases in which the GA-NoVaS method renders more than a 10$\\%$ improvement over the GE-NoVaS method. The most significant case is the 30-steps ahead prediction of the Bovespa data, where around a 60$\\%$ improvement is achieved by the GA-NoVaS method compared with the GE-NoVaS method. 
\n\\begin{table}[htbp]\n \\caption{Comparison results of using volatile 1-year index data}\n \\label{6t6}\n \\centering\n \n\\scriptsize\n\\begin{tabular}{lcccccccc}\n \\toprule \n & \\thead{\\scriptsize GE-NoVaS} & \\thead{\\scriptsize GA-NoVaS} & \\thead{\\scriptsize GA-NoVaS-without-$\\beta$} & \\thead{\\scriptsize GARCH-direct} \\\\ \n \\midrule\n SP500-1step & 0.97294 & 0.95881 & \\textbf{0.92854} & 1.00000 \\\\[1.2pt]\n SP500-5steps & 0.96590 & 0.94457 & \\textbf{0.77060} & 1.00000 \\\\[1.2pt]\n SP500-30steps & 0.34357 & 0.34561 & \\textbf{0.30115} & 1.00000 \\\\[1.2pt]\n Nasdaq-1step & 0.71380 & \\textbf{0.70589} & 0.77753 & 1.00000 \\\\[1.2pt]\n Nasdaq-5steps & 0.29332 & \\textbf{0.27007} & 0.36428 & 1.00000 \\\\[1.2pt]\n Nasdaq-30steps & 0.01223 & \\textbf{0.00618*} & 0.00696 & 1.00000 \\\\[1.2pt]\n NYSE-1step & 0.55741 & 0.55548 & \\textbf{0.54598} & 1.00000 \\\\[1.2pt]\n NYSE-5steps & 0.08994 & \\textbf{0.07666*} & 0.07798 & 1.00000 \\\\[1.2pt]\n NYSE-30steps & 1.36E-05 & 9.06E-06\\textbf{*} & \\textbf{6.57E-06} & 1.00000 \\\\[1.2pt]\n Smallcap-1step & 0.58170 & \\textbf{0.57392} & 0.57773 & 1.00000 \\\\[1.2pt]\n Smallcap-5steps & 0.10270 & 0.10135 & \\textbf{0.09628} & 1.00000 \\\\[1.2pt]\n Smallcap-30steps & 7.00E-05 & 4.33E-05\\textbf{*} & \\textbf{3.65E-05} & 1.00000 \\\\[1.2pt]\n BSE-1step & 0.39493 & \\textbf{0.37991} & 0.39851 & 1.00000 \\\\[1.2pt]\n BSE-5steps & 0.03320 & \\textbf{0.02829*} & 0.04170 & 1.00000 \\\\[1.2pt]\n BSE-30steps & 2.45E-05 & 2.19E-05\\textbf{*} & \\textbf{1.73E-05} & 1.00000 \\\\[1.2pt]\n DAX-1step & \\textbf{0.65372} & 0.65663 & 0.66097 & 1.00000 \\\\[1.2pt]\n DAX-5steps & 0.10997 & \\textbf{0.10828} & 0.11085 & 1.00000 \\\\[1.2pt]\n DAX-30steps & 4.97E-05 & \\textbf{4.87E-05} & 7.81E-05 & 1.00000\\\\[1.2pt]\n USDX future-1step & 1.14621 & 1.00926\\textbf{*} & 1.03693 & \\textbf{1.00000} \\\\[1.2pt]\n USDX future-5steps & 0.61075 & 0.53834\\textbf{*} & \\textbf{0.51997} & 1.00000 \\\\[1.2pt]\n USDX future-30steps & 
0.10723 & \\textbf{0.09911} & 0.10063 & 1.00000 \\\\[1.2pt]\n Bovespa-1step & 0.60031 & \\textbf{0.57316} & 0.60656 & 1.00000 \\\\[1.2pt]\n Bovespa-5steps & 0.08603 & \\textbf{0.06201*} & 0.09395 & 1.00000 \\\\[1.2pt]\n Bovespa-30steps & 6.87E-06 & \\textbf{2.82E-06*} & 3.19E-06 & 1.00000 \\\\[1.2pt]\n Djones-1step & 0.56357 & 0.55020 & \\textbf{0.54422} & 1.00000 \\\\[1.2pt]\n Djones-5steps & 0.09810 & \\textbf{0.08239*} & 0.08698 & 1.00000 \\\\[1.2pt]\n Djones-30steps & 4.32E-05 & \\textbf{2.22E-05*} & 2.65E-05 & 1.00000 \\\\[1.2pt]\n BIST-1step & 0.94794 & 0.95313 & \\textbf{0.92418} & 1.00000 \\\\[1.2pt]\n BIST-5steps & \\textbf{0.48460} & 0.49098 & 0.49279 & 1.00000 \\\\[1.2pt]\n BIST-30steps & \\textbf{0.05478} & 0.05980 & 0.05671 & 1.00000 \\\\[1.2pt]\n \\bottomrule\n \\end{tabular}\n \n\\end{table}\n\n\\subsection{Summary of real-world data analysis}\\label{ssec:summaryofrealdataanalysis}\n\\noindent After performing extensive real-world data analysis, we can conclude that NoVaS-type methods generally perform better than the GARCH-direct method. In particular, the long-term predictions of the GARCH-direct method are sometimes impaired by accumulated errors; applying NoVaS-type methods avoids this issue. In addition to this encouraging result, the two new NoVaS methods proposed in this article both outperform the existing GE-NoVaS method, especially for analyzing short and volatile data. The satisfactory performance of NoVaS-type methods in predicting Bitcoin data may also open up the application of NoVaS-type methods to forecasting cryptocurrency data. \n\n\n\\section{Comparison of predictive accuracy}\\label{sec:comparisonofpredictive}\nAs illustrated in \\cref{sec:intro}, accurate and robust volatility forecasting is an important focus for econometricians. Typically, the volatility of returns can be characterized by GARCH-type models. 
Then, after the Model-free Prediction Principle was proposed, a more accurate NoVaS method was developed to predict volatility. This paper further improves the existing NoVaS method by proposing a new transformation structure in \\cref{sec:method}. After performing extensive POOS predictions on different classes of data, we find that our new methods achieve better prediction performance than the traditional GARCH(1,1) model and the existing GE-NoVaS method. The most successful method is the GA-NoVaS-without-$\\beta$ method. \n\nHowever, one may still suspect that the victory of our new methods is merely an artifact of the specific samples used, even though the new methods show lower prediction errors (calculated by \\cref{eq:4.1}) for almost all cases. Therefore, we want to determine whether this victory is statistically\nsignificant. We note that \\cite{wu2021boosting} applied CW-tests to show that the $\\beta$-removing idea is appropriate for refining the GE-NoVaS method. Likewise, we are curious whether this refinement is again reasonable for deriving the GA-NoVaS-without-$\\beta$ method from the GA-NoVaS method. In this paper, we focus on the CW-test proposed by \\cite{clark2007approximately}\\footnote{See \\cite{clark2007approximately} for the theoretical details of this test; explaining them is beyond the scope of this paper.}, which applies an adjusted Mean Squared Prediction Error (MSPE) statistic to test whether a parsimonious null model and a larger model have equal predictive accuracy; see \\cite{dangl2012predictive,kong2011predicting,dai2021predicting} for examples of applying this CW-test.\n\n\n\n\\subsection{CW-test}\nNote that the GA-NoVaS-without-$\\beta$ method is parsimonious compared with the GA-NoVaS method. The reason for removing the $\\beta$ term has been illustrated in \\cref{ssecmotivation}. Here, we want to deploy the CW-test to make sure that the $\\beta$-removing idea is not only empirically advantageous but also statistically reasonable. 
We take several results from \\cref{sec:real data} to run CW-tests. However, it is tricky to apply the CW-test to comparing 5-steps and 30-steps aggregated predictions: the test result for aggregated predictions is ambiguous, since it is hard to interpret the meaning of a significantly small $p$-value. Does it mean that a method outperforms the other at every single-step horizon, or only that it achieves better performance at some specific future steps? Therefore, we consider only the 1-step ahead prediction horizon; the CW-test results are tabulated in \\cref{7t1}. \n\nFrom \\cref{7t1}, under a one-sided 5$\%$ significance level, only 1 case out of 28 rejects the null hypothesis. Moreover, the CW-test still accepts the null hypothesis for 2018-MSFT and the volatile period of MCD, even though the GA-NoVaS method has a better performance value in these cases. In addition, the GA-NoVaS-without-$\\beta$ method is more computationally efficient than the GA-NoVaS method. In summary, the reasonableness of removing the $\\beta$ term is confirmed again by comparing the GA-NoVaS and GA-NoVaS-without-$\\beta$ methods. 
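For concreteness, the adjusted-MSPE statistic underlying these CW-tests can be sketched as follows. This is a simplified illustration that uses the sample-mean $t$-statistic with a normal approximation for the one-sided $p$-value; variable names are illustrative, and this is not the exact implementation used to produce \cref{7t1}:

```python
import math

def clark_west_test(y, pred_null, pred_large):
    """One-sided Clark--West test of equal MSPE between a parsimonious
    null model and a larger model.  Returns (t_stat, p_value); a small
    p-value rejects equal predictive accuracy in favour of the larger
    model.  A normal approximation is used for the p-value."""
    n = len(y)
    # adjusted loss differential f_t of Clark & West (2007)
    f = [(y[i] - pred_null[i]) ** 2
         - ((y[i] - pred_large[i]) ** 2 - (pred_null[i] - pred_large[i]) ** 2)
         for i in range(n)]
    f_bar = sum(f) / n
    var = sum((fi - f_bar) ** 2 for fi in f) / (n - 1)
    t_stat = f_bar / math.sqrt(var / n)
    # one-sided p-value from the standard normal CDF
    p_value = 1.0 - 0.5 * (1.0 + math.erf(t_stat / math.sqrt(2.0)))
    return t_stat, p_value
```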
\n\n\\begin{table}[H]\n\\centering\n \\caption{CW-tests on 1-step ahead prediction of GA-NoVaS and GA-NoVaS-without-$\\beta$ methods}\n \\label{7t1}\n \\scriptsize \n\\begin{tabular}{lccc}\n \\toprule \n & \\thead{\\scriptsize P-value} & \\thead{\\scriptsize GA-NoVaS\\\\ \\scriptsize Performance} & \\thead{\\scriptsize GA-NoVaS-without-$\\beta$ \\\\ \\scriptsize Performance} \\\\\n \\midrule\n 2018-AAPL-1step & 0.99 & 0.92 & 0.89 \\\\ \n 2019-AAPL-1step & 0.08 & 0.81 & 0.82 \\\\ \n 2018-BAC-1step & 0.63 & 0.94 & 0.93 \\\\ \n 2019-BAC-1step & 0.49 & 1.05 & 0.99 \\\\ \n 2018-TSLA-1step & 0.27 & 0.92 & 0.86 \\\\ \n 2019-TSLA-1step & 0.22 & 1.02 & 0.99 \\\\\n 2018-MCD-1step & 0.57 & 0.98 & 0.94 \\\\ \n 2019-MCD-1step & 0.19 & 0.96 & 0.95 \\\\\n 2018-MSFT-1step & 0.17 & 0.91 & 0.96 \\\\\n 2019-MSFT-1step & 0.47 & 1.00 & 0.95 \\\\ \n 2018-Djones-1step & 0.64 & 0.91 & 0.91 \\\\ \n 2019-Djones-1step & 0.27 & 0.96 & 0.97 \\\\ \n 2018-Nasdaq-1step & 0.51 & 0.92 & 0.92 \\\\ \n 2019-Nasdaq-1step & 0.48 & 0.99 & 0.94 \\\\ \n 2018-NYSE-1step & 0.31 & 0.93 & 0.97 \\\\ \n 2019-NYSE-1step & 0.11 & 0.91 & 0.92 \\\\ \n 2018-SP500-1step & 0.42 & 0.91 & 0.88 \\\\ \n 2019-SP500-1step & 0.32 & 0.97 & 0.93 \\\\ \n 11.2019$\\sim$10.2020-IBM-1step & 0.26 & 0.80 & 0.80 \\\\ \n 11.2019$\\sim$10.2020-KO-1step & 0.01 & 0.75 & 0.77 \\\\ \n 11.2019$\\sim$10.2020-MCD-1step & 0.14 & 0.51 & 0.56 \\\\ \n 11.2019$\\sim$10.2020-SBUX-1step & 0.18 & 0.67 & 0.67 \\\\ \n 11.2019$\\sim$10.2020-CADJPY-1step & 0.07 & 0.46 & 0.48 \\\\ \n 11.2019$\\sim$10.2020-CNYJPY-1step & 0.66 & 0.78 & 0.75 \\\\ \n 11.2019$\\sim$10.2020-USDCNY-1step & 0.36 & 0.98 & 0.96 \\\\ \n 11.2019$\\sim$10.2020-EURJP-1step & 0.19 & 0.95 & 0.95 \\\\ \n 11.2019$\\sim$10.2020-Djones-1step & 0.30 & 0.56 & 0.55 \\\\ \n 11.2019$\\sim$10.2020-SP500-1step & 0.25 & 0.59 & 0.58 \\\\ \n\n \\bottomrule\n \\end{tabular}\\\\\n \\tiny\n \\raggedright\n \\textit{Note:} The null hypothesis of the CW-test is that parsimonious and larger models have equal 
MSPE. The alternative is that the larger model has a smaller MSPE. The performances of the GA-NoVaS and GA-NoVaS-without-$\\beta$ methods are calculated as in \\cref{sec:real data}, i.e., as relative values with respect to the benchmark method (the GARCH-direct method).\n\\end{table}\n\n\\section{Conclusion}\\label{sec:conclusion}\n\\noindent In this paper, we show that the current state-of-the-art GE-NoVaS method and our proposed new methods can avoid the error-accumulation problem even when long-step-ahead predictions are required. These methods outperform the GARCH(1,1) model in predicting both simulated and real-world data under different forecasting horizons. Moreover, the newly proposed GA-NoVaS method is a more stable structure for handling volatile and short data than the GE-NoVaS method. It can also bring significant improvement when long-term prediction is desired. Additionally, although we reveal that the parsimonious variants of GA-NoVaS and GE-NoVaS indeed possess the same structure, the GA-NoVaS-without-$\\beta$ method is still more favorable since the corresponding region of model parameters is more complete by design. In summary, the approach of building the NoVaS transformation through the GARCH(1,1) model is sensible and results in superior GA-NoVaS-type methods.\n\nIn the future, we plan to explore the NoVaS method in different directions. Our new methods corroborate the benefit of refining the transformation structure and open up avenues for exploring other specific transformation structures. In financial markets, stock data move together, so it would be exciting to see whether one can do Model-free predictions in a multiple time series scenario. In some areas, integer-valued time series have important applications; thus, adapting such Model-free predictions to deal with count data is also desirable. There is also much scope for proving the statistical validity of such predictions. 
First, we hope that a rigorous and systematic way to compare the predictive accuracy of NoVaS-type and standard GARCH methods can be built. From a statistical inference point of view, one can also construct prediction intervals for these predictions using the bootstrap. Such prediction intervals are much sought after in the econometrics literature, and some results on their asymptotic validity can be proved. We can also explore dividing the dataset into training and test sets in some optimal way and see whether that can improve the performance of these methods. Additionally, since determining the transformation function involves the optimization of unknown coefficients, designing a more efficient and precise algorithm may be a further direction for improving NoVaS-type methods. \n\\section{Acknowledgement}\\label{sec:ackno}\nThe first author is thankful to Professor Politis for an introduction to the topic and useful discussions. The second author's research is partially supported by NSF-DMS 2124222.\n\n\\section{Data Availability Statement}\\label{sec:dataav}\nWe collected all data presented here manually from \\url{www.investing.com}. Then, we transformed the closing price data into financial log-returns based on \\cref{Eq:4.1}.\n\n\n\\bibliographystyle{spbasic}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
\nThe problem of determining which lists of integers can arise \nas the Chern numbers of a compact almost complex manifold $(\\mathsf{M},\\mathsf{J})$ of a given dimension (also known as the \\emph{geography} problem) has been investigated in different settings. Without additional assumptions on $(\\mathsf{M},\\mathsf{J})$, a theorem of Milnor \\cite{H} implies that it is necessary and sufficient for these integers to satisfy a certain set of congruences depending on $n$ (the same is true if $\\mathsf{M}$ is connected and $n\\geq 2$, see \\cite{Ge}). \nHowever, if the manifold is endowed with a $\\mathsf{J}$-preserving circle action, further restrictions arise and the geography problem is, in its generality, still open. When the fixed point set is empty (or more generally when all the stabilisers are discrete), as a consequence of the Atiyah--Bott--Berline--Vergne localization formula (hereinafter the ABBV formula, see Thm.\\ \\ref{abbv formula}) all the Chern numbers must vanish. \nIn this note we are interested\nin the case in which the fixed point set is non-empty and discrete (for recent results concerning non-isolated fixed points\nsee \\cite{Ku}). \n\n\\emph{Henceforth, the triple $(\\mathsf{M},\\mathsf{J},S^1)$ will denote a compact, connected, almost complex manifold acted on by a circle $S^1$ that preserves $\\mathsf{J}$, with nonempty, discrete fixed point set $\\mathsf{M}^{S^1}$, and will be referred to as an} {\\bf $S^1$-space}. \n\nThe Chern numbers of $S^1$-spaces satisfy more restrictions. 
\nFor instance, as a consequence of the ABBV formula, it is easy to see that \n$\\mathsf{c}_n[\\mathsf{M}] = \\chi(\\mathsf{M}) = |\\mathsf{M}^{S^1}|$,\nthus implying that $\\mathsf{c}_n[\\mathsf{M}]>0$, which need not hold for an arbitrary almost complex manifold $(\\mathsf{M},\\mathsf{J})$.\nIn 1979 Kosniowski \\cite{Ko} conjectured that the number of fixed points, and hence $\\mathsf{c}_n[\\mathsf{M}]$, grows linearly with $n$; more precisely\nhe predicted that $\\mathsf{c}_n[\\mathsf{M}] \\geq \\left \\lceil{\\frac{n}{2}}\\right \\rceil$. Although much progress has been made towards proving Kosniowski's conjecture (see \\cite{Ha}, and more recently \\cite{LL,LT,PT,CKP,GPS,J}), a complete answer is still missing. This\n shows that the geography problem for an $S^1$-space $(\\M,\\J,S^1)$ is much harder, and\nthe following questions naturally arise:\n\\begin{question}\\label{conj 1}\nWhat are all the possible values of the Chern numbers of $(\\M,\\J,S^1)$? Are there other (combinations of) Chern numbers satisfying (in)equalities depending on $n$?\n\\end{question}\nThe first goal of this note is to show that the Chern numbers of $(\\M,\\J,S^1)$ satisfy equations that depend on two integers,\nthe \\emph{index} $\\k0$ of $(\\mathsf{M},\\mathsf{J})$, and an integer $N_0$ defined by the action, see below. The second is to apply these results to symplectic manifolds supporting symplectic circle actions with discrete fixed point set, \nshowing `rigidity' results for the Chern numbers, and deriving topological conditions which ensure the manifold can only support Hamiltonian or only non-Hamiltonian actions, see Section \\ref{atsm}.\n\n Let $\\mathsf{c}_1\\in H^2(\\mathsf{M};{\\mathbb{Z}})$ be the first Chern class of the tangent bundle. The \\emph{index} $\\k0$ of $(\\mathsf{M},\\mathsf{J})$ is defined to be the largest integer such that, modulo torsion, $\\mathsf{c}_1=\\k0\\,\\eta_0$ for some non-zero element $\\eta_0\\in H^2(\\mathsf{M};{\\mathbb{Z}})$. 
In other words, $\\k0=0$ if $\\mathsf{c}_1$ is torsion, and is otherwise the biggest integer\n such that $\\mathsf{c}_1\/\\k0\\in H^2(\\mathsf{M};{\\mathbb{Z}})$, modulo torsion elements.\nWhen $\\mathsf{M}$ is simply connected and symplectic, the index coincides with the \\emph{minimal Chern number} (see Remark \\ref{mcn}).\nNote that $\\mathsf{M}$ is simply connected if it is endowed with a Hamiltonian circle action with isolated fixed points, see \\cite{Li2}.\nThe other integer $N_0$ depends on the action, and is defined as \nthe \\emph{number of fixed points with $0$ negative weights} (see Section \\ref{background} \\eqref{weights def}). \n\nWhen $\\k0=0$, namely when $\\mathsf{c}_1$ is a torsion element, all the Chern numbers involving the first Chern class, as well as the Todd genus (see Lemma \\ref{c1 N0} (a2)), must vanish.\n\n In this paper we are interested in\nanalysing what happens when $\\k0>0$, and a careful analysis is carried out when $\\k0\\geq n-2$. \nWhen $\\mathsf{c}_1$ is not torsion, the aforementioned equations among the Chern numbers of $(\\M,\\J,S^1)$ are derived by analysing the zeros and the symmetries of the \n\\emph{Hilbert polynomial} of $(\\mathsf{M},\\mathsf{J})$, which is defined as follows.\nLet $\\mathbb{L}_0\\to \\mathsf{M}$ be a line bundle whose first Chern class $\\mathsf{c}_1(\\mathbb{L}_0)$ is $\\eta_0=\\frac{\\mathsf{c}_1}{\\k0}$. 
\nThen the Hilbert polynomial $\\Hi(z)$ is the polynomial in $\\mathbb{R}[z]$ that, at integer values $k\\in {\\mathbb{Z}}$, gives the topological index of the bundle $\\mathbb{L}_0^k$, the $k$-tensor power of\n$\\mathbb{L}_0$ (note that $\\eta_0$ is only defined up to torsion, however $\\Hi(z)$ does not depend on this choice, see Sect.\\ \\ref{equations chern}).\nBy the Atiyah-Singer formula, for every $k\\in {\\mathbb{Z}}$, the integer $\\Hi(k)$ can be expressed in terms of Chern numbers of $(\\M,\\J,S^1)$:\n\\begin{equation}\\label{H and c}\n\\Hi(k)=\\left( \\sum_{h=0}^n \\frac{(k \\,\\eta_0)^h}{h!}\\right)\\ttot[\\mathsf{M}]\n= \\left( \\sum_{h=0}^n \\frac{(k \\,\\mathsf{c}_1)^h}{\\k0^h\\,h!}\\right)\\left( 1+\\frac{\\mathsf{c}_1}{2}+ \\frac{\\mathsf{c}_1^2+\\mathsf{c}_2}{12}+\\cdots\\right)[\\mathsf{M}]\n\\end{equation}\nwhere $\\ttot=\\sum_{j\\geq 0}T_j= 1+\\frac{\\mathsf{c}_1}{2}+ \\frac{\\mathsf{c}_1^2+\\mathsf{c}_2}{12}+\\cdots$ is the total Todd class of $\\mathsf{M}$, and $T_j\\in H^{2j}(\\mathsf{M};{\\mathbb{Z}})$ the Todd polynomials of $(\\mathsf{M},\\mathsf{J})$, for $j=0,\\ldots,n$, namely the polynomials in the Chern classes of $(\\mathsf{M},\\mathsf{J})$ belonging to the power series $\\frac{x}{1-e^{-x}}$. 
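For example, if $\mathsf{M}={\mathbb{C}} P^n$ and $\eta_0$ is the hyperplane class, the Hilbert polynomial is the familiar $\Hi(k)=\binom{k+n}{n}$ and the index is $\k0=n+1$. A small numerical sketch (purely illustrative, not part of any proof) checking that this polynomial vanishes at $-1,\ldots,-\k0+1$ and obeys the reciprocity law $\Hi(z)=(-1)^n \Hi(-\k0-z)$ discussed below:

```python
from fractions import Fraction
from math import factorial

def hilbert_cpn(n, z):
    """Hilbert polynomial of CP^n w.r.t. O(1):
    H(z) = (z+1)(z+2)...(z+n) / n!, so H(k) = C(k+n, n) for k >= 0."""
    num = 1
    for j in range(1, n + 1):
        num *= z + j
    return Fraction(num, factorial(n))

n = 4
k0 = n + 1                       # index of CP^n
# vanishing at z = -1, ..., -k0 + 1
assert all(hilbert_cpn(n, -m) == 0 for m in range(1, k0))
# reciprocity law H(z) = (-1)^n H(-k0 - z)
assert all(hilbert_cpn(n, z) == (-1) ** n * hilbert_cpn(n, -k0 - z)
           for z in range(-10, 11))
```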
Note that in particular $\\Hi(0)=T_n[\\mathsf{M}]$, the Todd genus of $\\mathsf{M}$, which in turn is equal to $N_0$ (Proposition \\ref{properties P} (1)).\n\nUsing equivariant extensions of $\\mathbb{L}_0^k$ and localization in equivariant $K$-theory (the Atiyah-Segal formula \\eqref{AS formula}),\nit is proved that changing the orientation on $S^1$ implies the following\n `\\emph{reciprocity law}' for $\\Hi(z)$ (Propositions \\ref{symmetries} and \\ref{properties P} (2)):\n\\begin{equation}\\label{reciprocity}\n\\Hi(z)=(-1)^n \\Hi(-\\k0-z)\\,.\n\\end{equation}\nThis generalises, in the sense described in Sect.\\ \\ref{connections ehrhart}, a reciprocity law known for the Ehrhart polynomial of a reflexive polytope due to Hibi \\cite{Hibi}.\n\nThe next theorem is the key result of Section \\ref{equations chern}:\n\\begin{theorem}\\label{main theorem}\nLet $(\\mathsf{M},\\mathsf{J}, S^1)$ be an $S^1$-space. \nAssume that the index $\\k0$ of $(\\mathsf{M},\\mathsf{J})$ is greater or equal to $2$. Let $\\Hi(z)$ be the associated Hilbert polynomial and $\\deg(\\Hi)$ its degree.\nThen \\\\\n\\begin{align}\\label{H=0 even}\n & \\Hi(-1)=\\Hi(-2)=\\cdots = \\Hi(-\\k0+1)=0\\,.\n \\end{align}\n $\\;$\\\\\nMoreover, if $\\Hi(z)\\not\\equiv 0$, then \n\\begin{equation}\\label{bound k0}\n \\k0\\leq \\deg(\\Hi)+1\\leq n+1\\,.\n\\end{equation}\n\\end{theorem}\nEquations \\eqref{H and c} and \\eqref{H=0 even} suggest that studying the Chern numbers of $(\\M,\\J,S^1)$ for large values of $\\k0$ is easier.\n\nIn Sect.\\ \\ref{sec: generating fct} it is proved that, as a consequence of \\eqref{reciprocity} and \\eqref{H=0 even},\n\\emph{the number of conditions that determine the coefficients of $\\Hi(z)$ is the same for $\\k0=n+1-2k$ and $\\k0=n-2k$, for every $k\\in {\\mathbb{Z}}$ such that $0\\leq k \\leq \\frac{n-1}{2}$} (see Remark \\ref{num of cds}). 
\nThis follows from the fact that the generating function of $\\Hi(z)$ is\n a rational function of the form $\\Gen(t)=\\mathrm{U}(t)\/(1-t)^{\\deg(\\Hi)+1}$, where $\\mathrm{U}(t)$ is a polynomial which ---up to a power of $t$---\n is \\emph{self-reciprocal} or \\emph{palindromic} (see Proposition \\ref{gen fct hilbert} and Corollary \\ref{U palindrom}). \n\nUsing the results above we prove that, for $\\k0\\in \\{n,\\,n+1\\}$, $\\Hi(z)$ is completely determined by $N_0$; more precisely we prove that $\\Hi(z)=N_0 \\Hi_{\\overline{M}}(z)$, the manifold $\\overline{M}$ being ${\\mathbb{C}} P^n$ for $\\k0=n+1$ and the hyperquadric $Q_n$ in ${\\mathbb{C}} P^{n+1}$ for $\\k0=n$. This gives equations for the combinations of Chern numbers $\\mathsf{c}_1^h\\, T_{n-h}[\\mathsf{M}]$ in terms of $n$, for every $h=0,\\ldots,n$, and in particular the values of $\\mathsf{c}_1^n[\\mathsf{M}]$ and $\\mathsf{c}_1^{n-2}\\mathsf{c}_2[\\mathsf{M}]$ (see Propositions \\ref{cor n+1} and \\ref{cor n}). \n \nWhen $\\k0=n-1$ (and $n\\geq 2$) or $\\k0=n-2$ (and $n\\geq 3$), $\\Hi(z)$ and the combinations of Chern numbers $\\mathsf{c}_1^h\\,T_{n-h}[\\mathsf{M}]$, for $h=0,\\ldots,n$, depend on a parameter. We compute explicitly their expressions in terms of this parameter (Propositions \\ref{k0=n-1} and \\ref{k0=n-2}) and determine a linear equation in $\\mathsf{c}_1^n[\\mathsf{M}]$ and $\\mathsf{c}_1^{n-2}\\mathsf{c}_2[\\mathsf{M}]$ which depends on $n$ and $N_0$: this is the content of Corollary \\ref{relation c122} and Corollary \\ref{relation c122 2}. \n\nInter alia, we study the position of the roots of $\\Hi(z)$ for $\\k0\\geq n-2$ and $\\k0\\neq 0$, making\nconnections with the work of Rodriguez-Villegas \\cite{RV} and Golyshev \\cite{Go}.\n \nFinally, in Section \\ref{examples} we investigate how in low dimensions the Chern numbers of $(\\mathsf{M},\\mathsf{J},S^1)$ depend on the integers $N_j$, for $j=0,\\ldots,n$, defined as the number\nof fixed points with $j$ negative weights. 
\nFor instance, we prove that for $\\k0=n$ or $n+1$, and $n\\leq 4$, all the Chern numbers of $(\\mathsf{M},\\mathsf{J},S^1)$ can be expressed as linear combinations of the $N_j$'s,\nand when $n=2$ having $\\k0=2$ or $3$ implies relations among the $N_j$'s. \n\n\\medskip\n\n\\subsection{Applications to symplectic manifolds}\\label{atsm} \nIn order to apply the results we obtained for almost complex manifolds to symplectic manifolds,\nlet $\\mathsf{J}\\colon T\\mathsf{M} \\to T\\mathsf{M}$ be an almost complex structure compatible with $\\omega$, namely $\\omega(\\cdot, \\mathsf{J} \\cdot)$ is a Riemannian metric. Since the set of such structures\nis contractible, we can define complex invariants of $T\\mathsf{M}$, namely Chern classes and Chern numbers. \n\nLet $(\\mathsf{M},\\omega)$ be a compact, connected symplectic manifold endowed with a symplectic circle action with isolated fixed points. \\emph{Such a space is henceforth denoted by $(\\mathsf{M},\\omega,S^1)$}. \nIt follows that the $1$-form $\\iota_{\\xi^\\#}\\omega$ is closed; here $\\xi^\\#$ denotes the vector field generated by the circle action. \nIf the $1$-form $\\iota_{\\xi^\\#}\\omega$ is \\emph{exact} the action is said to be \\emph{Hamiltonian}, otherwise we call it \\emph{non-Hamiltonian}.\nIn the first case, if $\\psi\\colon \\mathsf{M}\\to \\mathbb{R}$ is a function satisfying\n$\n\\iota_{\\xi^\\#}\\omega=-d\\psi\\,,\n$\nthen $\\psi$ is called a \\emph{moment map} for the $S^1$-action. 
\n\nThe first consequence of Theorem \\ref{main theorem} in the symplectic category follows from the fact that,\nif the action is Hamiltonian, $\\Hi(z)$ can never be\nidentically zero (see Remark \\ref{H Ham}), and the index coincides with the minimal Chern number (see Remark \\ref{mcn}), leading to the following\n\\begin{corollary}\\label{minimal chern ham}\nLet $(\\mathsf{M},\\omega)$ be a compact, connected symplectic manifold of dimension $2n$.\n If $(\\mathsf{M},\\omega)$ supports a Hamiltonian $S^1$-action with isolated fixed points, then its minimal Chern number coincides with the index $\\k0$, and the following inequalities hold $$1\\leq \\k0 \\leq n+1.$$\n\\end{corollary}\n This result can be considered the analogue in the Hamiltonian category of a theorem of Michelsohn \\cite[Cor.\\ 7.17]{Mi}, which asserts that the index of a compact complex manifold admitting\n a K\\\"ahler metric with positive Ricci curvature is at most $n+1$. The same conclusion also holds if $(\\mathsf{M},\\mathsf{J})$ is a compact almost complex\n manifold which can be endowed with a quasi-ample line bundle; this result is due to Hattori \\cite{Ha} and is discussed in Remark \\ref{hattori rmk}.\n If a compact symplectic manifold can be endowed with a non-Hamiltonian circle action with isolated fixed points, then\n there are three possibilities for the index and the Hilbert polynomial (see Corollary \\ref{bound on k0 s} and Remark \\ref{rmk 1}).\n \nThere are plenty of examples of compact symplectic manifolds that can be\nendowed with a Hamiltonian circle action with isolated fixed points. 
Until not so long ago, it was indeed believed that\nevery symplectic circle action with isolated fixed points would be\nHamiltonian.\nThis is sometimes also known as the `McDuff conjecture', and holds\\footnote{For $n=1$, the only compact symplectic surface that can be endowed\nwith a symplectic circle action with isolated fixed points is the sphere, which is simply connected, hence the action is Hamiltonian. \nFor $n=2$ the same conclusion holds by a result of McDuff in \\cite{MD1}.\nIn the same paper the author also proves the existence of a six-dimensional compact symplectic manifold with a non-Hamiltonian action, but the fixed point set is not discrete.}\n for $n=1$ and $2$ \\cite{MD1}, as well as in many other particular cases (see for instance \\cite{Fe,Fr,Go1,Go2,L,Ono,TW,J}).\nIt is only very recently that Tolman announced the following striking result:\n\\begin{thm}[Tolman '15 \\cite{T3}]\\label{tolman 6}\nThere exists a non-Hamiltonian symplectic circle action with exactly 32 fixed points on a closed, connected, six-dimensional symplectic manifold $(\\widetilde{M},\\omega)$.\n\\end{thm}\nThis theorem implies the existence of a non-Hamiltonian symplectic circle action with discrete fixed point set for every $n\\geq 3$: it is sufficient to take products $\\widetilde{M}\\times M$, where $M$ is a compact symplectic manifold endowed with a Hamiltonian circle action with $|M^{S^1}|<\\infty$\n(see also \\cite[Cor.\\ 1.2]{T3}, where $M={\\mathbb{C}} P^{n-3}$). However these products\ngive, so far, the only known examples of symplectic manifolds with non-Hamiltonian circle actions with discrete fixed point set, and the construction of new examples seems far from trivial. \nThus we ask the following `weaker' question:\n\\begin{question}\\label{q 2}\nLet $(\\mathsf{M},\\omega)$ be a compact, connected symplectic manifold. 
Are there topological conditions which imply that $(\\mathsf{M},\\omega)$ can only support a Hamiltonian or only a non-Hamiltonian action?\n\\end{question}\nThe answer we give to Question \\ref{q 2} is in terms of the Chern numbers of $(\\mathsf{M},\\omega,S^1)$.\nIt is already known that if $\\mathsf{c}_1$ is torsion in $H^2(\\mathsf{M};{\\mathbb{Z}})$, the manifold cannot support any Hamiltonian circle action (see \\cite[Prop.\\ 4.3]{GPS}, or also Lemma \\ref{Lemma:c1 not torsion}, and \\cite[Lemma 3.8]{T2}). \nThus the analysis we carry out to answer Question \\ref{q 2} is under the hypothesis that $\\mathsf{c}_1$ is not torsion. \nA result of Feldman \\cite{Fe} asserts that the Todd genus $T_n[\\mathsf{M}]$ of $(\\mathsf{M},\\omega,S^1)$ is either $1$ or $0$, and it is zero precisely if the action is non-Hamiltonian.\nAlthough Feldman's result is very strong and gives an answer to Question \\ref{q 2}, computing the Todd genus in high dimensions is difficult, since $T_n$ becomes a complicated combination of Chern classes. \nIn some sense, our results can be regarded as a refinement of Feldman's, since we prove that\n\\emph{given a compact symplectic manifold $(\\mathsf{M},\\omega)$, if certain combinations of Chern numbers vanish, then $(\\mathsf{M},\\omega)$ cannot support any Hamiltonian circle action with isolated fixed points}. These combinations of Chern numbers depend on $\\k0$, and are easier to compute than the Todd genus if $\\k0$ is big enough (see Corollary \\ref{cor non ham 2}). 
\nFor $\\k0\\geq n-2$, we strengthen the result above by giving the possible values of $\\mathsf{c}_1^n[\\mathsf{M}]$, $\\mathsf{c}_1^{n-2}\\mathsf{c}_2[\\mathsf{M}]$ or a combination of them, these values depending on whether the action is Hamiltonian or not.\nThis is summarized in the following\n \\begin{thm}[{\\bf Hamiltonian vs non-Hamiltonian symplectic $S^1$-actions}]\\label{nHam-char}\n$\\;$\\\\\n\\noindent\nLet $(\\mathsf{M},\\omega)$ be a compact, connected symplectic manifold, and suppose it can be endowed with a symplectic circle action with isolated fixed points.\nLet $\\k0$ be its index. Then:\n\\begin{itemize}\n\\item[(I)] If $\\k0=0$ or $\\k0 > n+1$ the action is non-Hamiltonian and $\\mathsf{c}_1^n[\\mathsf{M}]=\\mathsf{c}_1^{n-2}\\mathsf{c}_2[\\mathsf{M}]=0$.\n\\item[(II)] If $\\k0=n+1$ then $(\\mathsf{c}_1^n[\\mathsf{M}],\\mathsf{c}_1^{n-2}\\mathsf{c}_2[\\mathsf{M}])$ is equal to $\\Big((n+1)^n,\\frac{n(n+1)^{n-1}}{2}\\Big)$ or $(0,0)$. \n\\item[(III)] If $\\k0=n$ then $(\\mathsf{c}_1^n[\\mathsf{M}],\\mathsf{c}_1^{n-2}\\mathsf{c}_2[\\mathsf{M}])$ is equal to $\\Big( 2n^n,n^{n-2}(n^2-n+2)\\Big)$ or $(0,0)$.\n\\end{itemize}\nMoreover, in \\emph{(II)} and \\emph{(III)} the\naction is Hamiltonian if and only if $\\mathsf{c}_1^n[\\mathsf{M}]\\neq 0$ (or equivalently if and only if $\\mathsf{c}_1^{n-2}\\mathsf{c}_2[\\mathsf{M}]\\neq 0$).\n\\begin{itemize}\n\\item[(IV)] If $\\k0=n-1$ and $n\\geq 2$ then \n\\begin{equation}\\label{mm1}\n\\mathsf{c}_1^{n-2}\\mathsf{c}_2[\\mathsf{M}]-\\frac{n(n-3)}{2(n-1)^2}\\mathsf{c}_1^n[\\mathsf{M}]\\quad \\in \\Big\\{0,12 (n-1)^{n-2}\\Big\\}.\n\\end{equation}\n\\item[(V)] If $\\k0=n-2$ and $n\\geq 3$ then \n\\begin{equation}\\label{mm2}\n\\mathsf{c}_1^{n-2}\\mathsf{c}_2[\\mathsf{M}]-\\frac{n-3}{2(n-2)}\\mathsf{c}_1^n[\\mathsf{M}]\\quad \\in \\Big\\{0,24 (n-2)^{n-2}\\Big\\}.\n\\end{equation}\n\\end{itemize}\nMoreover, in \\emph{(IV)} (resp.\\ \\emph{(V)}) the action is Hamiltonian if and only if the combination of Chern 
numbers in \\eqref{mm1} (resp.\\ \\eqref{mm2}) does not vanish.\n\\end{thm}\n\\begin{rmk}\n\\begin{itemize}\n\\item[(1)] This theorem implies that for $\\k0\\geq n-2$ the Chern numbers $\\mathsf{c}_1^n[\\mathsf{M}]$ and $\\mathsf{c}_1^{n-2}\\mathsf{c}_2[\\mathsf{M}]$ of $(\\mathsf{M},\\omega,S^1)$ are very \\emph{rigid}. Hence it gives necessary conditions for a compact, connected symplectic manifold $(\\mathsf{M},\\omega)$ with $\\k0> \\max\\{n-3,0\\}$ to support a symplectic circle action with isolated fixed points. \n\\item[(2)] Given a compact, connected symplectic manifold $(\\mathsf{M},\\omega)$ of dimension $2n$, Theorem \\ref{nHam-char} implies that\nif the index satisfies $\\k0\\geq n$ and $\\mathsf{c}_1^n[\\mathsf{M}]$ or $\\mathsf{c}_1^{n-2}\\mathsf{c}_2[\\mathsf{M}]$ vanish, then $(\\mathsf{M},\\omega)$ cannot be endowed with \\emph{any} Hamiltonian circle\naction with isolated fixed points. A similar conclusion holds for $\\k0\\in \\{n-2,n-1\\}$, by considering the combinations of Chern numbers in \\eqref{mm1} and \\eqref{mm2}.\n\\item[(3)] The above results are stated in terms of the Chern numbers $ \\mathsf{c}_1^n[\\mathsf{M}]$ and $ \\mathsf{c}_1^{n-2}\\mathsf{c}_2[\\mathsf{M}]$; however similar conclusions \ncan be obtained for $\\mathsf{c}_1^h\\,T_{n-h}[\\mathsf{M}]$, for $h=0,\\ldots,n$ (see Remark \\ref{other comb}).\n\\end{itemize}\n\\end{rmk}\nFinally, in Section \\ref{examples} we analyse the geography problem for $(\\mathsf{M},\\omega,S^1)$ when $n\\leq 4$.\nOne of the goals is to find,\nin the Hamiltonian case,\nformulas for the Chern numbers in terms of $\\k0$ and the Betti numbers of $\\mathsf{M}$. 
\nFor instance, the geography problem for $n=2$ can be completely solved (Corollary \\ref{geo s}), and for $n=3,4$\nwe solve it for every $\\k0\\geq n$ (Propositions \\ref{dim 6} and \\ref{dim 8}).\nAs a byproduct of the investigation in dimension $8$, we prove that if a compact, connected symplectic manifold\nof dimension $8$ supports a Hamiltonian $S^1$-action with\nisolated fixed points, and if the minimal Chern number is even, then $\\mathsf{c}_2^2[\\mathsf{M}]+2\\, b_2(\\mathsf{M})=98 +b_4(\\mathsf{M})$ (Corollary \\ref{c228h}). \n\n\n\\vspace{.5cm} \n\\textbf{Acknowledgements.}\\\nFirst of all, I would like to thank Leonor Godinho for many fruitful conversations during the time I spent at Instituto Superior T\\'ecnico, for inspiring this work and reading previous drafts. I would also like to thank Hansj\\\"org Geiges for useful discussions, and in particular for suggesting Remark \\ref{mcn}. Frederik von Heymann explained to me many useful facts about reflexive polytopes, and strongly inspired Section \n\\ref{connections ehrhart}.\n\nAlthough I have never met him, I would like to dedicate this work to the memory of Akio Hattori who, through his articles, taught me so much. 
\n\n\\section{Background and preliminary results}\label{background}\nThe main purpose of this section is to recall background material, set up notation and state preliminary results needed in the forthcoming sections.\n\nLet $(\\mathsf{M},\\mathsf{J})$ be a compact, connected almost complex manifold of dimension $2n$.\nThus $\\mathsf{J}\\colon T\\mathsf{M} \\to T\\mathsf{M}$ is a complex structure on the tangent bundle of $\\mathsf{M}$, and \nfor such a manifold we consider the Chern classes of the tangent bundle, denoted by $\\mathsf{c}_j\\in H^{2j}(\\mathsf{M};{\\mathbb{Z}})$\\footnote{To avoid confusion, if in the same paragraph we also deal with\nChern classes of other bundles, we\nwill denote the Chern \nclasses of the tangent bundle by $\\mathsf{c}_j(\\mathsf{M})$.},\nas well as the Chern numbers \n$ \\mathsf{c}_{j_1}\\cdots \\mathsf{c}_{j_l}[\\mathsf{M}]\\in {\\mathbb{Z}}$, for every partition $(j_1,\\ldots,j_l)$ of $n$, i.e.\\ $j_1+\\cdots+j_l=n$ and $j_m\\in \\mathbb{N}$ for $m=1,\\ldots,l$.\n\nMoreover, assume that $(\\M,\\J,S^1)$ is an $S^1$-space, i.e.\\ $(\\mathsf{M},\\mathsf{J})$ is endowed with a $\\mathsf{J}$-preserving $S^1$-action with nonempty and discrete fixed point set $\\mathsf{M}^{S^1}=\\{p_0,\\ldots,p_N\\}$, for some $N\\in {\\mathbb{Z}}_{>0}$.\n\n\nFor every $p_i\\in \\mathsf{M}^{S^1}$ we denote by $w_{i,1},\\ldots,w_{i,n}$ the \\emph{weights} of the (isotropy) action of $S^1$ at $p_i$, i.e.\\\nthe $S^1$-representation induced on $T_{p_i}\\mathsf{M}$ is given by\n\\begin{equation}\\label{weights def}\n\\alpha\\cdot(z_1,\\ldots,z_n)=(\\alpha^{w_{i,1}}z_1,\\ldots,\\alpha^{w_{i,n}}z_n)\\;\\quad\\mbox{for every}\\quad \\alpha\\in S^1,\n\\end{equation}\nfor a suitable choice of complex coordinates $(z_1,\\ldots,z_n)$ on $T_{p_i}\\mathsf{M}\\simeq {\\mathbb{C}}^n$.\n
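\nFor instance, for the rotation action of $S^1$ on ${\\mathbb{C}} P^1$ given by $\\alpha\\cdot[z_0:z_1]=[z_0:\\alpha z_1]$, the fixed points are $[1:0]$ and $[0:1]$, and in the standard affine coordinates around them the action reads\n\\begin{equation*}\nz\\mapsto \\alpha z\\quad\\mbox{and}\\quad z\\mapsto \\alpha^{-1}z\\,,\n\\end{equation*}\nso the corresponding weights are $1$ and $-1$ respectively.\n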
We also denote by $W_i$ the (multi)set of weights at $p_i$, i.e.\\;$W_i=\\{w_{i,1},\\ldots,w_{i,n}\\}$.\nNote that $w_{i,j}$ is nonzero for every $i=0,\\ldots,N$ and $j=1,\\ldots,n$: a zero weight would yield a fixed submanifold of positive dimension through $p_i$, contradicting the fact that\n$\\mathsf{M}^{S^1}$ is discrete.\nFinally, we denote by $\\lambda_i$ the number of negative weights at $p_i\\in \\mathsf{M}^{S^1}$ and by $N_j$ the number of fixed points with exactly $j$ negative weights, for every $j=0,\\ldots,n$. From \\cite[Proposition 2.6]{Ha} we have that\n\\begin{equation}\\label{NiN}\nN_j=N_{n-j}\\quad \\mbox{for every}\\quad j=0,\\ldots,n\\,. \n\\end{equation}\n\n\nLet $K(\\mathsf{M})$ (resp.\\;$K_{S^1}(\\mathsf{M})$) be the ordinary (resp.\\;$S^1$-equivariant) $K$-theory ring of $\\mathsf{M}$, i.e.\\;the abelian group associated to the\nsemigroup of isomorphism classes of complex vector bundles (resp.\\;complex $S^1$-vector bundles) over $\\mathsf{M}$,\nendowed with the direct sum $\\oplus$ and tensor product $\\otimes$ operations.\nThus in particular\n$K(\\{pt\\})\\simeq {\\mathbb{Z}}$ and $K_{S^1}(\\{pt\\}) \\simeq R(S^1),$\nthe character ring of $S^1$. Henceforth, we identify the latter with the Laurent\npolynomial ring ${\\mathbb{Z}}[t,t^{-1}]$, where $t$ denotes the standard $S^1$-representation. \n\nLet $H_{S^1}^*(\\mathsf{M};{\\mathbb{Z}})$ be the $S^1$-equivariant cohomology of $\\mathsf{M}$ with ${\\mathbb{Z}}$ coefficients; we recall that this is defined to be the\nordinary cohomology of the Borel model, i.e.\\;$\nH_{S^1}^*(\\mathsf{M};{\\mathbb{Z}}):=H^*(\\mathsf{M}\\times_{S^1}S^{\\infty};{\\mathbb{Z}})\\,,\n$ \nwhere $S^{\\infty}$ is the unit sphere in ${\\mathbb{C}}^{\\infty}$.\n
Thus in particular $H_{S^1}^*(\\{pt\\};{\\mathbb{Z}})={\\mathbb{Z}}[x]$, where $x$ has degree $2$.\n\nFinally, let $\\pic(\\mathsf{M})$ (resp.\\;$\\pic_{S^1}(\\mathsf{M})$) be the Picard group of isomorphism classes of complex line bundles (resp.\\;equivariant complex line bundles) over $\\mathsf{M}$.\n\nIn the rest of the section, $\\mathcal{H}(\\cdot)$ (resp.\\;$\\mathcal{H}_{S^1}(\\cdot)$) will either denote the cohomology (resp.\\;equivariant cohomology) ring \nwith ${\\mathbb{Z}}$ coefficients, the $K$-theory (resp.\\;equivariant $K$-theory) ring, or the Picard (resp.\\;equivariant Picard) group.\n\nFor $p\\in \\mathsf{M}^{S^1}$ let $i_p\\colon \\{p\\}\\hookrightarrow \\mathsf{M}$ and $i\\colon \\mathsf{M}^{S^1}\\hookrightarrow \\mathsf{M}$ denote the natural inclusions; since they are equivariant we have the following induced maps:\n$$\ni_p^*\\colon \\mathcal{H}_{S^1}(\\mathsf{M})\\to \\mathcal{H}_{S^1}(\\{p\\})\n$$\nand\n\\begin{equation}\\label{istar}\n i^*=\\bigoplus_{p\\in \\mathsf{M}^{S^1}}i_p^*\\colon \\mathcal{H}_{S^1}(\\mathsf{M})\\to \\mathcal{H}_{S^1}(\\mathsf{M}^{S^1})=\\bigoplus_{p\\in \\mathsf{M}^{S^1}}\\mathcal{H}_{S^1}(\\{p\\})\\;.\n\\end{equation}\nWe denote $i_p^*(K)$ simply by $K(p)$, for every $p\\in \\mathsf{M}^{S^1}$ and $K\\in \\mathcal{H}_{S^1}(\\mathsf{M})$.\n\nObserve that the unique map\n$ \\mathsf{M}\\to \\{pt\\}$ induces maps \n\\begin{equation*}\n\\mathcal{H}_{S^1}(\\{pt\\})\\to \\mathcal{H}_{S^1}(\\mathsf{M})\\quad\\text{and} \\quad \\mathcal{H}(\\{pt\\})\\to \\mathcal{H}(\\mathsf{M}),\n\\end{equation*}\nwhich give $\\mathcal{H}_{S^1}(\\mathsf{M})$ the structure of an $\\mathcal{H}_{S^1}(\\{pt\\})$-module, and $\\mathcal{H}(\\mathsf{M})$ the structure of an $\\mathcal{H}(\\{pt\\})$-module.\n\nFinally, if $e$ denotes the identity element in $S^1$, the inclusion homomorphism \n$\\{e\\}\\hookrightarrow S^1$ induces a restriction map, also called the ``forgetful homomorphism'' \n\\begin{equation}\\label{restriction}\nr_{\\mathcal{H}}\\colon \\mathcal{H}_{S^1}(\\mathsf{M})\\to \\mathcal{H}(\\mathsf{M})\\;.\n\\end{equation}\nWhen $\\mathsf{M}$ is a point, $r_{\\mathcal{H}}$ coincides with the evaluation at $x=0$ in cohomology, and \n with the evaluation at $t=1$ in $K$-theory and in the Picard group. \n The homomorphism \\eqref{restriction} will be denoted by $r_H$ in cohomology, by $r_K$ in $K$-theory and by $r_{\\pic}$ for the Picard group. \n\n\\subsection{Indices of $K$-theory classes}\\label{subsec: indeces}\nLet \n\\begin{equation}\\label{indK}\n \\ind\\colon K(\\mathsf{M})\\to K(pt)\\simeq{\\mathbb{Z}}\n\\end{equation}\n and \n \\begin{equation}\\label{indKe}\n \\ind_{S^1}\\colon K_{S^1}(\\mathsf{M})\\to K_{S^1}(pt)\\simeq {\\mathbb{Z}}[t,t^{-1}] \n \\end{equation}\n be the index homomorphisms (or $K$-theoretic push forwards) in ordinary and equivariant $K$-theory.\nBy the Atiyah-Singer formula, the index in \\eqref{indK} \ncan be computed as \n\\begin{equation}\\label{AT formula}\n\\ind(V)= \\ch(V)\\ttot [\\mathsf{M}]\\;,\\quad\\mbox{for every}\\quad V\\in K(\\mathsf{M}),\n\\end{equation}\nwhere $\\ch(\\cdot)$ is the Chern character homomorphism $\\ch\\colon K(\\mathsf{M})\\to H^*(\\mathsf{M};\\mathbb{Q})$, and $\\ttot$ is the total Todd class of $\\mathsf{M}$, i.e.\\;the cohomology \nclass in $H^*(\\mathsf{M};\\mathbb{Q})$ associated to the power series $\\displaystyle\\frac{x}{1-e^{-x}}$. This is a rational combination of Chern classes, and\nthe first terms of $\\ttot$ are given by\n\\begin{equation}\\label{Todd}\n \\ttot=\\sum_{j\\geq 0}T_j=1+\\frac{\\mathsf{c}_1}{2}+\\frac{\\mathsf{c}_1^2+\\mathsf{c}_2}{12}+\\frac{\\mathsf{c}_1\\mathsf{c}_2}{24}+\\frac{-\\mathsf{c}_1^4+4\\mathsf{c}_1^2\\mathsf{c}_2+3\\mathsf{c}_2^2+\\mathsf{c}_1\\mathsf{c}_3-\\mathsf{c}_4}{720}+\\ldots \n\\end{equation}\nwhere $T_j\\in H^{2j}(\\mathsf{M};\\mathbb{Q})$ for every $j$.\n
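\nFor instance, for ${\\mathbb{C}} P^2$ one has $\\mathsf{c}_1=3H$ and $\\mathsf{c}_2=3H^2$, where $H$ denotes the hyperplane class, so that\n\\begin{equation*}\nT_2[{\\mathbb{C}} P^2]=\\frac{\\mathsf{c}_1^2+\\mathsf{c}_2}{12}[{\\mathbb{C}} P^2]=\\frac{9+3}{12}=1\\,.\n\\end{equation*}\n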
We also recall that the Todd genus $\\td(\\mathsf{M})$ of $\\mathsf{M}$ is given by\n$$\n\\td(\\mathsf{M})= \\ttot[\\mathsf{M}]=T_n[\\mathsf{M}]\\;.\n$$\n\n\nBy the Atiyah-Segal formula \\cite{AS}, \nthe equivariant index \\eqref{indKe} of a class $V\\in K_{S^1}(\\mathsf{M})$ can be computed \nin terms of $i^*(V)$ and the $S^1$ isotropy representation on $T\\mathsf{M}\\rvert_{\\mathsf{M}^{S^1}}$. Since $\\mathsf{M}^{S^1}$ is discrete, the Atiyah-Segal formula in this case gives \n\\begin{equation}\\label{AS formula}\n\\ind_{S^1}(V)=\\sum_{i=0}^N \\frac{V(p_i)}{\\prod_{j=1}^n (1-t^{-w_{i,j}})}\\,,\\quad\\mbox{for every}\\quad V\\in K_{S^1}(\\mathsf{M})\\;.\n\\end{equation} \nBy \\eqref{AT formula}, \\eqref{AS formula} and the commutativity of the diagram \n\\begin{equation}\\label{K commutes}\n\\xymatrix{\nK_{S^1}(\\mathsf{M}) \\ar[r]^{r_K} \\ar[d]_{\\ind_{S^1}} & K(\\mathsf{M}) \\ar[d]_{\\ind} \\\\\n {\\mathbb{Z}}[t,t^{-1}] \\ar[r]^{r_K} & {\\mathbb{Z}}\n }\n\\end{equation}\nit follows that for every $V\\in K_{S^1}(\\mathsf{M})$ we have\n\\begin{equation}\\label{formula index 2}\n\\left(\\sum_{i=0}^N \\frac{V(p_i)}{\\prod_{j=1}^n (1-t^{-w_{i,j}})}\\right)_{\\rvert_{t=1}}=\nr_K(\\ind_{S^1}(V))=\\ind(r_K(V))=\n \\ch(r_K(V))\\ttot[\\mathsf{M}]\\;.\n\\end{equation}\n\nWe conclude this subsection by recalling the Atiyah-Bott-Berline-Vergne Localization formula \\cite{At,BV}:\n\\begin{theorem}[ABBV Localization formula]\\label{abbv formula}\nLet $\\mathsf{M}$ be a compact oriented manifold endowed with a smooth $S^1$-action.\nGiven $\\mu\\in H_{S^1}^*(\\mathsf{M};\\mathbb{Q})$, we have\n\\begin{equation*}\n\\mu[\\mathsf{M}]= \\sum_{F}\\frac{i_F^*(\\mu)}{e^{S^1}(N_F)}[F]\\;,\n\\end{equation*}\nwhere the sum is over all the fixed-point set components $F$ of the action, and\n$e^{S^1}(N_F)$ is the equivariant Euler class of the normal bundle to $F$.\n\\end{theorem} \n\n\\subsection{Equivariant Chern classes and equivariant complex line bundles}\\label{ecc}\n\nGiven a complex vector bundle $V\\to \\mathsf{M}$, denote by\n$\\mathsf{c}(V)=\\sum_i\\mathsf{c}_i(V)\\in H^*(\\mathsf{M};{\\mathbb{Z}})$ the total Chern class of $V$, and if $V$ is equivariant, by $\\mathsf{c}^{S^1}(V)=\\sum_i\\mathsf{c}_i^{S^1}(V)\\in H^*_{S^1}(\\mathsf{M};{\\mathbb{Z}})$\nthe total equivariant Chern class, i.e.\\;the total Chern class of the bundle\n$V\\times_{S^1}S^{\\infty}\\to \\mathsf{M}\\times_{S^1}S^{\\infty}$.\nIt is easy to check that when $V=T\\mathsf{M}$, if $\\mathsf{c}^{S^1}(\\mathsf{M})$ denotes the total equivariant Chern class \nof the tangent bundle $T\\mathsf{M}$, then \nfor every $p_i\\in \\mathsf{M}^{S^1}$, $\\mathsf{c}^{S^1}(\\mathsf{M})(p_i)=\\prod_{j=1}^n(1+w_{i,j}x)$, and hence\n$\\mathsf{c}^{S^1}_j(\\mathsf{M})(p_i)=\\sigma_j(w_{i,1},\\ldots,w_{i,n})x^j$, where $\\sigma_j(x_1,\\ldots,x_n)$ denotes the $j$-th elementary symmetric polynomial in $x_1,\\ldots,x_n$.\n\nIf $(\\mathsf{M},\\mathsf{J})$ is acted on by a circle $S^1$ preserving the almost complex structure,\nit is a natural question to ask whether a given complex vector bundle $V$ over $\\mathsf{M}$ admits an equivariant extension, i.e.\\;whether the $S^1$-action can be\nlifted to $V$, making the projection $V\\to \\mathsf{M}$ equivariant. \nThis question has been studied in different settings, and \nfor (complex) line bundles $\\mathbb{L}$ it has been completely answered by Hattori and Yoshida \\cite[Theorem 1.1, Corollary 1.2]{HY} (see also \\cite{HL,Mu} and \\cite[Appendix C]{GKS}); \nhere we summarise their main result in a different language.\n\\begin{theorem}[Hattori-Yoshida]\nThe equivariant first Chern class\n\\begin{equation}\\label{isom ce}\n\\mathsf{c}_{1}^{S^1}\\colon \\pic_{S^1}(\\mathsf{M})\\to H_{S^1}^2(\\mathsf{M};{\\mathbb{Z}})\n\\end{equation} \nis an isomorphism.\n
As a consequence, \na line bundle $\\mathbb{L}$ admits an equivariant extension if and only if its first Chern class $\\mathsf{c}_{1}^{S^1}(\\mathbb{L})$ is in the image\nof the restriction map\n\n\\begin{equation}\\label{restriction H2}\nr_H\\colon H_{S^1}^2(\\mathsf{M};{\\mathbb{Z}})\\to H^2(\\mathsf{M};{\\mathbb{Z}}). \n\\end{equation}\n\\end{theorem}\n\nThe second assertion follows from the commutativity of the following diagram\n$$\n \\xymatrix{ \n\\pic_{S^1}(\\mathsf{M}) \\ar[d]_{r_{\\pic}} \\ar[r]^-{\\mathsf{c}_{1}^{S^1}} & H_{S^1}^2(\\mathsf{M};{\\mathbb{Z}})\\ar[d]^{r_H} \\\\\n\\pic(\\mathsf{M}) \\ar[r]^-{\\mathsf{c}_1} & H^2(\\mathsf{M};{\\mathbb{Z}}) \\\\\n}\n$$\nand the fact that the first Chern class map $\\mathsf{c}_1$ on the bottom row is an isomorphism.\n\nMoreover, for any line bundle $\\mathbb{L}$ whose first Chern class is in the image of \\eqref{restriction H2}, which will henceforth be called\n\\emph{admissible},\nall the possible equivariant\nextensions are parametrised by $H^2({\\mathbb{C}} P^{\\infty};{\\mathbb{Z}})\\simeq {\\mathbb{Z}}$. More precisely, given an admissible $\\mathbb{L}$ and two equivariant\nextensions $\\mathbb{L}^{S^1}_1$ and $\\mathbb{L}^{S^1}_2$, there exists $a\\in {\\mathbb{Z}}$ such that $\\mathsf{c}_{1}^{S^1}(\\mathbb{L}^{S^1}_1)-\\mathsf{c}_{1}^{S^1}(\\mathbb{L}^{S^1}_2)=ax$. 
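\nConcretely, the ${\\mathbb{Z}}$ worth of equivariant extensions arises by twisting with characters: if $\\mathbb{L}^{S^1}$ is an equivariant extension of $\\mathbb{L}$, and the trivial bundle $\\mathsf{M}\\times{\\mathbb{C}}$ is endowed with the $S^1$-action of weight $a$ on the fiber, then the tensor product of $\\mathbb{L}^{S^1}$ with this equivariant line bundle is again an extension of $\\mathbb{L}$, whose equivariant first Chern class (with the sign conventions of \\eqref{elb} below) is $\\mathsf{c}_{1}^{S^1}(\\mathbb{L}^{S^1})+ax$.\n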
In particular we have that \n\\begin{equation}\\label{trivial constant}\n\\mbox{\\emph{if}}\\;\\;\\mathbb{L} \\;\\;\\mbox{\\emph{is trivial, then }}\\mathsf{c}_{1}^{S^1}(\\mathbb{L}^{S^1})(p)=ax\\quad\\mbox{\\emph{for every} }p\\in \\mathsf{M}^{S^1},\\mbox{ \\emph{for some} }a\\in {\\mathbb{Z}}.\n\\end{equation}\n\n\nIn \\cite[Lemma 3.2]{Ha}, Hattori proves that if $\\mathbb{L}$ is admissible, and $\\mathbb{L}'$ is such that $\\mathsf{c}_1(\\mathbb{L})=k\\mathsf{c}_1(\\mathbb{L}')$ for some nonzero integer $k$,\nthen $\\mathbb{L}'$ is also admissible; moreover every line bundle whose first Chern class is in $\\tor(H^2(\\mathsf{M};{\\mathbb{Z}}))$, the torsion subgroup of $H^2(\\mathsf{M};{\\mathbb{Z}})$, is admissible.\nAn example of an admissible line bundle is given by the determinant line bundle $\\Lambda^n(T\\mathsf{M})$.\nIn fact, it is well known that $\\mathsf{c}_1(\\mathsf{M})$ always admits an equivariant extension, given by the equivariant\nfirst Chern class $\\mathsf{c}_{1}^{S^1}(\\mathsf{M})$. Hence $\\Lambda^n(T\\mathsf{M})$ is admissible, since $\\mathsf{c}_1(\\Lambda^n(T\\mathsf{M}))=\\mathsf{c}_1(\\mathsf{M})$.\nMoreover the trivial bundle is clearly admissible.\n\nLet $\\mathcal{L}$ be the lattice given by $H^2(\\mathsf{M};{\\mathbb{Z}})\/\\tor(H^2(\\mathsf{M};{\\mathbb{Z}}))$ and $$\\pi\\colon H^2(\\mathsf{M};{\\mathbb{Z}})\\to \\mathcal{L}$$ the projection. The following lemma is an immediate consequence of \\cite[Lemma 3.2]{Ha}.\n\\begin{lemma}\\label{line admissible}\nLet $(\\M,\\J,S^1)$ be an $S^1$-space and let $\\mathsf{c}_1$ be the first Chern class of the tangent bundle. \nSuppose that $\\mathsf{c}_1$ is not a torsion element, i.e.\\;$\\pi(\\mathsf{c}_1)\\neq 0$, and let $\\eta$ be a primitive element in $\\mathcal{L}$ such that $\\pi(\\mathsf{c}_1)=k_0\\eta$, for some $k_0\\in {\\mathbb{Z}}\\setminus\\{0\\}$.\n
Then every line bundle $\\mathbb{L}$ \nsuch that $\\pi(\\mathsf{c}_1(\\mathbb{L}))=k\\,\\eta$ is admissible, for every $k\\in {\\mathbb{Z}}$.\n\\end{lemma}\n\nObserve that the {\\bf index} $\\k0$ of $(\\mathsf{M},\\mathsf{J})$, as defined in the introduction, is the same as the largest integer satisfying $\\pi(\\mathsf{c}_1)=\\k0 \\pi(\\eta_0)$, for some non-torsion $\\eta_0\\in H^2(\\mathsf{M};{\\mathbb{Z}})$. Note that, when $\\mathsf{c}_1$ is not torsion, $\\pi(\\eta_0)$ is necessarily primitive in $\\mathcal{L}$.\n\nIn the rest of this note, we will make use of the following {\\bf convention}:\nLet $\\tau$ be an element of $H_{S^1}^2(\\mathsf{M}^{S^1};{\\mathbb{Z}})$; thus $\\tau(p)=a_px\\in H_{S^1}^2(\\{p\\};{\\mathbb{Z}})$, where $a_p\\in {\\mathbb{Z}}$ and $x$ is the generator of $H_{S^1}^2(\\{p\\};{\\mathbb{Z}})=H^2({\\mathbb{C}} P^{\\infty};{\\mathbb{Z}})$.\nFor the sake of simplicity, \\emph{we henceforth identify $\\tau\\in H_{S^1}^2(\\mathsf{M}^{S^1};{\\mathbb{Z}})$ with the map from $\\mathsf{M}^{S^1}$ to ${\\mathbb{Z}}$ which assigns to $p$ the integer $a_p$.}\n\nNote that for every $\\mathbb{L}^{S^1}\\in \\pic_{S^1}(\\mathsf{M})$ and every $p_i\\in \\mathsf{M}^{S^1}$ \n\\begin{equation}\\label{elb}\n\\mathbb{L}^{S^1}(p_i)=t^{a_i},\\quad \\mbox{where}\\;\\; a_i \\;\\; \\mbox{is the integer given by}\\;\\;\\; \\mathsf{c}_{1}^{S^1}(\\mathbb{L}^{S^1})(p_i).\n\\end{equation}%\n\nBy virtue of the isomorphism \\eqref{isom ce}, given a class $\\tau\\in H_{S^1}^2(\\mathsf{M};{\\mathbb{Z}})$ (resp.\\;$\\tau'\\in H^2(\\mathsf{M};{\\mathbb{Z}})$), we will denote by $\\e{\\tau}$ the isomorphism class of equivariant line\nbundles whose first equivariant Chern class is $\\tau$ (resp.\\;the isomorphism class of line\nbundles whose first Chern class is $\\tau'$).\nWe conclude this section with the following proposition.\n\\begin{prop}\\label{symmetries}\nLet $(\\M,\\J,S^1)$ be an $S^1$-space with\n$\\mathsf{M}^{S^1}=\\{p_0,\\ldots,p_N\\}$.\n
Let $\\mathsf{c}_1$ and $\\mathsf{c}_{1}^{S^1}$ be respectively the first Chern class and the equivariant first Chern class of the tangent bundle\nof $\\mathsf{M}$. Then, for every $\\tau\\in H_{S^1}^2(\\mathsf{M};{\\mathbb{Z}})$ we have\n\\begin{equation}\\label{eq index symmetry}\n\\ind_{S^1}(\\e{\\tau})=(-1)^n\\ind_{\\widetilde{S}^1}(\\e{(-\\tau-\\mathsf{c}_1^{\\widetilde{S}^1})})\\,,\n\\end{equation}\nwhere $\\widetilde{S}^1$ is the circle $S^1$ with orientation reversed. \nThus\n\\begin{equation}\\label{index symmetry}\n\\ind(\\e{r_H(\\tau)})=(-1)^n\\ind(\\e{(-r_H(\\tau)-\\mathsf{c}_1)}).\n\\end{equation}\n\\end{prop}\n\\begin{proof}\nBy \\eqref{AS formula} and \\eqref{elb} we have that\n$$\n\\ind_{S^1}(\\e{\\tau})=\\sum_{i=0}^N \\frac{t^{\\tau(p_i)}}{\\prod_{j=1}^n(1-t^{-w_{i,j}})}=\\sum_{i=0}^N\\frac{(-1)^n\\,t^{\\tau(p_i)+w_{i,1}+\\ldots+w_{i,n}}}{\\prod_{j=1}^n(1-t^{w_{i,j}})}=(-1)^n\\ind_{\\widetilde{S}^1}(\\e{(-\\tau-\\mathsf{c}_1^{\\widetilde{S}^1})})\\,,\n$$\nand \\eqref{index symmetry} follows from \\eqref{K commutes}, \\eqref{eq index symmetry} and the fact that $r_H(\\mathsf{c}_{1}^{S^1})=r_H(\\mathsf{c}_1^{\\widetilde{S}^1})=\\mathsf{c}_1$.\n\\end{proof}\n\n\n\n\\section{Computation of equivariant indices}\\label{cei}\n\nIn this section we analyse some properties of the equivariant index\nof an equivariant line bundle $\\mathbb{L}^{S^1}$. In particular we study \nunder which conditions $\\mathbb{L}^{S^1}$ is `\\emph{rigid}', namely when its equivariant index $\\ind_{S^1}(\\mathbb{L}^{S^1})$ is \n$S^1$-invariant, i.e.\\ it belongs to ${\\mathbb{Z}}\\subset {\\mathbb{Z}}[t,t^{-1}]$, and determine what the constant is in terms of the restriction to the fixed points of its equivariant first Chern class: this is the\ncontent of Theorem \\ref{trick}. As a consequence, we derive conditions that ensure the equivariant index of an equivariant line bundle to be zero. 
\nThis is a generalisation of arguments which had\nalready been used in different ways by several authors, see for example Hattori \\cite[Proposition 2.6]{Ha}, Hirzebruch et al.\\;\\cite[Section 5.7]{Hi}, Li \\cite{L} and \nLi-Liu \\cite[Proposition 2.5]{LL}.\n\nThe rest of the section is devoted to deriving applications of Theorem \\ref{trick} which will be used in the forthcoming sections.\n\nFor every point $p_i\\in M^{S^1}$, we order the isotropy weights $w_{i,1},\\ldots,w_{i,n}$ at $p_i$ in such a way that the first $\\lambda_{i}$ are exactly the negative weights at $p_i$.\nWe define $\\cc_1^+$ and $\\cc_1^-$ in $H_{S^1}^2(\\mathsf{M}^{S^1};{\\mathbb{Z}})$ to be\n\\begin{equation}\\label{cpcm}\n\\cc_1^+(p_i)=w_{i,\\lambda_{i}+1}+\\cdots +w_{i,n}\\;\\;\\;\\quad \\mbox{and} \\quad \\;\\;\\;\\cc_1^-(p_i)=-(w_{i,1}+\\cdots+w_{i,\\lambda_i})\\,. \n\\end{equation}\nFrom the definition it follows that $\\cc_1^+(p_i)\\geq 0$ (resp.\\;$\\cc_1^-(p_i)\\geq 0$) and equality holds if and only if $\\lambda_i=n$ (resp.\\;$\\lambda_i=0$).\nMoreover, if $\\mathsf{c}_{1}^{S^1}$ denotes the equivariant first Chern class of $\\mathsf{M}$, we have that $i^*(\\mathsf{c}_{1}^{S^1})=\\cc_1^+-\\cc_1^-$. \n\n\\begin{defin}\nA class $\\tau\\in H_{S^1}^2(\\mathsf{M};{\\mathbb{Z}})$ is said to be \\emph{dominated} by $\\cc_1^+$ (resp.\\;by $\\cc_1^-$) if $\\tau(p)\\leq \\cc_1^+(p)$ for every $p\\in \\mathsf{M}^{S^1}$ \n(resp.\\;if $-\\tau(p)\\leq \\cc_1^-(p)$ for every $p\\in \\mathsf{M}^{S^1}$). \n\\end{defin}\n\\begin{rmk}\\label{ex 0 and c1}\nIt is easy to check that the classes $\\mathbf{0}$ and $\\mathsf{c}_{1}^{S^1}$ are always dominated by both $\\cc_1^+$ and $\\cc_1^-$. \nMoreover, if $\\tau\\in H_{S^1}^2(\\mathsf{M};{\\mathbb{Z}})$ satisfies $\\tau(p)\\leq 0$ (resp.\\;$\\tau(p)\\geq 0$) for every $p\\in \\mathsf{M}^{S^1}$ then $\\tau$ is dominated by $\\cc_1^+$ (resp.\\;$\\cc_1^-$). 
\n\\end{rmk}\n\\begin{theorem}\\label{trick}\nLet $(\\M,\\J,S^1)$ be an $S^1$-space with\n$\\mathsf{M}^{S^1}=\\{p_0,\\ldots,p_N\\}$. Let $\\tau$ be an element of $H_{S^1}^2(\\mathsf{M};{\\mathbb{Z}})$ and $\\cc_1^+$, $\\cc_1^-$ defined as above. \nFor every $p\\in \\mathsf{M}^{S^1}$, define $\\delta^+(p)$ (resp.\\;$\\delta^-(p)$) to be $1$ if $\\tau(p)=\\cc_1^+(p)$ (resp.\\;$-\\tau(p)=\\cc_1^-(p)$) and zero otherwise.\nThen\n\\begin{itemize}\n \\item[(i)] If $\\tau\\in H_{S^1}^2(\\mathsf{M};{\\mathbb{Z}})$ is dominated by $\\cc_1^+$ then \n $$\n \\ind_{S^1}(\\e{(-\\tau)})=\\sum_{j\\geq 0}b_jt^j\\in {\\mathbb{Z}}[t],\\quad \\mbox{and}\\quad b_0= \\sum_{i=0}^N\\delta^+(p_i)(-1)^{n-\\lambda_i}\n $$\n \\item[(ii)] If $\\tau\\in H_{S^1}^2(\\mathsf{M};{\\mathbb{Z}})$ is dominated by $\\cc_1^-$ then \n $$\n \\ind_{S^1}(\\e{(-\\tau)})=\\sum_{j\\leq 0}b_jt^j\\in {\\mathbb{Z}}[t^{-1}],\\quad \\mbox{and}\\quad b_0= \\sum_{i=0}^N\\delta^-(p_i)(-1)^{\\lambda_i}\n $$\n \\item[(iii)] If $\\tau\\in H_{S^1}^2(\\mathsf{M};{\\mathbb{Z}})$ is dominated by $\\cc_1^+$ and $\\cc_1^-$ then \n \\begin{equation}\\label{index integer}\n \\ind_{S^1}(\\e{(-\\tau)})=b_0\\in {\\mathbb{Z}}\n \\end{equation}\nwhere \n \\begin{equation}\\label{index precise}\nb_0=\\sum_{i=0}^N\\delta^+(p_i)(-1)^{n-\\lambda_i}= \\sum_{i=0}^N\\delta^-(p_i)(-1)^{\\lambda_i}.\n \\end{equation}\n\\end{itemize}\n\n\\end{theorem}\n\\begin{proof}\nBy \\eqref{AS formula} and \\eqref{elb}, we have that \n \\begin{equation}\\label{ei1}\n \\ind_{S^1}(\\e{(-\\tau)}) =\\sum_{i=0}^N \\frac{ t^{-\\tau(p_i)} }{\\prod_{j=1}^n(1-t^{-w_{i,j}}) }\n\\end{equation} \nFor every $i=0,\\ldots,N$, let $f_i(t)$ be the rational function $\\displaystyle \\frac{ t^{-\\tau(p_i)} }{\\prod_{j=1}^n(1-t^{-w_{i,j}}) }$, and observe that\n$\\sum_{i=0}^Nf_i(t)\\in {\\mathbb{Z}}[t,t^{-1}]$.\nThus, in order to prove (i), it is sufficient to prove that $\\lim_{t\\to 0}\\sum_{i=0}^Nf_i(t)$ is finite, and its value will be equal to $b_0$. 
Observe that by definition\nof $\\cc_1^+$, $f_i(t)$ can be rewritten as $\\displaystyle \\frac{(-1)^{n-\\lambda_i}\\;t^{-\\tau(p_i)+\\cc_1^+(p_i)}}{\\prod_{j=1}^n(1-t^{|w_{i,j}|})}$.\nSince by assumption $\\tau$ is dominated by $\\cc_1^+$, $\\lim_{t \\to 0}f_i(t)$ is finite for all $i=0,\\ldots,N$, and by definition of $\\delta^+$ it follows that \nits value equals $\\delta^+(p_i)(-1)^{n-\\lambda_i}$, thus proving (i).\n\nThe proof of (ii) follows by a similar argument, by taking $\\lim_{t\\to \\infty}\\sum_{i=0}^Nf_i(t)$, and by observing that $f_i(t)$ can be written as\n$ \\displaystyle \\frac{(-1)^{\\lambda_i}\\;t^{-\\tau(p_i)-\\cc_1^-(p_i)}}{\\prod_{j=1}^n(1-t^{-|w_{i,j}|})}$.\n\nFinally, (iii) follows from (i) and (ii).\n\\end{proof}\n\n\\begin{exm}\\label{exm:CP3}\nConsider $({\\mathbb{C}} P^3,\\mathsf{J})$ with the standard (almost) complex structure, and $S^1$-action given by \n$$\n\\lambda \\cdot [z_0:z_1:z_2:z_3]=[z_0:\\lambda^a z_1:\\lambda^{a+b}z_2:\\lambda^{a+b+c}z_3],\n$$\nwhere $a,b,c$ are pairwise coprime positive integers. This action is ``standard'', in the sense that it is the\nrestriction to a subtorus of dimension $1$ of the standard toric action of the $3$-dimensional torus $\\mathbb{T}^3$ on ${\\mathbb{C}} P^3$.\nThe fixed point set is given by four points $p_0,p_1,p_2,p_3$, corresponding respectively to $[1:0:0:0],[0:1:0:0],[0:0:1:0],[0:0:0:1]$. \nLet $\\tau_0$ be the generator of $H^2({\\mathbb{C}} P^3,{\\mathbb{Z}})$ such that $\\mathsf{c}_1({\\mathbb{C}} P^3)=4\\, \\tau_0$. It can be checked that $\\tau_0$ admits an equivariant\nextension\\footnote{Indeed, in this case, every class $\\gamma\\in H^j({\\mathbb{C}} P^3,{\\mathbb{Z}})$ admits an equivariant extension, for every $j$.\n
This is due to the fact\nthat ${\\mathbb{C}} P^3$ with the above $S^1$-action is \\emph{equivariantly formal} (see for example \\cite{Ki}).} $\\tau\\in H^2_{S^1}({\\mathbb{C}} P^3,{\\mathbb{Z}})$, i.e.\\;$r_H(\\tau)=\\tau_0$; we pick\n$\\tau$ so that $\\tau(p_0)=0$. \nThe (multi)sets of isotropy weights at each fixed point, as well as $i^*(\\tau)$, $\\cc_1^+$ and $\\cc_1^-$, are given in the following table:\n\\begin{center}\n\\begin{tabular}{|l|| l|l|l|l|}\n\\hline\n & $\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;\\;W_i$ & $\\;\\;\\;\\;\\;i^*(\\tau)$ & $\\;\\;\\;\\;\\;\\;\\;\\cc_1^+$ & $\\;\\;\\;\\;\\;\\;\\;\\cc_1^-$ \\\\ \\hline \n$p_0:$ & $\\{a,a+b,a+b+c\\}$ & $0$ & $3a+2b+c$ & $0$ \\\\ \\hline\n$p_1:$ & $\\{-a,b,b+c\\}$ & $-a$ & $2b+c$ & $a$ \\\\ \\hline \n$p_2:$ & $\\{-b,-a-b,c\\}$ & $-a-b$ & $c$ & $a+2b$ \\\\ \\hline \n$p_3$ & $\\{-c,-b-c,-a-b-c\\}$ & $-a-b-c$ & $0$ & $a+2b+3c$ \\\\ \\hline \n\\end{tabular}\n\\end{center} \nObserve that $\\tau$ is dominated by both $\\cc_1^+$ and $\\cc_1^-$, and by definition $\\delta^+\\equiv 0$. 
Thus Theorem \\ref{trick} (iii) implies that $\\ind_{S^1}(\\e{(-\\tau)})=0$, as it\ncan also be checked directly from here \n\\begin{align*}\n\\ind_{S^1}(\\e{(-\\tau)})= &\\frac{1}{(1-t^{-a})(1-t^{-a-b})(1-t^{-a-b-c})}+\\frac{t^a}{(1-t^{a})(1-t^{-b})(1-t^{-b-c})}\\\\\n &+\\frac{t^{a+b}}{(1-t^{b})(1-t^{a+b})(1-t^{-c})}+\n\\frac{t^{a+b+c}}{(1-t^{c})(1-t^{b+c})(1-t^{a+b+c})}=0\\\\\n\\end{align*}\n\n \n\n\\end{exm}\n\n\n\\begin{rmk}\\label{index positive or negative}\nFollowing the discussion in Remark \\ref{ex 0 and c1}, by Theorem \\ref{trick} we have that if $\\tau\\in H_{S^1}^2(\\mathsf{M};{\\mathbb{Z}})$ satisfies $\\tau(p)\\geq 0$ (resp.\\;$\\tau(p)\\leq 0$)\nfor all $p\\in \\mathsf{M}^{S^1}$, then $\\ind_{S^1}(\\e{\\tau})\\in {\\mathbb{Z}}[t]$ (resp.\\;$\\ind_{S^1}(\\e{\\tau})\\in {\\mathbb{Z}}[t^{-1}]$).\n\\end{rmk}\n\nAs an immediate consequence of Theorem \\ref{trick}, we have the following\n\\begin{corollary}\\label{index 0 and -c1}\nLet $(\\M,\\J,S^1)$ be an $S^1$-space with\n$\\mathsf{M}^{S^1}=\\{p_0,\\ldots,p_N\\}$. Let $N_i$ be the number of fixed points with exactly $i$ negative weights.\n\nIf $\\mathbf{1}\\in \\pic_{S^1}(\\mathsf{M})$ denotes the trivial line bundle over $\\mathsf{M}$, where $\\mathsf{c}_{1}^{S^1}(\\mathbf{1})=\\mathbf{0}$, then \n\\begin{equation}\\label{index 0}\n\\ind_{S^1}(\\mathbf{1})=N_0=N_n\\;.\n\\end{equation}\nIf $\\widetilde{\\mathbb{L}}^{S^1}\\in \\pic_{S^1}(\\mathsf{M})$ denotes the determinant line bundle $\\Lambda^n(T^*\\mathsf{M})$, where $\\mathsf{c}_{1}^{S^1}(\\widetilde{\\mathbb{L}}^{S^1})=\\mathsf{c}_{1}^{S^1}(\\Lambda^n(T^*\\mathsf{M}))=-\\mathsf{c}_{1}^{S^1}$, then\n\\begin{equation}\\label{index -c1}\n\\ind_{S^1}(\\widetilde{\\mathbb{L}}^{S^1})=(-1)^nN_0=(-1)^nN_n\\;.\n\\end{equation}\n\\end{corollary}\n\n\\begin{proof}\nAs we have already remarked, the classes $\\mathbf{0}$ and $\\mathsf{c}_{1}^{S^1}$ are dominated by $\\cc_1^+$ and $\\cc_1^-$. 
Thus \\eqref{index 0}\nand \\eqref{index -c1} follow from Theorem \\ref{trick} (iii) and the definition of $N_0$ and $N_n$. \n\\end{proof}\nNote that equation \\eqref{index 0} is already known, see for example \\cite[Corollary 2.7]{Ha} (also see \\cite[Theorem 2.3]{L}). \n\nObserve that \\eqref{index -c1} can also be obtained by noticing that since $\\mathsf{c}_{1}^{S^1}$ is dominated by $\\cc_1^+$ and $\\cc_1^-$, \n\\eqref{index integer} implies that \n$\\ind_{S^1}(\\widetilde{\\mathbb{L}}^{S^1})$ is an integer, thus $\\ind_{S^1}(\\widetilde{\\mathbb{L}}^{S^1})=\\ind(r_K(\\widetilde{\\mathbb{L}}^{S^1}))$, \nand so \\eqref{index -c1} follows from \\eqref{index symmetry} in Proposition \\ref{symmetries} and \\eqref{index 0}.\n\nWe also remark that $\\ind_{S^1}(\\mathbf{1})$ is the Todd genus of $\\mathsf{M}$; in fact\nfrom \\eqref{formula index 2} we have that \n\\begin{equation}\\label{todd genus}\n\\td(\\mathsf{M})= T_n[\\mathsf{M}]= \\ch(r_K(\\mathbf{1})) \\ttot[\\mathsf{M}]=\\ind(r_K(\\mathbf{1}))=\\ind_{S^1}(\\mathbf{1})\\,\n\\end{equation}\nwhere the second equality follows from observing that $\\ch(r_K(\\mathbf{1}))=1$, and the\nlast equality follows from \\eqref{K commutes} and the fact that $\\ind_{S^1}(\\mathbf{1})$ is an integer, thus $\\ind(r_K(\\mathbf{1}))=r_K(\\ind_{S^1}(\\mathbf{1}))=\\ind_{S^1}(\\mathbf{1})$.\nBy combining \\eqref{index 0} and \\eqref{todd genus} we recover the following well-known fact (see \\cite[Remark 2.10]{Ha} and \\cite{Fe}).\n\\begin{corollary}\\label{todd genus comp}\nLet $(\\M,\\J,S^1)$ be an $S^1$-space,\n$N_i$ the number of fixed points with exactly $i$ negative weights, and $\\td(\\mathsf{M})$ the Todd genus of $\\mathsf{M}$. Then\n$$\n\\td(\\mathsf{M})=N_0=N_n.\n$$\n\\end{corollary}\nBefore giving the main application of Theorem \\ref{trick}, we prove the following easy but useful lemma. 
\n\\begin{lemma}\\label{c1 N0}\nLet $(\\M,\\J,S^1)$ be an $S^1$-space, $\\mathsf{c}_1$ the first Chern class of the tangent bundle of $\\mathsf{M}$, $N_i$ the number of fixed points with exactly $i$ negative weights, and $\\td(\\mathsf{M})$ the Todd genus of $\\mathsf{M}$.\n\\begin{itemize}\n \\item[(a1)] If $\\eta\\in \\tor(H^2(\\mathsf{M},{\\mathbb{Z}}))$ then \n\\begin{equation}\\label{index torsion}\n\\ind(\\e{\\eta})=\\td(\\mathsf{M})=N_0\n\\end{equation}\nand\n \\begin{equation}\\label{torsion index}\n \\ind_{S^1}(\\e{\\eta^{S^1}})=t^{a}\\td(\\mathsf{M})=t^{a}N_0\\,,\n \\end{equation}\n where $\\eta^{S^1}\\in H_{S^1}^2(\\mathsf{M},{\\mathbb{Z}})$ denotes an equivariant extension of $\\eta$, and $a=\\eta^{S^1}(p)$ for every $p\\in \\mathsf{M}^{S^1}$.\\\\\n\\item[(a2)] If $\\mathsf{c}_1\\in \\tor(H^2(\\mathsf{M},{\\mathbb{Z}}))$ then $N_0=N_n=0$ and $\\td(\\mathsf{M})=0$.\n\\end{itemize}\n\n\\end{lemma}\n\\begin{proof}\n(a1) First of all, observe that if $\\eta\\in \\tor(H^2(\\mathsf{M},{\\mathbb{Z}}))$ then, by the discussion in Section \\ref{ecc}, it admits an equivariant extension $\\eta^{S^1}\\in H^2_{S^1}(\\mathsf{M},{\\mathbb{Z}})$. \nBy the commutativity of \\eqref{K commutes}, in order to prove \\eqref{index torsion} it is sufficient to prove \\eqref{torsion index}.\nIf $\\eta$ is torsion then there exists $k\\in {\\mathbb{Z}}\\setminus\\{0\\}$ such that $k\\eta=0$. Thus if we consider an equivariant extension \n$\\eta^{S^1}$, by \\eqref{trivial constant} we have that $\\eta^{S^1}(p)=a$ for some $a\\in {\\mathbb{Z}}$, for every $p\\in \\mathsf{M}^{S^1}$. Hence \n$$\n\\ind_{S^1}(\\e{\\eta^{S^1}})=t^{a}\\ind_{S^1}(\\mathbf{1})=t^{a}\\td(\\mathsf{M})=t^aN_0\n$$\nwhere the first equality follows from \\eqref{AS formula},\nthe second from \\eqref{todd genus}, and the last from Corollary \\ref{todd genus comp}.\n\n(a2)\nBy a similar argument, we have that the integer \n $\\mathsf{c}_{1}^{S^1}(p)$ does not depend on $p\\in M^{S^1}$. 
However $\\mathsf{c}_{1}^{S^1}(p_i)=\\sum_{j=1}^nw_{i,j}$, which is positive at any fixed point with no negative weights and negative at any fixed point with $n$ negative weights; since a constant cannot be both positive and negative, and $N_0=N_n$ by \\eqref{NiN}, we must have\n$N_0=N_n=0$ and, by Corollary \\ref{todd genus comp}, $\\td(\\mathsf{M})=0$.\n\\end{proof}\n\n\nThe next proposition also follows from Theorem \\ref{trick}, but it is a key result for the theorems in the next sections (see also \\cite[Assertion 4.10]{Ha} and \\cite[Proposition 2.5]{LL}).\n\\begin{prop}\\label{eq index zero}\nLet $(\\M,\\J,S^1)$ be an $S^1$-space. Let $\\mathsf{c}_{1}^{S^1}$ be the equivariant first Chern class of the tangent bundle of $\\mathsf{M}$ and $k$ a positive integer such that \n$\\mathsf{c}_{1}^{S^1}(p)=k\\,\\eta^{S^1}(p)+c$ for all $p\\in \\mathsf{M}^{S^1}$, for some $\\eta^{S^1}\\in H_{S^1}^2(\\mathsf{M};{\\mathbb{Z}})$ and $c\\in {\\mathbb{Z}}$.\nThen\n\\begin{equation}\\label{index 0 1..k0}\n \\ind_{S^1}(\\e{(-h\\eta^{S^1})})=0\\quad\\mbox{for every}\\quad h=1,\\ldots,k-1\\;.\n\\end{equation}\n\n\\end{prop}\n\\begin{rmk}\nObserve that if $\\mathsf{c}_1$ is torsion then $r_H(\\eta^{S^1})$ is also torsion, and by Lemma \\ref{c1 N0} it follows that \n \\begin{equation}\\label{index 0 always}\n \\ind_{S^1}(\\e{(-h\\eta^{S^1})})=0\\quad\\mbox{for every}\\quad h\\in {\\mathbb{Z}}\\;.\n \\end{equation}\n \\end{rmk}\n\\begin{proof}[Proof of Proposition \\ref{eq index zero}]\nFirst of all, observe that it is not restrictive to assume that $c=0$. In fact,\nlet $S^1\\times \\mathsf{M}\\to \\mathsf{M}$, $(\\lambda,q)\\mapsto \\lambda\\cdot q$ be the given $S^1$-action on $\\mathsf{M}$, and consider \na new action given by $(\\lambda,q)\\mapsto \\lambda^k\\cdot q$; we denote by $\\widetilde{S}^1$ the new circle acting on $\\mathsf{M}$.\nNote that the set of fixed points of this action coincides with the old one,\nand the new isotropy weights are the old ones multiplied by $k$.\n
Thus \n $\\mathsf{c}_1^{\\widetilde{S}^1}(p)$ is divisible by $k$, for every $p\\in \\mathsf{M}^{\\widetilde{S}^1}=\\mathsf{M}^{S^1}$.\nSo there exists $\\widetilde{\\eta}\\in H_{\\widetilde{S}^1}^2(\\mathsf{M};{\\mathbb{Z}})$ such that\n$\\mathsf{c}_1^{\\widetilde{S}^1}=k\\widetilde{\\eta}$. Moreover, if $\\ind_{S^1}(\\e{\\eta^{S^1}})=P(t,t^{-1})$ for some $P\\in {\\mathbb{Z}}[x,y]$, then \n$\\ind_{\\widetilde{S}^1}(\\e{\\widetilde{\\eta}})=t^bP(t^k,t^{-k})$, for some $b\\in {\\mathbb{Z}}$. \nThus $\\ind_{S^1}(\\e{\\eta^{S^1}})=0$ if and only if $\\ind_{\\widetilde{S}^1}(\\e{\\widetilde{\\eta}})=0$.\nHence we can assume that $\\mathsf{c}_{1}^{S^1}(p)=k\\,\\eta^{S^1}(p)$ for all $p\\in \\mathsf{M}^{S^1}$.\n\nNotice that for all $p\\in \\mathsf{M}^{S^1}$ such that $\\eta^{S^1}(p)>0$ and all $h=1,\\ldots,k-1$, we have\n\\begin{equation}\\label{keta}\nh\\,\\eta^{S^1}(p)<k\\,\\eta^{S^1}(p)=\\mathsf{c}_{1}^{S^1}(p)=\\cc_1^+(p)-\\cc_1^-(p)\\leq \\cc_1^+(p)\\,.\n\\end{equation}\nIf instead $\\eta^{S^1}(p)<0$, then $h\\,\\eta^{S^1}(p)<0$, and since $\\cc_1^+(p)$ is always nonnegative, we have $h\\,\\eta^{S^1}(p)<\\cc_1^+(p)$ in this case as well. Hence\n$\\delta^+(p)=0$ for all $p\\in \\mathsf{M}^{S^1}$ such that $\\eta^{S^1}(p)\\neq 0$. \nFinally observe that if $\\eta^{S^1}(p)=\\cc_1^+(p)=0$, then $\\mathsf{c}_{1}^{S^1}(p)=0$ and $\\cc_1^-(p)=0$; however this is impossible, unless\n$\\dim(\\mathsf{M})=0$. So we can conclude that $\\delta^+(p)=0$ for all $p\\in \\mathsf{M}^{S^1}$.\n\nA similar argument shows that $h\\,\\eta^{S^1}$ is dominated by $\\cc_1^-$ for all $h=1,\\ldots,k-1$ (and $\\delta^-(p)=0$ for all $p\\in \\mathsf{M}^{S^1}$).\nSo the conclusion follows from Theorem \\ref{trick} (iii). \n\\end{proof}\n\n\\subsection{Symplectic manifolds} Suppose that $(\\mathsf{M},\\omega)$ is a compact, connected symplectic manifold endowed with a symplectic circle action with isolated fixed points. We recall that this triple is denoted by $(\\mathsf{M},\\omega,S^1)$. 
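\nA standard example to keep in mind, not needed in what follows, is the rotation action on the sphere: for $\\mathsf{M}=S^2\\simeq {\\mathbb{C}} P^1$ with the Fubini--Study form, the circle action $\\lambda\\cdot [z_0:z_1]=[z_0:\\lambda z_1]$ is Hamiltonian with exactly two isolated fixed points, one with weight $+1$ and one with weight $-1$, so that $N_0=N_1=1$.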
\nThe following lemma is a key fact for translating our results from the almost complex category to the symplectic category.\n\\begin{lemma}[\\cite{MD1}]\\label{N0 1}\nGiven $(\\mathsf{M},\\omega,S^1)$, \n$N_0$ is either $0$ or $1$, and it is $1$ if and only if the action is Hamiltonian.\n\\end{lemma}\n If the action is Hamiltonian, then $N_0$ indeed coincides with the number of minima of the moment map $\\psi$, which\n is $1$ because $\\psi$ is a Morse function with only even indices, and $\\mathsf{M}$ is assumed to be connected. \nMore generally, the equivariant perfection of $\\psi$ (see \\cite{Ki}) implies that\n\\begin{equation}\\label{bi=Ni}\nb_{2j}(\\mathsf{M})= N_j \\quad \\mbox{for every}\\quad j=0,\\ldots,n\\,, \n\\end{equation}\nwhere $b_{2j}(\\mathsf{M})$ denotes the $2j$-th Betti number of $\\mathsf{M}$. The following fact is a consequence of the results of this section:\n\\begin{lemma}\\label{Lemma:c1 not torsion}\nGiven $(\\mathsf{M},\\omega,S^1)$, if the action is Hamiltonian then $\\mathsf{c}_1$ is not a torsion\nelement in $H^2(\\mathsf{M};{\\mathbb{Z}})$. \n\\end{lemma}\n\\begin{proof}\nIt is sufficient to combine Lemma \\ref{N0 1} with Lemma \\ref{c1 N0} (a2). \n\\end{proof}\n\\begin{rmk}\\label{mcn}\nLet $(\\mathsf{M},\\omega)$ be a compact symplectic manifold with first Chern class $\\mathsf{c}_1$, and suppose that $\\mathsf{c}_1$ is not torsion.\nFollowing Definition 6.4.2 in \\cite{MDS}, the \\emph{minimal Chern number} of $(\\mathsf{M},\\omega)$ is defined to be the integer $N$ such that $\\langle \\mathsf{c}_1,\\pi_2(\\mathsf{M})\\rangle = N {\\mathbb{Z}}$.\nIf $\\mathsf{M}$ is simply connected then, by the Hurewicz theorem, we have $\\pi_2(\\mathsf{M})=H_2(\\mathsf{M},{\\mathbb{Z}})$ which, modulo torsion, is isomorphic to $H^2(\\mathsf{M},{\\mathbb{Z}})$, thus implying that the minimal Chern number agrees with the index of $(\\mathsf{M},\\omega)$. 
A result of Li \\cite{Li2} implies that \n if the $S^1$-action on $(\\mathsf{M},\\omega)$ is Hamiltonian with isolated fixed points then $\\mathsf{M}$ is simply connected. So it follows that\n if $(\\mathsf{M},\\omega)$ is endowed with a Hamiltonian $S^1$-action with isolated fixed points, the minimal Chern number always agrees with the index $\\k0$, which is not zero by Lemma \\ref{Lemma:c1 not torsion}. \\end{rmk}\n\n\\section{The Hilbert polynomial of $(\\mathsf{M},\\mathsf{J})$ and the equations in the Chern numbers}\\label{equations chern}\nWe recall from Section \\ref{ecc} that $\\mathcal{L}$ is the lattice given by $H^2(\\mathsf{M};{\\mathbb{Z}})\/\\tor(H^2(\\mathsf{M};{\\mathbb{Z}}))$ and $\\pi$ the projection $\\pi\\colon H^2(\\mathsf{M};{\\mathbb{Z}})\\to \\mathcal{L}$.\nIf $\\mathsf{c}_1$ is not torsion we have $\\pi(\\mathsf{c}_1)\\neq 0$, so there exists a non-torsion element $\\eta_0\\in H^2(\\mathsf{M};{\\mathbb{Z}})$ such that $\\pi(\\mathsf{c}_1)=\\k0\\,\\pi(\\eta_0)$.\nThe index $\\k0$, and when $\\k0>0$ the associated $\\eta_0\\in H^2(\\mathsf{M};{\\mathbb{Z}})$ (uniquely defined up to torsion), will play a crucial role in the rest of the section.\n\nBefore proceeding, we prove the following Lemma:\n\\begin{lemma}\\label{index torsion independent}\nLet $\\eta\\in H^2(\\mathsf{M};{\\mathbb{Z}})$ and $\\tau\\in \\tor(H^2(\\mathsf{M};{\\mathbb{Z}}))$. 
Then \n$$\n\\ind(\\e{(\\eta+\\tau)})=\\ind(\\e{\\eta})\\,.\n$$\n\\end{lemma}\n\\begin{proof}\nBy \\eqref{AT formula} we have that \n\n\\begin{align*}\n\\ind(\\e{(\\eta+\\tau)})& = \\ch(\\e{(\\eta+\\tau)})\\ttot[\\mathsf{M}]= \\left(1+\\eta+\\frac{\\eta^2}{2}+\\cdots\\right)\\left(1+\\tau+\\frac{\\tau^2}{2}+\\cdots\\right)\\ttot[\\mathsf{M}]=\\\\\n & = \\left(1+\\eta+\\frac{\\eta^2}{2}+\\cdots\\right)\\ttot[\\mathsf{M}]=\\ind(\\e{\\eta})\\,,\n\\end{align*}\nwhere the second-last equality follows from the fact that if $\\tau$ is torsion then $ \\tau^k\\alpha[\\mathsf{M}]=0$ for all $k>0$ and $\\alpha\\in H^{2n-2k}(\\mathsf{M};{\\mathbb{Z}})$.\n\\end{proof}\n\nIn the rest of the section \\emph{we assume that $\\mathsf{c}_1$ is not torsion}. Let $\\eta_0\\in H^2(\\mathsf{M};{\\mathbb{Z}})$ be such that $\\pi(\\mathsf{c}_1)=\\k0 \\pi(\\eta_0)$. Even if $\\eta_0$\nis not uniquely defined, by Lemma \\ref{index torsion independent} the topological index $\\ind(\\e{\\eta})$ is independent of $\\eta\\in \\pi^{-1}(\\pi(\\eta_0))$. \nHence, given $(\\mathsf{M},\\mathsf{J})$ with $\\mathsf{c}_1$ not torsion, for every $k\\in {\\mathbb{Z}}$ the following integer\n\\begin{equation}\\label{polynomial}\n\\Hi(k)=\\ind(\\e{\\,k\\, \\eta_0})\n\\end{equation}\ndoes not depend on the choice of $\\eta_0$. Moreover,\nby \\eqref{AT formula} we obtain that\n\\begin{equation}\\label{HAT}\n\\Hi(k)= \\Big( \\sum_{h\\geq 0} \\frac{(k\\,\\eta_0)^h}{h!}\\Big)\\ttot[\\mathsf{M}]= \\sum_{h=0}^n k^h\\left( \\frac{\\mathsf{c}_1^h\\,T_{n-h}}{\\k0^h\\,h!}\\right)[\\mathsf{M}]\n\\end{equation}\nthus implying that, if $(\\mathsf{M},\\mathsf{J})$ has dimension $2n$, $\\Hi(k)$ is a polynomial in $k$ of degree at most $n$. 
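\nFor instance, let $\\mathsf{M}={\\mathbb{C}} P^n$ with the standard complex structure, a classical example which we recall for illustration: here $\\mathsf{c}_1=(n+1)x$, where $x\\in H^2({\\mathbb{C}} P^n;{\\mathbb{Z}})$ is the hyperplane class, so that $\\k0=n+1$ and one can take $\\eta_0=x$. By the Hirzebruch--Riemann--Roch theorem,\n$$\n\\Hi(k)=\\ind(\\e{\\,k\\,x})=\\chi({\\mathbb{C}} P^n,\\mathcal{O}(k))=\\binom{n+k}{n}\\,,\n$$\na polynomial of degree exactly $n$ in $k$, whose roots are precisely $-1,-2,\\ldots,-n$.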
\nThe polynomial $\\Hi(z)$ defined as\n\\begin{equation}\\label{Hilbert pol}\n\\Hi(z)= \\sum_{h=0}^n a_h z^h=\\sum_{h=0}^n \\left( \\frac{\\mathsf{c}_1^h\\,T_{n-h}}{\\k0^h\\,h!}[\\mathsf{M}]\\right)z^h, \\quad z\\in {\\mathbb{C}} \n\\end{equation}\nwill be referred to as\nthe \\emph{Hilbert polynomial of $(\\mathsf{M},\\mathsf{J})$}. \nThus \n\\begin{align}\n& a_n= \\frac{1}{\\k0^n\\,n!} \\mathsf{c}_1^n[\\mathsf{M}], \\;\\;\\;\\;\\;\\;a_{n-1}=\\frac{1}{2\\k0^{n-1}(n-1)!}\\mathsf{c}_1^n[\\mathsf{M}],\\nonumber \\\\\n\\label{ah} & a_{n-2}= \\frac{1}{12\\k0^{n-2}(n-2)!}(\\mathsf{c}_1^n+\\mathsf{c}_1^{n-2}\\mathsf{c}_2)[\\mathsf{M}]\\,, \\;\\;\\;\\;\n\\ldots\\\\\n& a_0= T_n[\\mathsf{M}] = \\td(\\mathsf{M}) \\nonumber\n\\end{align}\nThe first properties of $\\Hi(z)$ are given in the following \n\\begin{prop}\\label{properties P}\nLet $(\\M,\\J,S^1)$ be an $S^1$-space with $N_0$ fixed points with zero negative weights.\nLet $\\mathsf{c}_1$ be the first Chern class of the tangent bundle of $\\mathsf{M}$ and assume that it is not torsion. 
Let $\\k0\\geq 1$ be the index of $(\\mathsf{M},\\mathsf{J})$,\n$\\Hi(z)$ the Hilbert polynomial, and $\\deg(\\Hi)$ its degree.\nThen \n\\begin{enumerate}\n \\item\\label{1a} $\\Hi(0)=\\td(\\mathsf{M})=N_0$;\\\\\n \\item\\label{3a} $\\Hi(z)=(-1)^n \\Hi(-\\k0-z)\\;\\;\\;$ for every $\\;\\;z\\in {\\mathbb{C}}$;\\\\\n \\item\\label{4a} $\\deg(\\Hi)\\equiv n \\mod 2$.\n\\end{enumerate}\n\n\\end{prop}\n\n\\begin{rmk}\\label{H Ham}\nBy Lemma \\ref{N0 1} and Proposition \\ref{properties P} \\eqref{1a}, note that if $(\\mathsf{M},\\omega)$ is a compact symplectic manifold supporting\na Hamiltonian $S^1$-action with isolated fixed points, then the Hilbert polynomial $\\Hi(z)$ can never be identically zero.\n\\end{rmk}\n\n\\begin{rmk}\\label{properties Ch}\nProposition \\ref{properties P} \\eqref{4a} implies that if there exists $k$ such that $a_{n-2h}=0$ for every $h=0,\\ldots,k$, then\n$a_{n-2h-1}=0$ for every $h=0,\\ldots,k$.\n\\end{rmk}\n\n\\begin{proof}\nProperty \\eqref{1a} follows from the definition of $\\Hi(z)$ and Corollary \\ref{todd genus comp}. \nBy Lemma \\ref{line admissible}, every line bundle $\\mathbb{L}$ such that $\\pi(\\mathsf{c}_1(\\mathbb{L}))=k\\,\\pi(\\eta_0)$ is admissible. 
So from\nProposition \\ref{symmetries} we have that for all $k\\in {\\mathbb{Z}}$\n$$\n\\Hi(k)=\\ind(\\e{\\,k\\, \\eta_0})=(-1)^n \\ind(\\e{((-k-\\k0) \\eta_0)})=(-1)^n\\Hi(-\\k0-k)\\,,\n$$\nand \\eqref{3a} follows from observing that the polynomial given by $Q(z)=\\Hi(z)-(-1)^n \\Hi(-\\k0-z)$ vanishes at every $k\\in {\\mathbb{Z}}$, hence it must be identically zero.\n\nIn order to prove \\eqref{4a} it is sufficient to notice that, if $\\Hi(z)=\\sum_{j=0}^m a_jz^j$, with $m=\\deg(\\Hi)$ and $a_m\\neq 0$, from \\eqref{3a} it follows that\n$a_m=(-1)^{m+n}a_m$.\n\\end{proof}\nBefore proceeding with the main results of the section, we introduce some terminology that will be used in the discussion of the position of the roots of\n$\\Hi(z)$.\n\\begin{defin}\\label{def: RVGo}\nFix a positive integer $k$.\n\\begin{itemize}\n\\item[1)] We denote by $\\mathcal{T}_k$ the family of polynomials in $\\mathbb{R}[z]$ that can be written as $C(z)\\prod_{j=1}^{k-1}(z+j)$, where\n $C(z)\\in \\mathbb{R}[z]$ has all its roots on the line $l_{k}=\\{x+\\mathrm{i}y\\in {\\mathbb{C}}\\mid x=-\\frac{k}{2}\\}$. \n \\item[2)] We define $\\mathcal{S}_{k}$ to be the subset of the complex plane given by $$\\mathcal{S}_{k}= \\{x+\\mathrm{i}y\\in {\\mathbb{C}} \\mid -k<x<0\\}\\,,$$ called the \\emph{canonical strip}.\n\\item[3)] We denote by $\\mathcal{C}_{k}$ the subset of the complex plane given by $$\\mathcal{C}_{k}= l_{k}\\cup \\{x+\\mathrm{i}y\\in {\\mathbb{C}} \\mid y=0\\}\\,.$$\n\\end{itemize}\n\\end{defin}\nObserve that every polynomial in $\\mathcal{T}_{k}$ has all its roots in $\\mathcal{C}_{k}$, for every $k\\in {\\mathbb{Z}}_{> 0}$. \nIn Sections \\ref{sec: generating fct} and \\ref{sec: values k0} we explore connections \nbetween our results and those in \\cite{RV}: we study under which conditions $\\Hi(z)$ belongs to $\\mathcal{T}_{\\k0}$, for certain values of $\\k0$.\nIn \\cite{Go}, Golyshev analyses the position of the roots of the Hilbert polynomial of a Fano variety and a variety of general type. 
In particular, after adapting\nhis terminology to ours, he asks under which conditions all\nthe zeros of $\\Hi(z)$ belong to the canonical strip $\\mathcal{S}_{\\k0}$.\nIn Section \\ref{sec: values k0} we will study the position of the roots of $\\Hi(z)$ in terms of inequalities in the Chern numbers and of $\\k0$, when $\\k0\\geq n-2$ (see Remarks \\ref{pos roots n+1}, \\ref{pos roots n} and Corollaries \\ref{pos roots n-1} and \\ref{pos roots n-2}).\n\nThe next corollary is a straightforward consequence of Proposition \\ref{properties P}. \n\\begin{corollary}\\label{property roots} \nLet $(\\M,\\J,S^1)$ be an $S^1$-space. Let $\\k0\\geq 1$ be the index of $(\\mathsf{M},\\mathsf{J})$, and\nassume that the Hilbert polynomial $\\Hi(z)$ is of positive degree $\\deg(\\Hi)>0$. If at least $\\deg(\\Hi)-3$ roots of $\\Hi(z)$, counted with\nmultiplicity, belong to $\\mathcal{C}_{\\k0}$, then all the roots of $\\Hi(z)$ belong to $\\mathcal{C}_{\\k0}$. In particular, if $n\\leq 3$, then all the roots of $\\Hi(z)$ belong to $\\mathcal{C}_{\\k0}$.\n\\end{corollary}\n\\begin{proof}\nLet $h$ be the number of roots, counted with multiplicity, which belong to $\\mathcal{C}_{\\k0}$; by assumption $h\\geq \\deg(\\Hi)-3$. \nSuppose that one of the remaining $\\deg(\\Hi)-h$ roots, $z_0\\in {\\mathbb{C}}$, does not belong to $\\mathcal{C}_{\\k0}$. 
Then, by Proposition \\ref{properties P} \\eqref{3a}, \nwe have that $z_1=-\\k0-z_0$ is also a root, and since $\\Hi(z)\\in \\mathbb{R}[z]$, the complex conjugates $z_2=\\overline{z_0}$ and $z_3=-\\k0-\\overline{z_0}$ are also roots.\nSince $z_0\\notin \\mathcal{C}_{\\k0}$, it follows that $z_i\\neq z_j$ for $i\\neq j$, and $z_i\\notin \\mathcal{C}_{\\k0}$ for $i=0,1,2,3$, implying that $\\Hi(z)$ has at least $h+4\\geq \\deg(\\Hi)+1$ roots, which is impossible,\nsince $\\Hi(z)$ is not identically zero and hence has at most $\\deg(\\Hi)$ roots.\n\n\\end{proof}\n\n\nWe are now ready to prove Theorem \\ref{main theorem}.\n\\begin{proof}[Proof of Theorem \\ref{main theorem}]\nChoose $\\eta_0$ and $\\tau$ in $H^2(\\mathsf{M};{\\mathbb{Z}})$ such that $\\mathsf{c}_1=\\k0 \\eta_0 + \\tau$,\nwhere $\\tau\\in \\tor(H^2(\\mathsf{M};{\\mathbb{Z}}))$. By Lemma \\ref{line admissible}, both $\\eta_0$ and $\\tau$ admit equivariant extensions\n$\\eta_0^{S^1}$ and $\\tau^{S^1}$ in $H_{S^1}^2(\\mathsf{M};{\\mathbb{Z}})$. Since $\\tau^{S^1}(p)$ does not depend on $p\\in \\mathsf{M}^{S^1}$ (see \\eqref{trivial constant}),\nit follows that $\\mathsf{c}_{1}^{S^1}(p)=\\k0 \\eta_0^{S^1}(p)+c$ for all $p\\in \\mathsf{M}^{S^1}$, for some $c\\in {\\mathbb{Z}}$.\nThus by Proposition \\ref{eq index zero} we have that\n\\begin{equation}\\label{key equation}\n\\ind_{S^1}(\\e{\\,k\\eta_0^{S^1}})=0 \\quad \\mbox{for all}\\quad k=-1,-2,\\ldots,-\\k0+1\\,,\n\\end{equation}\nand by combining \\eqref{K commutes} and \\eqref{key equation} we have that\n$$\n\\Hi(k)=\\ind(\\e{\\,k\\eta_0})=r_K(\\ind_{S^1}(\\e{\\,k\\eta_0^{S^1}}))=0 \\quad \\mbox{for all}\\quad k=-1,-2,\\ldots,-\\k0+1\\,,\n$$\nand \\eqref{H=0 even} follows.\n\nIn order to prove \\eqref{bound k0}, observe that by \\eqref{H=0 even} the set of roots of $\\Hi(z)$ contains $C_0=\\{-1,-2,\\ldots,-\\k0+1\\}$, thus if $\\Hi(z)\\not\\equiv 0$ we must have that $|C_0|=\\k0-1\\leq \\deg(\\Hi)\\leq n$.\n\\end{proof}\nNote that by Proposition \\ref{properties P}, 
$\\Hi(z)$\nhas a different behaviour depending on whether $N_0=0$ or not. \n\\begin{corollary}\\label{bound on k0}\nLet $(\\M,\\J,S^1)$ be an $S^1$-space, and assume that $\\mathsf{c}_1$ is not torsion. Let $\\k0\\geq 1$ be the index of $(\\M,\\J,S^1)$ and\n$\\Hi(z)$ the Hilbert polynomial. Let\n$N_0$ be the number of fixed points with $0$ negative weights.\nThen:\n\\begin{itemize}\n \\item[({\\bf i})] If $N_0\\neq 0\\;\\;\\;$ then $\\;\\;\\;1\\leq \\k0\\leq \\deg(\\Hi)+1\\leq n+1$;\n \\item[({\\bf ii})] If $N_0=0\\;\\;\\;$ then either $\\;\\;\\deg(\\Hi)>0$ and $1\\leq \\k0\\leq \\deg(\\Hi)-1\\leq n-1$, or\n $\\Hi(z)\\equiv 0\n $, the latter being equivalent to $\\mathsf{c}_1^h\\, T_{n-h}[\\mathsf{M}]=0 \\quad \\mbox{for every}\\quad h=0,\\ldots,n\\,$.\n\\end{itemize}\n\\end{corollary}\n\\begin{proof}\nIf $\\k0=1$, the inequalities in ({\\bf i}) clearly hold. Assume $\\k0\\geq 2$.\nObserve that if $N_0\\neq 0$ then Proposition \\ref{properties P} \\eqref{1a} implies that $\\Hi(z)$ is not identically zero, and ({\\bf i}) follows from \\eqref{bound k0}. \n\nSuppose that $N_0=0$. Observe that in this case we must have\\footnote{Indeed, in Section \\ref{examples} it will be proved that $N_0=0$ implies $n\\geq 3$, see Prop.\\ \\ref{dim 4}.} $n\\geq 2$. Indeed, for $n=1$ it is impossible to have $N_0=0$, since by \\eqref{NiN} \nwe would have $N_0=N_1=0$, and hence $|\\mathsf{M}^{S^1}|=0$. \nBy Proposition \\ref{properties P} \\eqref{1a} and \\eqref{3a}, and by \\eqref{H=0 even}, we have that \nthe set of roots of \n$\\Hi(z)$ contains $C'_0=\\{0,-1,\\ldots,-\\k0\\}$. It follows that, if $\\Hi(z)$ is not identically zero, then $|C'_0|=\\k0+1\\leq \\deg(\\Hi)\\leq n$.\n\\end{proof}\nA consequence of Corollary \\ref{bound on k0} in the symplectic category is the following:\n\\begin{corollary}\\label{bound on k0 s}\nLet $(\\mathsf{M},\\omega)$ be a compact, connected symplectic manifold, and $\\k0$ the associated index. 
Then:\n\\begin{itemize}\n\\item[({\\bf i'})] If $(\\mathsf{M},\\omega)$ can be endowed with a Hamiltonian $S^1$-action with isolated fixed points, then $\\;\\;\\;1\\leq \\k0\\leq \\deg(\\Hi)+1\\leq n+1$;\n\\item[({\\bf ii'})] If $(\\mathsf{M},\\omega)$ can be endowed with a non-Hamiltonian $S^1$-action with isolated fixed points, then there are three possibilities:\n\\begin{itemize}\n\\item[(a)] $\\k0=0$, i.e.\\ $\\mathsf{c}_1$ is torsion;\n\\item[(b)] $\\k0>0$ and $\\Hi\\equiv 0$, the latter being equivalent to $\\mathsf{c}_1^h\\, T_{n-h}[\\mathsf{M}]=0 \\quad \\mbox{for every}\\quad h=0,\\ldots,n\\,$;\n\\item[(c)] $\\k0>0$, $\\deg(\\Hi)>0$ and $1\\leq \\k0\\leq \\deg(\\Hi)-1\\leq n-1$.\n\\end{itemize}\n\\end{itemize}\n\\end{corollary}\n\\begin{proof}\nIn order to prove ({\\bf i'}) it is sufficient to notice that, by Lemma \\ref{Lemma:c1 not torsion}, we must have $\\k0>0$. Then the claim follows from Lemma \\ref{N0 1} and Corollary \\ref{bound on k0} ({\\bf i}).\nThe only non-trivial point to prove in ({\\bf ii'}) is the upper bound on the index in (c). But this follows by combining Lemma \\ref{N0 1} and Corollary \\ref{bound on k0} ({\\bf ii}).\n\\end{proof}\nCorollary \\ref{minimal chern ham} follows from Corollary \\ref{bound on k0 s} ({\\bf i'}) and the discussion in Remark \\ref{mcn}.\n\\begin{rmk}\\label{rmk 1}\n(a') In the $6$-dimensional example $(\\widetilde{M},\\omega)$ constructed by Tolman \\cite{T3}, the image of $\\mathsf{c}_1^{S^1}(\\widetilde{M})$ under the restriction map $i^*\\colon H^2_{S^1}(\\widetilde{M};{\\mathbb{Z}})\\to H^2_{S^1}(\\widetilde{M}^{S^1};{\\mathbb{Z}})$\nis identically zero. This restriction is zero when, for instance, $\\mathsf{c}_1$ is torsion in $H^2(\\mathsf{M};{\\mathbb{Z}})$ (see \\cite[Lemma 4.1]{GPS}). However, to the best of the author's knowledge,\nit is still not known whether\n$\\mathsf{c}_1(\\widetilde{M})$ is torsion. 
\n\\\\\n(b') Note that, under the hypothesis of ({\\bf ii'}), if $\\k0\\geq n$ then $\\Hi\\equiv 0$.\n\\end{rmk}\n\\begin{rmk}[{\\bf Comparison with Hattori's results}]\\label{hattori rmk}\nIn \\cite{Ha} Hattori analyses inequalities which are similar to those in Corollary \\ref{bound on k0}, provided that $(\\M,\\J,S^1)$ is an $S^1$-space endowed with a suitable quasi-ample line bundle, defined as follows.\nAn equivariant line bundle $\\mathbb{L}^{S^1}$ is \\emph{fine} if the restrictions of $\\mathbb{L}^{S^1}$ at the fixed points are mutually distinct\n$S^1$-modules, i.e.\\; if $\\mathbb{L}^{S^1}(p_i)= t^{a_i}\\neq t^{a_j}=\\mathbb{L}^{S^1}(p_j)$\nfor every $p_i\\neq p_j$ in $\\mathsf{M}^{S^1}$.\nIt is \\emph{quasi-ample} if it is fine and its first (non-equivariant) Chern class satisfies $ \\mathsf{c}_1(\\mathbb{L}^{S^1})^n[\\mathsf{M}]\\neq 0$.\nIn \\cite[Theorem 5.1]{Ha} the author proves that if $(\\M,\\J,S^1)$ possesses a quasi-ample line bundle $\\mathbb{L}^{S^1}$,\nand its first (non-equivariant) Chern class satisfies $\\mathsf{c}_1=k\\, \\mathsf{c}_1(\\mathbb{L}^{S^1})$ for some\n$k\\in {\\mathbb{Z}}_{>0}$, then $k\\leq n+1 \\leq \\chi(\\mathsf{M})$. \nThus, if the equivariant line bundle $\\eta_0^{S^1}$ defined in the proof of Theorem \\ref{main theorem} is quasi-ample, Hattori's results\nimply that $\\k0\\leq n+1\\leq \\chi(\\mathsf{M})$. 
\nObserve that in Corollary \\ref{bound on k0} ({\\bf i}), we do not require the existence of a quasi-ample line bundle; we assume instead $N_0\\neq 0$.\nWe also remark that if $\\mathsf{c}_1^n[\\mathsf{M}]=0$ then $\\eta_0^{S^1}$ cannot be quasi-ample; on the other hand, if $ \\mathsf{c}_1^n[\\mathsf{M}]=0$ and $N_0\\neq 0$, Corollary \\ref{bound on k0} ({\\bf i}) gives a better upper bound on $\\k0$, since the vanishing of $ \\mathsf{c}_1^n[\\mathsf{M}]$ implies that $\\deg(\\Hi)\\leq n-2$, thus giving $\\k0\\leq n-1$ (see Remark \\ref{c1n=0}).\n\\end{rmk}\n\n\\begin{rmk}\\label{c1n=0}\nFrom \\eqref{ah}, Proposition \\ref{properties P} \\eqref{4a} and Corollary \\ref{bound on k0} it follows that if $\\k0\\geq 1$:\n\\begin{itemize}\n\\item If $ \\mathsf{c}_1^n[\\mathsf{M}]=0$ and $N_0\\neq 0$, then $\\deg(\\Hi(z))\\leq n-2$ and $ \\k0\\leq n-1$;\\\\\n\\item If $\\mathsf{c}_1^n[\\mathsf{M}]=0$ and $N_0=0$, then $\\k0\\leq n-3$ or $\\Hi(z)\\equiv 0$.\n\\end{itemize}\nSimilarly,\n\\begin{itemize}\n\\item If $\\mathsf{c}_1^n[\\mathsf{M}]=\\mathsf{c}_1^{n-2}\\mathsf{c}_2[\\mathsf{M}]=0$ and $N_0\\neq 0$, then $\\deg(\\Hi(z))\\leq n-4$ and $ \\k0\\leq n-3$;\\\\\n\\item If $\\mathsf{c}_1^n[\\mathsf{M}]=\\mathsf{c}_1^{n-2}\\mathsf{c}_2[\\mathsf{M}]=0$ and $N_0=0$, then $\\k0\\leq n-5$ or $\\Hi(z)\\equiv 0$.\n\\end{itemize}\n\\end{rmk}\n\n\\begin{rmk}\\label{nec non Ham}\nObserve that by Corollary \\ref{bound on k0 s} ({\\bf ii'}) and \\eqref{ah} it follows that if $(\\mathsf{M},\\omega)$ supports a non-Hamiltonian action and $\\k0\\geq n$, then\n$\\mathsf{c}_1^n[\\mathsf{M}]=\\mathsf{c}_1^{n-2}\\mathsf{c}_2[\\mathsf{M}]=0$. In Theorem \\ref{nHam-char} we strengthen this fact and\n prove that for $(\\mathsf{M},\\omega,S^1)$ with $\\k0\\geq n$, the vanishing of one of these Chern numbers is indeed equivalent to having a non-Hamiltonian\naction. 
Moreover, if $\\k0=n-2$ or $\\k0=n-1$, then a suitable linear combination of those Chern numbers is zero if and only if the action is non-Hamiltonian. \n\\end{rmk} \nAs we have already observed (Lemma \\ref{Lemma:c1 not torsion}), a compact symplectic manifold with $\\mathsf{c}_1$ torsion cannot support any Hamiltonian circle action.\nIf $\\mathsf{c}_1$ is not torsion, a criterion to conclude the same is given by the following\n\\begin{corollary}\\label{cor non ham 2}\nLet $(\\mathsf{M},\\omega)$ be a compact, connected symplectic manifold of dimension $2n$ with index $\\k0>0$.\nIf \n$$\n\\mathsf{c}_1^h\\,T_{n-h}[\\mathsf{M}]=0 \\quad \\mbox{for all}\\quad h\\geq 2\\k0-n+2\\Big\\lfloor\\frac{n-\\k0}{2}\\Big\\rfloor\n$$\nthen the manifold cannot support any Hamiltonian circle action with isolated fixed points. \n\\end{corollary}\n\\begin{proof}\nFirst of all observe that $\\;2\\k0-n+2\\Big\\lfloor\\frac{n-\\k0}{2}\\Big\\rfloor\\geq \\k0-1$, and equality holds if and only if $n\\not\\equiv \\k0\\mod{2}$.\nBy definition of Hilbert polynomial (see \\eqref{ah}),\nhaving\n$\\mathsf{c}_1^h\\, T_{n-h}[\\mathsf{M}]=0$ for all $h\\geq0$ implies that $\\Hi\\equiv 0$. If $\\k0\\geq 2$, having $\\mathsf{c}_1^h\\, T_{n-h}[\\mathsf{M}]=0$ for all $h\\geq \\k0-1$ implies\nthat $\\deg(\\Hi)\\leq \\k0-2$.\nHowever, as a consequence of Theorem \\ref{main theorem}, $\\Hi(z)$ has at least $\\k0-1$ zeroes, so $\\mathsf{c}_1^h\\, T_{n-h}[\\mathsf{M}]=0$ for all $h\\geq\\k0-1$ implies that\n$\\Hi\\equiv 0$. 
\n\nBy Remark \\ref{H Ham}, the Hilbert polynomial \nof a symplectic manifold with a Hamiltonian $S^1$-action and isolated fixed points can never be identically zero, and the corollary follows from the discussion above for $n\\not\\equiv \\k0\\mod{2}$.\nIf $n\\equiv \\k0\\mod{2}$ then, by Proposition \\ref{properties P} \\eqref{4a} we have that $\\deg(\\Hi)\\leq \\k0-1$ implies $\\deg(\\Hi)\\leq \\k0-2$, and the conclusion holds in this case too.\n\\end{proof}\n\nAnother consequence of Theorem \\ref{main theorem} is the following\n\\begin{corollary}\\label{extra root -k02}\nLet $(\\M,\\J,S^1)$ be an $S^1$-space, and assume its index $\\k0$ is non-zero. Let $\\Hi(z)$ be the Hilbert polynomial. \n\\begin{equation}\\label{extra root}\n \\mbox{If} \\quad n\\equiv\\k0\\mod 2\\quad \\mbox{then} \\;\\;\\;\\Hi\\Big(-\\frac{\\k0}{2}\\Big)=0\\,.\n \\end{equation}\nMoreover, if $\\Hi(z)\\not\\equiv 0$ and $n\\equiv \\k0\\equiv 0\\mod 2$, then the multiplicity of the root $-\\frac{\\k0}{2}$ is at least $2$.\n\\end{corollary}\n\\begin{proof}\nObserve that, if $\\k0\\geq 2$, by \\eqref{H=0 even} we have that $\\widetilde{\\Hi}(z)=\\displaystyle\\frac{\\Hi(z)}{\\prod_{j=1}^{\\k0-1}(z+j)}$\nis a polynomial. The same conclusion follows if $\\k0=1$ by setting the empty product to be $1$. 
Hence\n by Proposition \\ref{properties P} \\eqref{3a} we have that for all $\\k0\\geq 1$\n\\begin{equation}\\label{H0 tilde}\n\\widetilde{\\Hi}(-\\k0-z)=\\frac{\\Hi(-\\k0-z)}{\\prod_{j=1}^{\\k0-1}(-\\k0-z+j)}=\\frac{(-1)^n\\Hi(z)}{(-1)^{\\k0-1}\\prod_{j=1}^{\\k0-1}(z+j)}=(-1)^{n-\\k0+1}\\widetilde{\\Hi}(z)\\;.\n\\end{equation}\nHence if $n\\equiv \\k0\\mod 2$, from \\eqref{H0 tilde} it follows that $\\widetilde{\\Hi}(-\\frac{\\k0}{2})=0$, thus proving \\eqref{extra root}.\nFinally, if $\\k0$ is even, then $-\\frac{\\k0}{2}\\in \\{-1,\\ldots,-\\k0+1\\}\\subset {\\mathbb{Z}}$, hence it is a root of both $\\prod_{j=1}^{\\k0-1}(z+j)$ and $\\widetilde{\\Hi}(z)$.\n\\end{proof}\n\nFrom Theorem \\ref{main theorem} we also have the following refinement of Corollary \\ref{property roots}, which concerns the position of the roots of $\\Hi(z)$.\n\\begin{corollary}\\label{position roots}\nLet $(\\M,\\J,S^1)$, $\\k0$ and $\\Hi(z)$ be as in Theorem \\ref{main theorem}, and assume that $\\deg(\\Hi)>0$. If $\\k0\\geq n-2$ then all the roots of $\\Hi(z)$ belong to $\\mathcal{C}_{\\k0}$. \n\\end{corollary}\n\nThe next corollary gives useful equations in the Chern numbers\ndepending on the index $\\k0$ and the parity of $n-\\k0$.\n\\begin{corollary}[{\\bf Equations in the Chern numbers}]\\label{cor equations chern numbers}\nLet $(\\M,\\J,S^1)$ be as in Theorem \\ref{main theorem}. 
Then \n\\begin{equation}\\label{equations chern numbers}\n\\sum_{h=0}^n \\frac{1}{h!}\\left( \\frac{k}{\\k0}\\right)^h \\mathsf{c}_1^h\\, T_{n-h}[\\mathsf{M}]=0\\quad \\;\\;\\;\\;\\mbox{for all}\\;\\; k \\in \\{-1,-2,\\ldots,-\\k0+1\\}\\,.\n\\end{equation}\nMoreover, if $n\\equiv \\k0\\mod 2$ then\n\\begin{equation}\\label{k02}\n\\sum_{h=0}^n \\frac{(-1)^h}{2^h h!}\\mathsf{c}_1^h\\, T_{n-h}[\\mathsf{M}]=0\\;,\n\\end{equation}\nand if $n\\equiv \\k0 \\equiv 0\\mod 2$ then\n\\begin{equation}\\label{k02 2}\n\\sum_{h=1}^n \\frac{(-1)^{h-1}}{2^{h-1} (h-1)!}\\mathsf{c}_1^h\\, T_{n-h}[\\mathsf{M}]=0\n\\end{equation}\n\\end{corollary}\n\\begin{proof}\nIt is sufficient to notice that \\eqref{k02 2} is equivalent to having $\\Hi'(-\\frac{\\k0}{2})=0$, and the proof of Corollary \\ref{cor equations chern numbers} is a direct consequence of Theorem \\ref{main theorem}, Corollary \\ref{extra root -k02} and\nthe definition of Hilbert polynomial\n\\eqref{Hilbert pol}.\n\\end{proof}\nThus the cases in which we can derive more restrictions on the Chern numbers are when $\\k0$ is ``large'' (see Section \\ref{sec: values k0}). \n\\\\$\\;$\n\nBefore proceeding with the analysis of $\\Hi(z)$ for different values of $\\k0$, in the next subsection we study the properties of\nthe generating function of the sequence $\\{\\Hi(k)\\}_{k\\in \\mathbb{N}}$.\n\n\n\\subsection{The generating function associated to the Hilbert polynomial}\\label{sec: generating fct}\n\nWe recall that the \\emph{generating function} of a sequence $\\{b_k\\}_{k\\in \\mathbb{N}}\\subset \\mathbb{R}$ is the formal power series\n$$\nP(t)=\\sum_{k\\geq 0} b_k t^k\\,.\n$$\n\n\nThe following result is due to Popoviciu \\cite{Po} (see also \n\\cite[Corollary 4.7]{St}).\n\\begin{prop}[Popoviciu]\\label{Pop}\nLet $H(z)$ be a polynomial of degree $m$ and $P(t)$ the generating function of the sequence $\\{H(k)\\}_{k\\in \\mathbb{N}}$. 
Then \n\\begin{equation}\\label{sym P}\nP(t^{-1})=(-1)^{m+1}t^{k_0}P(t) \n \\end{equation}\n for some $k_0\\in {\\mathbb{Z}}$ if and only if $k_0\\geq 1$, \n \\begin{equation}\\label{zeros H}\n H(-1)=H(-2)=\\cdots = H(-k_0+1)=0\n \\end{equation}\n and \n \\begin{equation}\\label{symmetry}\n H(k)=(-1)^m H(-k_0-k)\\quad \\mbox{for every}\\quad k\\in {\\mathbb{Z}}\\,.\n \\end{equation}\n\\end{prop}\n\nAs a consequence of the properties satisfied by $\\Hi(z)$, we have the following\n\n\\begin{prop}\\label{gen fct hilbert}\nLet $(\\M,\\J,S^1)$ be an $S^1$-space and assume its index is non-zero. Let $\\Hi(z)$ be the associated Hilbert polynomial of degree $\\deg(\\Hi)=m$. Let $N_0$\nbe the number of fixed points with $0$ negative weights.\nThen the generating function $\\Gen(t)$ of $\\{\\Hi(k)\\}_{k\\in \\mathbb{N}}$ \nis given by\n\\begin{equation}\\label{gener fct}\n\\Gen(t)=\\frac{\\U(t)}{(1-t)^{m+1}} \n\\end{equation}\nwhere $\\U(t)$ is a polynomial in $\\mathbb{R}[t]$ such that $\\U(0)=N_0$, with \n\\begin{equation}\\label{prop Gen1}\n\\Gen(t^{-1})=(-1)^{m+1}t^{\\k0} \\Gen(t)\n\\end{equation}\nand \n\\begin{equation}\\label{prop U}\n \\U(t^{-1})=t^{\\k0-m-1}\\U(t)\\,.\n\\end{equation}\nMoreover, if $\\Hi(z)\\not\\equiv 0$, then\n\\begin{equation}\\label{degree U}\n\\frac{m+1-\\k0}{2}\\leq \\deg(\\U)\\leq m+1-\\k0\\,,\n\\end{equation}\nand $\\deg(\\U)=m+1-\\k0$ if and only if $N_0\\neq 0$.\nHere $\\deg(\\U)$ denotes the degree of $\\U$.\n\\end{prop}\nThus, by Lemma \\ref{N0 1}, if $(\\mathsf{M},\\omega)$ is a compact symplectic manifold and the $S^1$-action is Hamiltonian, then \nthe polynomial $\\U(t)$ is of degree $m+1-\\k0$.\n\\begin{proof}\nIt is well known that the generating function of a sequence $\\{H(k)\\}_{k\\in \\mathbb{N}}$, where $H\\in \\mathbb{R}[z]$ is a polynomial of degree $m$,\nis of the form given by \\eqref{gener fct}, where $\\U(t)\\in \\mathbb{R}[t]$ is a polynomial of degree at most equal to $m$. 
In order to prove that $\\U(0)=N_0$, observe that \n$$\\Gen(t)=\\frac{\\U(t)}{(1-t)^{m+1}}= \\U(t)\\sum_{k\\geq 0}\\binom{m+k}{m}t^k=\\U(0)+tQ(t)$$\nfor some formal power series $Q(t)\\in \\mathbb{R}[[t]]$. Thus $\\U(0)=\\Gen(0)=\\Hi(0)$, and by Proposition \\ref{properties P} \\eqref{1a} $\\Hi(0)=N_0$.\n\nAs for \\eqref{prop U}, observe that by Theorem \\ref{main theorem} \\eqref{H=0 even}, if $\\k0\\geq 2$ we have that \\eqref{zeros H} is satisfied for $k_0=\\k0$, the index of $(\\mathsf{M},\\mathsf{J})$.\nIf $\\k0=1$ \\eqref{zeros H} is trivially satisfied, since it is the empty condition. \nMoreover, by Proposition \\ref{properties P} \\eqref{3a} and \\eqref{4a}, we have that \\eqref{symmetry} is satisfied as well. Thus by Proposition \\ref{Pop} \nthe generating function $\\Gen(t)$ of $\\{\\Hi(k)\\}_{k\\in \\mathbb{N}}$ satisfies \\eqref{prop Gen1}, obtaining\n$$\n(-1)^{m+1}\\frac{t^{m+1}\\U(t^{-1})}{(1-t)^{m+1}}=\\Gen(t^{-1})=(-1)^{m+1}t^{\\k0}\\Gen(t)=(-1)^{m+1}\\frac{t^{\\k0}\\U(t)}{(1-t)^{m+1}}\n$$\nand \\eqref{prop U} follows.\n\nLet $e=\\deg(\\U)$ and $\\U(t)=\\alpha_0+\\alpha_1t+\\cdots +\\alpha_e t^e$. 
By \\eqref{prop U} we have that\n\\begin{equation}\\label{coeff U}\n\\alpha_et^{m+1-\\k0-e}+\\alpha_{e-1}t^{m+1-\\k0-e+1}+\\cdots + \\alpha_0t^{m+1-\\k0}=\\alpha_0+\\alpha_1t+\\cdots +\\alpha_e t^e\\,,\n\\end{equation}\nhence we must have $0\\leq m+1-\\k0-e\\leq e$, and \\eqref{degree U} follows.\nThe equality in \\eqref{coeff U} also implies that $\\alpha_0=\\U(0)=N_0\\neq 0$ if and only if $m+1-\\k0-e=0$.\n\\end{proof}\n\nWe recall that a polynomial of degree $e$, $U(t)=\\alpha_0+\\alpha_1t+\\cdots + \\alpha_e t^e$, is called \\emph{self-reciprocal} if \n\\begin{equation}\\label{self-rec}\nt^{e}U(t^{-1})=U(t)\\,.\n\\end{equation}\nSuch a polynomial is sometimes also referred to as \\emph{palindromic}, since \\eqref{self-rec} is equivalent to saying that\nthe list of coefficients $\\alpha_0\\,\\alpha_1\\,\\cdots \\alpha_e$ is a palindrome, i.e.\\;$\\alpha_i=\\alpha_{e-i}$ for every $i$.\n\\begin{corollary}\\label{U palindrom}\nWith the same notation as in Proposition \\ref{gen fct hilbert}, we have that:\n\\begin{itemize}\n\\item[({\\bf i})] $\\U(t)$ is divisible by $t^{m+1-\\k0-e}$, where\n$e=\\deg(\\U)$, and the polynomial $t^{e+\\k0-m-1}\\U(t)$ is self-reciprocal. \n\\item[({\\bf ii})] If $(\\mathsf{M},\\omega)$ is a symplectic manifold and the $S^1$-action is Hamiltonian,\nthen $\\U(t)$ is self-reciprocal. Moreover if $(\\mathsf{M},\\omega)$ is monotone with $\\mathsf{c}_1=\\k0 [\\omega]$, then \n$\\deg(\\U)=n+1-\\k0$.\n\\end{itemize}\n\\end{corollary}\n\\begin{proof}\nThe claims in ({\\bf i}) are a consequence of \\eqref{prop U} and \\eqref{coeff U}. 
\nIf $\\deg(\\U)=e=m+1-\\k0$, which by Proposition \\ref{gen fct hilbert} is equivalent to having $N_0\\neq 0$,\nwe obtain that $\\U(t)$ is self-reciprocal, and the first claim in ({\\bf ii}) follows from Lemma \\ref{N0 1}.\nThe second claim follows from observing that monotonicity implies $ \\mathsf{c}_1^n[\\mathsf{M}] \\neq 0$, hence $\\deg(\\Hi)=n$.\n\\end{proof}\n\\begin{rmk}\\label{num of cds}\nObserve that the polynomial $\\U(t)$ determines $\\Gen(t)$ which, in turn, determines $\\Hi(z)$. Thus the Hilbert polynomial, and hence\nall the combinations of Chern numbers $\\mathsf{c}_1^h T_{n-h}[\\mathsf{M}]$, for $h=0,\\ldots,n$, are completely determined by the coefficients of $\\U(t)$.\nMoreover, if $N_0$ is given, the coefficient of degree zero in $\\U(t)$ is known, since by Proposition \\ref{gen fct hilbert} $\\U(0)=N_0$.\nIn conclusion, from Corollary \\ref{U palindrom} it follows that the number of coefficients of $\\U(t)$ to determine is at most\nequal to $\\floor*{\\displaystyle\\frac{m-\\k0-1}{2}}+1$. This explains why\nthe number of conditions that completely determine the Hilbert polynomial (and hence the combinations of Chern numbers $\\mathsf{c}_1^h T_{n-h}[\\mathsf{M}]$)\nis the same when $\\k0=n+1-2k$ and $\\k0=n-2k$, for every $k\\in {\\mathbb{Z}}$ such that $0\\leq k\\leq \\frac{n-1}{2}$.\n\\end{rmk}\nIn the beautiful note \\cite{RV}, the author analyses the position of the roots of $\\Hi(z)$ in terms of those of $\\U(t)$,\n deriving the following\n\\begin{thm}[Rodriguez-Villegas \\cite{RV}]\\label{RV theorem}\nLet the notation be as in Proposition \\ref{gen fct hilbert}, and $\\mathcal{T}_k$ as in Definition \\ref{def: RVGo}. \nAssume that $\\Hi(z)\\not\\equiv 0$ and that all the roots of $\\U(t)$ are on the unit circle. 
Then $\\Hi(z)$ belongs to $\\mathcal{T}_{\\k0}$.\n\\end{thm}\nIn the next section we analyse the different expressions of $\\U(t)$ for $\\k0\\in \\{n-2,n-1,n,n+1\\}$.\nAs a consequence, we prove that if $\\k0=n$ or $\\k0=n+1$, then $\\Hi(z)$ always belongs to $\\mathcal{T}_{\\k0}$ (unless $\\Hi(z)\\equiv 0$).\nIf $\\k0=n-2$ or $n-1$, we\nderive necessary and sufficient conditions on the Chern numbers that \nensure that $\\Hi(z)$ is in $\\mathcal{T}_{\\k0}$ or, more generally, that ensure that its roots are on the canonical strip $\\mathcal{S}_{\\k0}$ (see Corollaries \\ref{pos roots n-1} and \\ref{pos roots n-2}). As a byproduct, we prove that when $N_0=1$ and $n$ is large enough, then $\\Hi(z)$ belongs to $\\mathcal{T}_{\\k0}$ \\emph{if and only if}\nthe roots of $\\U(t)$ are on the unit circle (see Corollaries \\ref{RV1} and \\ref{RV2}).\n\n\\subsection{Connection with Ehrhart polynomials}\\label{connections ehrhart}\nSome of the results in Section \\ref{equations chern} can be regarded as a generalisation of what is already known for the Ehrhart polynomial of a reflexive polytope.\nThe link between Hilbert polynomials of $S^1$-spaces and Ehrhart polynomials of reflexive polytopes is given by monotone symplectic toric manifolds.\n\nSuppose that $(\\mathsf{M},\\omega)$ is a compact symplectic manifold of dimension $2n$, and that the $S^1$-action extends to a toric action, i.e.\\\n$S^1$ is a circle subgroup of an $n$-dimensional torus $\\mathbb{T}^n$ acting effectively on $(\\mathsf{M},\\omega)$ \n with moment map $\\Psi\\colon (\\mathsf{M},\\omega) \\to Lie(\\mathbb{T}^n)^*$. We identify $Lie(\\mathbb{T}^n)^*$ with $\\mathbb{R}^n$, and let the dual lattice of $\\mathbb{T}^n$ be ${\\mathbb{Z}}^n$. 
\n By the Atiyah \\cite{At1} and Guillemin-Sternberg \\cite{GS82} convexity theorem, we know\n that $\\Psi(\\mathsf{M})=: \\Delta$ is a convex polytope; more precisely, it is the convex hull \n of its vertices, which coincide with the images of the fixed points of the $\\mathbb{T}^n$ action. \n Suppose that $(\\mathsf{M},\\omega)$ is also \\emph{monotone} and rescale the symplectic form so that $\\mathsf{c}_1=\\k0 [\\omega]$ (so $[\\omega]$\n is primitive in $H^2(\\mathsf{M};{\\mathbb{Z}})$, which is torsion free in this case). Choose the moment map $\\Psi$ so that all the vertices of $\\Delta$ belong to the lattice ${\\mathbb{Z}}^n$: we call such a polytope $\\Delta$ \\emph{primitive} and \\emph{integral}. \n As a consequence of a result of Danilov \\cite{Da}, we have that \n \\emph{the Hilbert polynomial $\\Hi(z)$ of $(\\mathsf{M},\\omega)$ coincides with the Ehrhart polynomial $i_{\\Delta}(z)$ of $\\Delta$}.\n Moreover, it is well-known that there exists a (unique) $k\\in {\\mathbb{Z}}_{>0}$ such that the dilated polytope \n $\\Delta'=k \\Delta$, suitably translated by an integer vector, is \\emph{reflexive}\\footnote{An integral polytope $\\mathcal{P}\\subset \\mathbb{R}^n$ of dimension $n$ is reflexive if it contains the origin in its interior, and its dual polytope $\\mathcal{P}^*=\\{\\mathbf{x}\\in \\mathbb{R}^n\\mid \\mathbf{x}\\cdot \\mathbf{y}\\geq -1\\mbox{ for all }\\mathbf{y}\\in \\mathcal{P}\\}$\n is also integral.}. 
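For instance, for ${\\mathbb{C}} P^1$ with the symplectic form normalised so that $[\\omega]$ is primitive, we have $\\mathsf{c}_1=2[\\omega]$ and we can take $\\Delta=[0,1]$; its Ehrhart polynomial $i_{\\Delta}(z)=z+1$ indeed coincides with $\\Hi_{{\\mathbb{C}} P^1}(z)$, and here $k=2$, since $2\\Delta=[0,2]$, translated to $[-1,1]$, is reflexive. 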
By a result of Hibi \\cite{Hibi}, this is equivalent to saying that the Ehrhart polynomial $i_{\\Delta'}(z)$ and its associated generating function $P_{\\Delta'}(t)=\\frac{U(t)}{(1-t)^{n+1}}$\n satisfy \n \\begin{equation}\\label{deltas}\n i_{\\Delta'}(z)=(-1)^n i_{\\Delta'}(-1-z)\\quad\\quad\\mbox{and}\\quad\\quad P_{\\Delta'}(t^{-1})=(-1)^{n+1}t \\,P_{\\Delta'}(t).\n \\end{equation}\nThe following gives a combinatorial characterisation of the index $\\k0$ of $(\\mathsf{M},\\omega)$ (which, by Remark \\ref{mcn}, coincides with the minimal Chern number): \n\\begin{lemma}\\label{k0 equivalence}\nLet $(\\mathsf{M},\\omega,\\mathbb{T},\\Psi)$ be a monotone symplectic toric manifold, with symplectic form satisfying $\\mathsf{c}_1=\\k0[\\omega]$, and consider the primitive integral moment polytope image $\\Delta$.\nThen the index $\\k0$ is the unique integer such that $\\Delta'=\\k0 \\Delta$ is reflexive.\n\\end{lemma}\n\\begin{proof}\nFirst of all, observe that from $\\Delta'=k\\Delta$ we have $i_{\\Delta}(z)=i_{\\Delta'}\\big(\\frac{z}{k}\\big)$ for every $z\\in {\\mathbb{C}}$.\nMoreover, as mentioned before, $\\Hi(z)=i_\\Delta(z)$.\n So from \\eqref{deltas} we have that \n$$\n\\Hi(z)=i_{\\Delta}(z)=i_{\\Delta'}\\Big(\\frac{z}{k}\\Big)=(-1)^n i_{\\Delta'}\\Big(-1-\\frac{z}{k}\\Big)=(-1)^n i_\\Delta(-k-z)=(-1)^n \\Hi(-k-z)\\,,\n$$\nfor every $z\\in {\\mathbb{C}}$. By Remark \\ref{H Ham}, $\\Hi(z)$ is a nonzero polynomial, so Proposition \\ref{properties P} \\eqref{3a} implies that $\\k0=k$. 
\n\\end{proof}\nIt is in this sense that we can regard the symmetry property of $\\Hi(z)$ (i.e.\\ Proposition \\ref{properties P} \\eqref{3a}) and the results in Proposition \\ref{gen fct hilbert} as a generalisation of \n\\eqref{deltas}.\n\n\\section{Computation of $\\Hi(z)$ and Chern numbers for some values of $\\k0$}\\label{sec: values k0}\nIn this section, we compute explicitly the Hilbert polynomial $\\Hi(z)$ and its associated generating \nfunction for $\\k0\\geq n-2$ and $\\k0\\neq 0$, deriving more properties of the Chern numbers of $(\\M,\\J,S^1)$. \n\nLet $\\sigma_j(x_1,\\ldots,x_n)$ be the $j$-th elementary symmetric polynomial \nin $x_1,\\ldots,x_n$, for $j=0,\\ldots,n$, and let $\\left[ \\begin{array}{c} n \\\\ k \\end{array} \\right]$ be the \\emph{unsigned Stirling numbers of the\nfirst kind}, where $k,n \\in \\mathbb{N}$ and $1\\leq k\\leq n$, satisfying\n\\begin{equation}\\label{stirling}\n(x)^{(n)}=x(x+1)\\cdots (x+n-1)=\\sum_{k=0}^n \\left[ \\begin{array}{c} n \\\\ k \\end{array} \\right]x^k\\,,\n\\end{equation}\nwhere $(x)^{(n)}$ is the rising factorial. \nThus we have the relation:\n\\begin{equation}\\label{stirling permutation}\n\\sigma_k(1,2,\\ldots,n)=\\left[ \\begin{array}{c} n+1 \\\\ n-k+1 \\end{array} \\right]\\,,\n\\end{equation}\nand the following well-known identities:\n\\begin{align}\n& \\sigma_0(1,2,\\ldots,n)=\\left[ \\begin{array}{c} n+1 \\\\ n+1 \\end{array} \\right]=1\\nonumber\\\\\n\\label{sigma1}& \\sigma_1(1,2,\\ldots,n)=\\left[ \\begin{array}{c} n+1 \\\\ n \\end{array} \\right] = \\binom{n+1}{2}\\\\\n\\label{sigma2} & \\sigma_2(1,2,\\ldots,n)= \\left[ \\begin{array}{c} n+1 \\\\ n-1 \\end{array} \\right] =\\frac{1}{4}(3n+2)\\binom{n+1}{3}=\\frac{(3n+2)(n+1)n(n-1)}{24}\\,.\n\\end{align}\nObserve that by Corollary \\ref{bound on k0}, if $\\k0>n+1$ then $\\Hi(z)\\equiv 0$ and $ \\mathsf{c}_1^h T_{n-h}[\\mathsf{M}]=0$ for every $h=0,\\ldots,n$. 
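As a quick sanity check, taking $n=3$ in \\eqref{stirling} gives $(x)^{(3)}=x(x+1)(x+2)=2x+3x^2+x^3$, so that $\\left[ \\begin{array}{c} 3 \\\\ 1 \\end{array} \\right]=2$, $\\left[ \\begin{array}{c} 3 \\\\ 2 \\end{array} \\right]=3$ and $\\left[ \\begin{array}{c} 3 \\\\ 3 \\end{array} \\right]=1$, in agreement with \\eqref{stirling permutation} for $n=2$: $\\sigma_0(1,2)=1$, $\\sigma_1(1,2)=3$ and $\\sigma_2(1,2)=2$. 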
So in the rest of the section\nwe will focus on the cases in which $0<\\k0\\leq n+1$.\n\nBefore beginning, we remind the reader that the Hilbert polynomial of ${\\mathbb{C}} P^n$ is given by $\\frac{\\prod_{j=1}^n(z+j)}{n!}$.\n\\begin{prop}[$\\k0=\\mathbf{n+1}$]\\label{cor n+1}\n\nLet $(\\M,\\J,S^1)$ be an $S^1$-space with index $\\k0=n+1$. Let $N_0$ be the number of fixed points with $0$ negative weights.\n Then \n\\begin{equation}\\label{H k0=n+1}\n\\Hi(z)= \\frac{N_0}{n!}\\prod_{j=1}^n(z+j)=N_0 \\Hi_{{\\mathbb{C}} P^n}(z)\\,,\n\\end{equation}\nwhere $\\Hi_{{\\mathbb{C}} P^n}(z)$ is the Hilbert polynomial of ${\\mathbb{C}} P^n$, and for every $h=0,\\ldots,n$ we have \n\\begin{equation}\\label{n+1 precise}\n\\mathsf{c}_1^h\\, T_{n-h}[\\mathsf{M}]=N_0\\frac{h!(n+1)^h}{n!}\\left[ \\begin{array}{c} n+1 \\\\ h+1 \\end{array} \\right]=N_0\\, \\mathsf{c}_1^h\\,T_{n-h}[{\\mathbb{C}} P^n].\n\\end{equation}\nIn particular \n\\begin{equation}\\label{c1 n+1}\n\\mathsf{c}_1^n[\\mathsf{M}]=N_0 (n+1)^n\\, \n\\end{equation}\nand\n\\begin{equation}\\label{c1c2}\n \\mathsf{c}_1^{n-2}\\mathsf{c}_2[\\mathsf{M}]=N_0\\frac{n(n+1)^{n-1}}{2}\\,.\n\\end{equation}\nMoreover, the generating function of $\\{\\Hi(k)\\}_{k\\in \\mathbb{N}}$ is given by\n\\begin{equation}\\label{gen fct n+1}\n\\Gen(t)=N_0\\frac{1}{(1-t)^{n+1}}\\,.\n\\end{equation}\n\\end{prop}\n\n\\begin{rmk}\\label{pos roots n+1}\nFrom \\eqref{gen fct n+1} and Proposition \\ref{gen fct hilbert} we have that in this case $\\U(t)=N_0$, and\nif $N_0\\neq 0$, the zeros of $\\Hi(z)$ coincide with the integers greater than $-\\k0=-(n+1)$ and smaller than $0$,\nthus in particular $\\Hi(z)$ belongs to $\\mathcal{T}_{n+1}$, and hence all its roots are on the canonical strip $\\mathcal{S}_{n+1}$ (see Theorem \\ref{RV theorem}).\n\\end{rmk}\n\n\n\\begin{proof}[Proof of Proposition \\ref{cor n+1}]\nIf $N_0=0$ then all the claims in Proposition \\ref{cor n+1} follow from Corollary \\ref{bound on k0} ({\\bf ii}). \nSuppose that $N_0\\neq 0$. 
By Proposition \\ref{properties P} \\eqref{1a}, $\\Hi(z)$ is a nonzero polynomial which, by Theorem \\ref{main theorem}\n\\eqref{H=0 even}, has roots $-1,-2,\\ldots,-n$ (note that in this case $\\k0\\geq 2$). Thus $\\Hi(z)=\\alpha \\prod_{j=1}^n(z+j)$.\nIn order to find $\\alpha$ we can use Proposition \\ref{properties P} \\eqref{1a}, obtaining $\\Hi(0)=\\alpha\\, n!=N_0$, and \\eqref{H k0=n+1} follows.\nFor $h=0,\\ldots,n$, the term of degree $h$ on the right hand side of \\eqref{H k0=n+1} is given by $\\frac{N_0}{n!}\\sigma_{n-h}(1,2,\\ldots,n)=\\frac{N_0}{n!}\\left[ \\begin{array}{c} n+1 \\\\ h+1 \\end{array} \\right]$.\nOn the other hand, the term of degree $h$ on the left hand side of \\eqref{H k0=n+1} can be computed by using \\eqref{Hilbert pol}, obtaining\n$\\displaystyle\\frac{\\mathsf{c}_1^h\\,T_{n-h}}{(n+1)^h\\,h!}[\\mathsf{M}]$; this completes the proof of \\eqref{n+1 precise}.\nIn order to prove \\eqref{c1 n+1} it is sufficient to consider \\eqref{n+1 precise} with $h=n$ (or $h=n-1$).\nBy taking $h=n-2$, from \\eqref{n+1 precise} we have \n\\begin{equation}\\label{sigma 2}\n\\mathsf{c}_1^{n-2}\\left(\\frac{\\mathsf{c}_1^2+\\mathsf{c}_2}{12}\\right)[\\mathsf{M}]=N_0 \\frac{(n-2)!(n+1)^{n-2}}{n!}\\left[ \\begin{array}{c} n+1 \\\\ n-1 \\end{array} \\right]\\,,\n\\end{equation} \nwhich, combined with \\eqref{c1 n+1} and \\eqref{sigma2}, proves \\eqref{c1c2}.\nIn order to prove \\eqref{gen fct n+1}, observe that, by the above discussion, if $\\k0=n+1$ then $\\Hi(z)$ is either of degree $n$,\nwhich happens exactly if $N_0\\neq 0$, or it is identically zero. 
In the first case, by Proposition \\ref{gen fct hilbert}, $\\U(t)$ is of degree\nzero and $\\U(0)=N_0$, implying \\eqref{gen fct n+1}.\n\\end{proof}\n\n\nAs we will see in the next proposition, the case $\\k0=n$ is similar to $\\k0=n+1$.\nWe recall that the Hilbert polynomial of $Q$, the hyperquadric in ${\\mathbb{C}} P^{n+1}$, is given by $\\frac{2}{n!}\\Big(z+\\frac{n}{2}\\Big)\\prod_{j=1}^{n-1}(z+j)$.\n\\begin{prop}[$\\k0=\\mathbf{n}$]\\label{cor n}\n\nLet $(\\M,\\J,S^1)$ be an $S^1$-space with index $\\k0=n$. Let $N_0$ be the number of fixed points with $0$ negative weights.\nThen $n\\geq 2$ and\n\\begin{equation}\\label{H k0=n}\n\\Hi(z)= \\frac{2\\,N_0}{n!}\\Big(z+\\frac{n}{2}\\Big)\\prod_{j=1}^{n-1}(z+j)=N_0 \\Hi_Q(z)\\,, \n\\end{equation}\nwhere $\\Hi_Q(z)$ is the Hilbert polynomial of $Q$, the hyperquadric in ${\\mathbb{C}} P^{n+1}$. \nThus for every $h=0,\\ldots,n$ we have \n\\begin{equation}\\label{n precise}\n\\mathsf{c}_1^h\\, T_{n-h}[\\mathsf{M}]=N_0\\frac{2\\,h!\\,n^h}{n!}\\Big(\\left[ \\begin{array}{c} n \\\\ h \\end{array} \\right]+\\frac{n}{2}\\left[ \\begin{array}{c} n \\\\ h+1 \\end{array} \\right]\\Big)\\, = N_0 \\, \\mathsf{c}_1^h\\,T_{n-h}[Q].\n\\end{equation}\nIn particular \n\\begin{equation}\\label{c1 n}\n\\mathsf{c}_1^n[\\mathsf{M}]=N_0\\, 2 n^n\n\\end{equation}\nand\n\\begin{equation}\\label{c1c22}\n\\mathsf{c}_1^{n-2}\\mathsf{c}_2[\\mathsf{M}]=N_0\\,n^{n-2}(n^2-n+2)\\,.\n\\end{equation}\nMoreover, the generating function of $\\{\\Hi(k)\\}_{k\\in \\mathbb{N}}$ is given by\n\\begin{equation}\\label{gen fct n}\n\\Gen(t)=N_0\\frac{1+t}{(1-t)^{n+1}}\\,.\n\\end{equation}\n\\end{prop}\n\n\n\\begin{rmk}\\label{pos roots n}\nFrom \\eqref{gen fct n} and Proposition \\ref{gen fct hilbert} we have that in this case $\\U(t)=N_0(1+t)$. 
Thus,\nif $N_0\\neq 0$, the root of $\\U(t)$ is on the unit circle, and \nthe zeros of $\\Hi(z)$ coincide with the integers greater than $-\\k0=-n$ and smaller than $0$, together with $-\\frac{n}{2}$,\nthus in particular $\\Hi(z)$ belongs to $\\mathcal{T}_{n}$, and hence its roots are on the canonical strip $\\mathcal{S}_{n}$ (see Theorem \\ref{RV theorem}).\n\n\\end{rmk}\n\n\n\n\\begin{proof}[Proof of Proposition \\ref{cor n}]\nFirst of all, for $n=1$ observe that the only compact almost complex manifold supporting a circle action with discrete fixed point set is the\nsphere, since such a surface must have positive Euler characteristic. In this case $\\k0=2$. So we must have $n\\geq 2$, and hence $\\k0\\geq 2$.\n\nThe rest of the proof is very similar to that of Proposition \\ref{cor n+1}, but we include it here for the sake of completeness.\nIf $N_0=0$ then all the claims in Proposition \\ref{cor n} follow from Corollary \\ref{bound on k0} ({\\bf ii}). \nSuppose that $N_0\\neq 0$. Then by Proposition \\ref{properties P} \\eqref{1a} we have that $\\Hi(z)\\not\\equiv 0$, and from\nTheorem \\ref{main theorem} \\eqref{H=0 even} and Corollary \\ref{extra root -k02} we have that \n$\\Hi(z)=\\beta(z+\\frac{n}{2})\\prod_{j=1}^{n-1}(z+j)$. \nIn order to determine $\\beta$ we can use \nProposition \\ref{properties P} \\eqref{1a}, obtaining \n$\\beta=\\frac{2\\,N_0}{n!}$, thus implying \\eqref{H k0=n}. 
The equations in \\eqref{n precise} follow easily from observing that\n$$\n\\sigma_{n-h}\\Big(1,2,\\ldots,n-1,\\frac{n}{2}\\Big)=\\sigma_{n-h}\\big(1,2,\\ldots,n-1\\big)+\\frac{n}{2}\\sigma_{n-h-1}(1,2,\\ldots,n-1)=\\left[ \\begin{array}{c} n \\\\ h \\end{array} \\right]+\\frac{n}{2}\\left[ \\begin{array}{c} n \\\\ h+1 \\end{array} \\right]\\,.\n$$\n\nIn order to prove \\eqref{c1 n} it is sufficient to consider \\eqref{n precise} with $h=n$ (or $h=n-1$).\nTo prove \\eqref{c1c22}, first of all observe that \n$$\n\\sigma_2\\Big(1,2,\\ldots,n-1,\\frac{n}{2}\\Big)=\\sigma_2(1,2,\\ldots,n-1)+\\frac{n}{2}\\sigma_1(1,2,\\ldots,n-1)=\\frac{1}{24}n(n-1)(3n^2-n+2)\\,,\n$$\nwhere the last equality follows from \\eqref{sigma1} and \\eqref{sigma2}.\nThus, if we take $h=n-2$ in \\eqref{n precise} we obtain\n\\begin{align*}\n\\mathsf{c}_1^{n-2}\\left(\\frac{\\mathsf{c}_1^2+\\mathsf{c}_2}{12}\\right)[\\mathsf{M}]& =N_0\\frac{2(n-2)!n^{n-2}}{n!}\\sigma_2\\Big(1,2,\\ldots,n-1,\\frac{n}{2}\\Big)\\\\\n & = \\frac{N_0}{12}n^{n-2}(3n^2-n+2)\\,,\n\\end{align*}\nand the conclusion follows from \\eqref{c1 n}.\n\nIn order to prove \\eqref{gen fct n}, observe that, by the above discussion, if $\\k0=n$ then $\\Hi(z)$ is either of degree $n$,\nwhich happens exactly if $N_0\\neq 0$, or it is identically zero. 
In the first case, by Proposition \\ref{gen fct hilbert} and Corollary \\ref{U palindrom}, $\\U(t)$ is a self-reciprocal polynomial of degree\none and $\\U(0)=N_0$, thus implying \\eqref{gen fct n}.\n\n\\end{proof}\n\nFrom Propositions \\ref{cor n+1} and \\ref{cor n} we can see that the cases $\\k0=n+1$ and $\\k0=n$ are very similar, in the sense that\nthe Hilbert polynomial $\\Hi(z)$, as well as the combinations of Chern numbers $\\mathsf{c}_1^h\\,T_{n-h}[\\mathsf{M}]$, for $h=0,\\ldots,n$, and the generating\nfunction $\\Gen(t)$ of $\\{\\Hi(k)\\}_{k\\in \\mathbb{N}}$, are completely determined (see Remark \\ref{num of cds}).\n\\begin{rmk}\\label{liham}\nIn a recent work, Li \\cite{Li} proves that if the $2n$-dimensional manifold $\\mathsf{M}$ is symplectic, the $S^1$-action Hamiltonian and $\\chi(\\mathsf{M})=n+1$, then having $\\k0=n+1$ (resp.\\ $\\k0=n$)\nis equivalent to having the same total Chern class as ${\\mathbb{C}} P^n$ (resp.\\ as the Grassmannian of oriented planes in $\\mathbb{R}^{n+2}$ with $n$ odd) which, in turn, is equivalent to having the same integral \ncohomology ring as ${\\mathbb{C}} P^n$ (resp.\\ the Grassmannian). Thus in particular, under the above hypotheses, all the Chern numbers are `standard', i.e.\\ they agree with those of ${\\mathbb{C}} P^n$ (resp.\\ of the hyperquadric).\nThe assumption $\\chi(\\mathsf{M})=n+1$ is essential, since it implies the existence of a quasi-ample line bundle (in the sense specified in Remark \\ref{hattori rmk}), which in this case is given by the pre-quantization line bundle (see also \\cite[Proposition 7.5 (i)]{GoSa}). 
\n\\end{rmk}\n\n\nIn the following we analyse in detail the cases $\\k0=n-1$ and $\\k0=n-2$.\nObserve that if $n=1$ the index $\\k0$ cannot be zero, since the only compact almost complex surface\nthat can be endowed with a compatible $S^1$-action with isolated fixed points is the sphere, for which $\\k0= 2$.\nSo in the next proposition it is not restrictive to assume $n\\geq 2$ for $\\k0=n-1$.\n\n\\begin{prop}[$\\k0=\\mathbf{n-1}$]\\label{k0=n-1}\nLet $(\\M,\\J,S^1)$ be an $S^1$-space of dimension $2n\\geq 4$ with index $\\k0=n-1$.\n\\begin{itemize}\n\\item[(a)]\\label{n-1 a} If $N_0\\neq 0$ and $\\mathsf{c}_1^n[\\mathsf{M}]\\neq 0$ then \n\\begin{equation}\\label{H n-1 1}\n\\Hi(z)=\\frac{4\\,N_0}{(n-2)!\\big[(n-1)^2-4a\\big]}\\Big(z^2+(n-1)z+\\frac{(n-1)^2}{4}-a\\Big)\\prod_{j=1}^{n-2}(z+j)\\,,\n\\end{equation}\nwhere $a\\in \\mathbb{R}$ is not equal to $\\frac{(n-1)^2}{4}$. Moreover \n\\begin{equation}\\label{c1n n-1}\n\\mathsf{c}_1^n[\\mathsf{M}]=\\frac{4\\,N_0\\,n(n-1)^{n+1}}{(n-1)^2-4a}\\,,\n\\end{equation}\nand\n\\begin{equation}\\label{c1c222}\n\\mathsf{c}_1^{n-2}\\mathsf{c}_2[\\mathsf{M}]=\\frac{4N_0(n-1)^{n-2}}{\\big[(n-1)^2-4a\\big]}\\Big[3-12a-6n+\\frac{9}{2}n^2-2n^3+\\frac{n^4}{2}\\Big]\\,.\n\\end{equation}\n\\item[(b)]\\label{n-1 b} If $N_0\\neq 0$ and $\\mathsf{c}_1^n[\\mathsf{M}]= 0$ then\n\\begin{equation}\\label{H n-1 2}\n\\Hi(z)=\\frac{N_0}{(n-2)!}\\prod_{j=1}^{n-2}(z+j)\\,,\n\\end{equation}\nand \n\\begin{equation}\\label{c1c2 n-1}\n\\mathsf{c}_1^{n-2}\\mathsf{c}_2[\\mathsf{M}]=12\\, N_0 (n-1)^{n-2}\\,.\n\\end{equation}\n\\end{itemize}\n\nMoreover, in \\emph{(a)} and \\emph{(b)}, the generating function of $\\{\\Hi(k)\\}_{k\\in \\mathbb{N}}$ is given by \n\\begin{equation}\\label{gen fct n-1 1}\n\\Gen(t)=N_0\\frac{1+b\\,t+t^2}{(1-t)^{n+1}} \n\\end{equation}\nwhere $b\\in \\mathbb{Q}$ is such that $b\\,N_0\\in {\\mathbb{Z}}$ and\n\\begin{equation}\\label{c1n n-1 b}\n\\mathsf{c}_1^n[\\mathsf{M}]=N_0(b+2)(n-1)^n\\,,\n\\end{equation}\n\\begin{equation}\\label{c1c2 n-1 b}\n\\mathsf{c}_1^{n-2}\\mathsf{c}_2[\\mathsf{M}]=N_0(n-1)^{n-2}\\big[12+\\frac{(b+2)n(n-3)}{2}\\big]\\,,\n\\end{equation}\n(Thus case \\emph{(b)} corresponds to taking $b=-2$.)\n\n\\begin{itemize} \n\\item[(c)]\\label{n-1 c}\nIf $N_0=0$ then \n\\begin{equation}\\label{H n-1 3}\n\\Hi(z)=\\gamma \\prod_{j=0}^{n-1}(z+j)\\,,\n\\end{equation}\nwhere $\\gamma=\\frac{1}{(n-1)^n n!} \\mathsf{c}_1^n[\\mathsf{M}]$.\n\\end{itemize}\nMoreover, the generating function of $\\{\\Hi(k)\\}_{k\\in \\mathbb{N}}$ is given by \n\\begin{equation}\\label{gen fct n-1 3}\n\\Gen(t)=\\gamma\\,n!\\frac{t}{(1-t)^{n+1}}\\,. \n\\end{equation}\n\\end{prop}\n\\begin{rmk}\\label{integrality a}\nObserve that the value of $a$ in \\eqref{c1n n-1} cannot be arbitrary, since the following fraction\n$$\n\\frac{4N_0\\,n(n-1)}{(n-1)^2-4a}\n$$\nmust be an integer. This follows from the fact that, modulo torsion, $\\mathsf{c}_1=(n-1)\\eta_0$ for some $\\eta_0\\in H^2(\\mathsf{M};{\\mathbb{Z}})$,\nand hence $\\frac{\\mathsf{c}_1^n[\\mathsf{M}]}{(n-1)^n}$ must be an integer.\n\\end{rmk}\n\n\nThe following corollary is a straightforward consequence of Proposition \\ref{k0=n-1}.\n \\begin{corollary}\\label{pos roots n-1}\n Under the same hypotheses as in Proposition \\ref{k0=n-1}, we have that:\\\\\n - If $N_0\\neq 0$ then \n \\begin{itemize}\n \\item[(1)] The roots of $\\Hi(z)$ belong to the canonical strip $\\mathcal{S}_{n-1}$ if and only if $\\mathsf{c}_1^n[\\mathsf{M}]\\geq 0$, or equivalently if and only if $b\\geq -2$.\n \\item[(2)] $\\Hi(z)$ belongs to $\\mathcal{T}_{n-1}$ if and only if $\\;\\;\\;0\\leq \\mathsf{c}_1^n[\\mathsf{M}]\\leq 4N_0 n(n-1)^{n-1}$, or equivalently if and only if $\\;\\;\\;-2\\leq b \\leq 2\\displaystyle\\frac{n+1}{n-1}$.\n \\end{itemize}\n - If $N_0=0$ then the roots of $\\Hi(z)$ do not belong to $\\mathcal{S}_{n-1}$.\n \\end{corollary}\n As a result of the analysis carried out 
when $\\k0=n-1$, we can strengthen Theorem \\ref{RV theorem}.\n \\begin{corollary}\\label{RV1}\n Under the same hypotheses as in Proposition \\ref{k0=n-1}, assume that $N_0=1$ and $n>5$. Then $\\Hi(z)$ belongs to $\\mathcal{T}_{n-1}$ if and only if $\\U(t)$ has its roots on the unit circle. \n \\end{corollary}\n \\begin{proof}\n If $N_0=1$ then by Proposition \\ref{k0=n-1} we know that $b$ is an integer. \nIf $n>5$, from Corollary \\ref{pos roots n-1} we can see that $\\Hi(z)$ belongs to\n$\\mathcal{T}_{n-1}$ if and only if $-2\\leq b\\leq 2$. Since $b$ is an integer, for all such values of $b$ the polynomial $\\U(t)=1+bt+t^2$ has its roots on the unit circle. \n \\end{proof}\n\\begin{rmk}\\label{RV n-1}\nFor $2\\leq n\\leq 5$, we have that $2\\displaystyle\\frac{n+1}{n-1}\\geq 3$; however\nfor $b\\geq 3$, the roots of $\\U(t)$ are not on the unit circle. So for $2\\leq n\\leq 5$,\nthere may exist manifolds whose associated Hilbert polynomial belongs to $\\mathcal{T}_{n-1}$, but the corresponding $\\U(t)=1+bt+t^2$ does not have its roots on the unit circle: consider for example the Fano threefold $V_5$\nin Example \\ref{examples 6} (3), for which $b=3$ and the corresponding Hilbert polynomial is given by $\\Hi_{V_5}(z)=\\frac{1}{6}\\big[5z^2+10z+6\\big](z+1)$. \n\\end{rmk}\n\n\\begin{proof}[Proof of Proposition \\ref{k0=n-1}]\n(a) If $N_0\\neq 0$ then, by Proposition \\ref{properties P} \\eqref{1a} we have that $\\Hi(z)\\not\\equiv 0$. Moreover, by \\eqref{Hilbert pol}, if $\\mathsf{c}_1^n[\\mathsf{M}]\\neq 0$ then $\\deg(\\Hi)=n$.\nBy Theorem \\ref{main theorem} \\eqref{H=0 even}, if $n\\geq 3$ then $\\Hi(z)$ has roots $-1,-2,\\ldots, -n+2$. By Corollary \n\\ref{property roots}, the remaining two roots belong to $\\mathcal{C}_{n-1}$ and, by Proposition \\ref{properties P} \\eqref{3a},\nthey are of the form $-\\frac{n-1}{2}-x$, $-\\frac{n-1}{2}+x$. 
Moreover $a:=x^2\\neq \\frac{(n-1)^2}{4}$,\nsince by Proposition \\ref{properties P} \\eqref{1a} and \\eqref{3a}, $\\Hi(0)=N_0$, $\\Hi(-n+1)=(-1)^nN_0$ and by assumption $N_0\\neq 0$. Thus $\\Hi(z)=\\alpha \\Big(z^2+(n-1)z+\\frac{(n-1)^2}{4}-a\\Big)\\prod_{j=1}^{n-2}(z+j)$,\nwhere $\\alpha\\in \\mathbb{R}$ can be found by imposing $\\Hi(0)=N_0$, obtaining \\eqref{H n-1 1}.\nEquations \\eqref{c1n n-1} and \\eqref{c1c222} come from combining \\eqref{Hilbert pol} with \\eqref{H n-1 1}.\n\n\n(b) If $N_0\\neq 0$ and $\\mathsf{c}_1^n[\\mathsf{M}]=0$ then, by Proposition \\ref{properties P} \\eqref{1a} we have that $\\Hi(z)\\not\\equiv 0$\nand, by \\eqref{Hilbert pol}, $\\deg(\\Hi)\\leq n-2$.\nBy Theorem \\ref{main theorem}, if $n\\geq 3$ then $\\Hi(z)$ has $n-2$ roots given by $-1,-2,\\ldots,-n+2$;\nmoreover, if $n=2$ it must be a non-zero constant polynomial. Thus $\\Hi(z)$ has degree $n-2$ and it is of the form\n$\\Hi(z)=\\beta \\prod_{j=1}^{n-2}(z+j)=\\beta\\sum_{h=0}^{n-2}z^h\\sigma_{n-h-2}(1,2,\\ldots,n-2)$. By Proposition \\ref{properties P} \\eqref{1a} we have $\\beta=\\frac{N_0}{(n-2)!}$,\nand \\eqref{H n-1 2} follows.\nEquation \\eqref{c1c2 n-1} can be obtained from \\eqref{H n-1 2} and \\eqref{ah}\nby taking $h=n-2$.\n\nIn order to prove \\eqref{gen fct n-1 1} for $\\mathsf{c}_1^n[\\mathsf{M}]\\neq 0$, observe that since $\\deg(\\Hi)=n$, $N_0\\neq 0$ and $\\k0=n-1$, from Proposition \\ref{gen fct hilbert} and Corollary \\ref{U palindrom} it follows\nthat $\\U(t)=N_0(1+b\\,t+t^2)$ for some $b\\in \\mathbb{R}$. Thus we have that \n$$\n\\Gen(t)=N_0 \\frac{1+b\\,t+t^2}{(1-t)^{n+1}} = N_0\\sum_{k\\geq 0}\\left[ \\binom{n+k-2}{n}+b\\binom{n+k-1}{n}+\\binom{n+k}{n}\\right]t^k\\,, \n$$\nand by definition of $\\Gen(t)$ we have that $N_0(b+n+1)=\\Hi(1)$. 
Since $\\Hi(1)$ is an integer, it follows that $b\\,N_0$ must be an integer.\nMoreover, by \\eqref{H n-1 1} we have that $\\displaystyle\\frac{\\Hi(1)}{N_0}=\\frac{4(n-1)\\big[n+\\frac{(n-1)^2}{4}-a\\big]}{\\big[(n-1)^2-4a\\big]}=b+n+1$, thus obtaining $b$ in terms of $a$, and the expressions of $\\mathsf{c}_1^n[\\mathsf{M}]$ and $\\mathsf{c}_1^{n-2}\\mathsf{c}_2[\\mathsf{M}]$ in terms of $b$ follow from \\eqref{c1n n-1} and \\eqref{c1c222}.\n\nThe proof of \\eqref{gen fct n-1 1} when $\\mathsf{c}_1^n[\\mathsf{M}]=0$ also follows from Proposition \\ref{gen fct hilbert}, and the details are left to the reader.\n\n(c) If $N_0=0$ then, by Proposition \\ref{properties P} \\eqref{1a} and \\eqref{3a}, and Theorem \\ref{main theorem} \\eqref{H=0 even}, $\\Hi(z)$ has $n$ roots given by $0,-1,-2,\\ldots,-n+1$. If $\\mathsf{c}_1^n[\\mathsf{M}]=0$ then\nby \\eqref{Hilbert pol} and \\eqref{ah} we have that $\\deg(\\Hi)\\leq n-2$, hence $\\Hi(z)\\equiv 0$ and \\eqref{H n-1 3} follows.\nOtherwise $\\Hi(z)=\\gamma \\prod_{j=0}^{n-1}(z+j)$ where the expression for $\\gamma$ can be obtained by using\n\\eqref{Hilbert pol}, imposing that $a_n=\\gamma$.\n\nThe proof of \\eqref{gen fct n-1 3} follows easily from Proposition \\ref{gen fct hilbert}, and the details are left to the reader.\n\n\\end{proof}\n\nProposition \\ref{k0=n-1} implies that the Chern numbers $\\mathsf{c}_1^n[\\mathsf{M}]$ and $\\mathsf{c}_1^{n-2}\\mathsf{c}_2[\\mathsf{M}]$ are related\nby the following formula.\n\\begin{corollary}\\label{relation c122}\nUnder the same hypotheses of Proposition \\ref{k0=n-1} we have that \n$$\n\\mathsf{c}_1^{n-2}\\mathsf{c}_2[\\mathsf{M}]-\\frac{n(n-3)}{2(n-1)^2}\\mathsf{c}_1^n[\\mathsf{M}] = 12 N_0(n-1)^{n-2}\n$$\n\\end{corollary}\n\\begin{proof}\nWhen $N_0\\neq 0$ the claim follows from \\eqref{c1n n-1 b} and \\eqref{c1c2 n-1 b}.\n\nIf $N_0=0$ and $\\mathsf{c}_1^n[\\mathsf{M}]=0$ then from \\eqref{H n-1 3} we have $\\Hi(z)\\equiv 0$, which, by \\eqref{ah} implies that 
\n$$a_{n-2}=\\frac{1}{12(n-1)^{n-2}(n-2)!}\\big(\\mathsf{c}_1^n + \\mathsf{c}_1^{n-2}\\mathsf{c}_2\\big)[\\mathsf{M}]=0\\,,$$\nthus implying $\\mathsf{c}_1^{n-2}\\mathsf{c}_2[\\mathsf{M}]=0$, and the claim follows. \n\nOtherwise, if $N_0=0$ and $\\mathsf{c}_1^n[\\mathsf{M}]\\neq 0$, from \\eqref{H n-1 3} and \\eqref{sigma 2} we have that \n$a_{n-2}$ is \n\\begin{equation}\\label{an-2}\na_{n-2}=\\gamma \\left[ \\begin{array}{c} n \\\\ n-2 \\end{array} \\right]= \\gamma \\frac{(3n-1)n(n-1)(n-2)}{24}\\,,\n\\end{equation}\nwhere $\\gamma=\\frac{1}{(n-1)^n n!}\\mathsf{c}_1^n[\\mathsf{M}]$,\nand the claim follows from comparing the general expression of $a_{n-2}$ with \\eqref{an-2}.\n\\end{proof}\n\nAs will be proved in Prop.\\ \\ref{dim 4}, if $(\\M,\\J,S^1)$ is an $S^1$-space of dimension $4$, the index $\\k0$ cannot be zero.\nHence it is not restrictive to assume $n\\geq 3$ for $\\k0=n-2$.\n\\begin{prop}[$\\k0=\\mathbf{n-2}$]\\label{k0=n-2}\nLet $(\\M,\\J,S^1)$ be an $S^1$-space of dimension $2n\\geq 6$ with index $\\k0=n-2$.\n\\begin{itemize}\n\\item[(a)]\\label{n-2 a} If $N_0\\neq 0$ and $\\mathsf{c}_1^n[\\mathsf{M}]\\neq 0$ then \n\\begin{equation}\\label{H n-2 1}\n\\Hi(z)=\\frac{4\\,N_0}{(n-2)!\\big[(n-2)^2-4a\\big]}\\Big(2z+n-2\\Big)\\Big(z^2+(n-2)z+\\frac{(n-2)^2}{4}-a\\Big)\\prod_{j=1}^{n-3}(z+j)\\,,\n\\end{equation}\nwhere $a\\in \\mathbb{R}$ is not equal to $\\frac{(n-2)^2}{4}$. Moreover\n\\begin{equation}\\label{c1n n-2}\n\\mathsf{c}_1^n[\\mathsf{M}]=\\frac{8\\,N_0\\,n(n-1)(n-2)^n}{(n-2)^2-4a}\\,,\n\\end{equation}\nand\n\\begin{equation}\\label{c1c2 n-2 2 rmk}\n\\mathsf{c}_1^{n-2}\\mathsf{c}_2[\\mathsf{M}]=\\frac{4N_0(n-2)^{n-2}(24-24a-30n+17n^2-6n^3+n^4)}{(n-2)^2-4a}\\,. 
\n\\end{equation}\n\\\\\n\n\\item[(b)]\\label{n-2 b} If $N_0\\neq 0$ and $ \\mathsf{c}_1^n[\\mathsf{M}]= 0$ then\n\\begin{equation}\\label{H n-2 2}\n\\Hi(z)=\\frac{N_0}{(n-2)!}\\Big(2z+n-2\\Big)\\prod_{j=1}^{n-3}(z+j)\\,,\n\\end{equation}\nand\n\\begin{equation}\\label{c1c2 n-2}\n\\mathsf{c}_1^{n-2}\\mathsf{c}_2[\\mathsf{M}]=24\\, N_0 (n-2)^{n-2}\\,.\n\\end{equation}\n\\end{itemize}\nMoreover, in \\emph{(a)} and \\emph{(b)}, the generating function of $\\{\\Hi(k)\\}_{k\\in \\mathbb{N}}$ is given by \n\\begin{equation}\\label{gen fct n-2}\n\\Gen(t)=N_0\\frac{1+b\\,t+b\\,t^2+t^3}{(1-t)^{n+1}} \n\\end{equation}\nwhere $b$ is such that $b\\,N_0$ is an integer and \n\\begin{equation}\\label{c1n b}\n\\mathsf{c}_1^n[\\mathsf{M}]=2N_0(b+1)(n-2)^n\\,,\n\\end{equation}\n\\begin{equation}\\label{c1c2 b}\n\\mathsf{c}_1^{n-2}\\mathsf{c}_2[\\mathsf{M}]=N_0(n-2)^{n-2}\\big[24+(b+1)(n-2)(n-3)\\big]\\,,\n\\end{equation}\nand case \\emph{(b)} corresponds to taking $b=-1$.\n\\begin{itemize}\n\\item[(c)]\\label{n-2 c}\nIf $N_0=0$ then \n\\begin{equation}\\label{H n-2 3}\n\\Hi(z)=\\gamma \\Big(z+\\frac{n-2}{2}\\Big)\\prod_{j=0}^{n-2}(z+j)\\,,\n\\end{equation}\nwhere $\\gamma=\\frac{1}{(n-2)^n n!}\\mathsf{c}_1^n[\\mathsf{M}]$.\n\\end{itemize}\nMoreover, the generating function of $\\{\\Hi(k)\\}_{k\\in \\mathbb{N}}$ is given by \n\\begin{equation}\\label{gen fct n-2 3}\n\\Gen(t)=\\frac{\\gamma}{2}n!\\frac{t+t^2}{(1-t)^{n+1}}\\,. 
\n\\end{equation}\n\n\\end{prop}\n\n\\begin{rmk}\\label{integrality a2}\nThe same comment as in Remark \\ref{integrality a} applies here: the value of $a$ cannot be arbitrary, since the following fraction\n$$\n\\frac{8N_0\\,n(n-1)}{(n-2)^2-4a}\n$$\nmust be an integer.\n\\end{rmk}\n\n\nThe following corollary is very similar to Corollary \\ref{pos roots n-1}, and is a straightforward consequence of Proposition \\ref{k0=n-2}.\n \\begin{corollary}\\label{pos roots n-2}\n Under the same hypotheses as in Proposition \\ref{k0=n-2}, we have that:\\\\\n - If $N_0\\neq 0$ then \n \\begin{itemize}\n \\item[(1)] The roots of $\\Hi(z)$ belong to the canonical strip $\\mathcal{S}_{n-2}$ if and only if $\\mathsf{c}_1^n[\\mathsf{M}]\\geq 0$, or equivalently if and only if $b\\geq -1$.\n \\item[(2)] $\\Hi(z)$ belongs to $\\mathcal{T}_{n-2}$ if and only if $\\;\\;\\;0\\leq \\mathsf{c}_1^n[\\mathsf{M}]\\leq 8N_0 n(n-1)(n-2)^{n-2}$, or equivalently if and only if $\\;\\;\\;-1\\leq b \\leq \\displaystyle\\frac{3n^2-4}{(n-2)^2}$.\n \\end{itemize}\n - If $N_0=0$ then the roots of $\\Hi(z)$ do not belong to $\\mathcal{S}_{n-2}$.\n \\end{corollary}\nIn analogy with Corollary \\ref{RV1}, we have the following:\n\\begin{corollary}\\label{RV2}\n Under the same hypotheses as in Proposition \\ref{k0=n-2}, assume that $N_0=1$ and $n>14$. Then $\\Hi(z)$ belongs to $\\mathcal{T}_{n-2}$ if and only if $\\U(t)$ has its roots on the unit circle. \n \\end{corollary}\n\\begin{proof}\nIf $N_0=1$ then by Proposition \\ref{k0=n-2}, we know that $b$ is an integer. If $n>14$, from Corollary \\ref{pos roots n-2} we can see that $\\Hi(z)$ belongs to\n$\\mathcal{T}_{n-2}$ if and only if $-1\\leq b\\leq 3$. 
Since $b$ is an integer, for all such values of $b$ the polynomial $\\U(t)=1+bt+bt^2+t^3$ has its roots on the unit circle.\n\\end{proof}\n\\begin{rmk}\\label{RV n-2}\nFor $3\\leq n\\leq 14$, we have that $\\displaystyle\\frac{3n^2-4}{(n-2)^2}\\geq 4$; however\nfor $b\\geq 4$, the roots of $\\U(t)=1+bt+bt^2+t^3$ are not on the unit circle. In conclusion, we can say that for $3\\leq n\\leq 14$,\nthere may exist manifolds whose associated Hilbert polynomial belongs to $\\mathcal{T}_{n-2}$, but the corresponding $\\U(t)$ does not have its roots on the unit circle: consider for example the Fano threefold $V_{22}$\nin Example \\ref{examples 6-1} (2), for which $b=10$ and the corresponding Hilbert polynomial is given by $\\Hi_{V_{22}}(z)=\\frac{1}{6}\\big[11z^2+11z+6\\big](2z+1)$.\n\\end{rmk}\n\n\\begin{proof}[Proof of Proposition \\ref{k0=n-2}]\n\nThe proof of this proposition is very similar to that of Proposition \\ref{k0=n-1}, and here we only sketch the first part.\n(a) If $N_0\\neq 0$ then, by Proposition \\ref{properties P} \\eqref{1a} we have that $\\Hi(z)\\not\\equiv 0$. Moreover, by \\eqref{Hilbert pol}, if $ \\mathsf{c}_1^n[\\mathsf{M}]\\neq 0$ then $\\deg(\\Hi)=n$.\nBy Theorem \\ref{main theorem} \\eqref{H=0 even}, for $n\\geq 4$, $\\Hi(z)$ has roots $-1,-2,\\ldots, -n+3$. By Corollary \\ref{extra root -k02}, one of the remaining three roots is $-\\frac{n-2}{2}$. By Corollary \\ref{property roots}, the remaining two roots are on $\\mathcal{C}_{n-2}$, and\nby Proposition \\ref{properties P} \\eqref{3a} they are of the form $-\\frac{n-2}{2}-x$, $-\\frac{n-2}{2}+x$, for some $x\\in \\mathbb{R}$.\nMoreover $a:=x^2\\neq \\frac{(n-2)^2}{4}$,\nsince by Proposition \\ref{properties P} \\eqref{1a} and \\eqref{3a}, $\\Hi(0)=N_0$, $\\Hi(-n+2)=(-1)^nN_0$ and by assumption $N_0\\neq 0$. 
\nIt follows that the Hilbert polynomial is of the form\n$$\n\\Hi(z)=\\alpha \\Big(2z+n-2\\Big)\\Big(z^2+(n-2)z+\\frac{(n-2)^2}{4}-a\\Big)\\prod_{j=1}^{n-3}(z+j)\\,,\n$$\nwhere $\\alpha$ can be found by imposing $\\Hi(0)=N_0$, thus obtaining \\eqref{H n-2 1}. \nThe rest of the proof is left to the reader. \n\\end{proof}\n\nSimilarly to the case $\\k0=n-1$, Proposition \\ref{k0=n-2} implies that the Chern numbers $\\mathsf{c}_1^n[\\mathsf{M}]$ and\n$\\mathsf{c}_1^{n-2}\\mathsf{c}_2[\\mathsf{M}]$ are related by the following formula.\n\\begin{corollary}\\label{relation c122 2}\nUnder the same hypotheses as in Proposition \\ref{k0=n-2} we have that \n$$\n\\mathsf{c}_1^{n-2}\\mathsf{c}_2[\\mathsf{M}]-\\frac{n-3}{2(n-2)}\\mathsf{c}_1^n[\\mathsf{M}]= 24 N_0 (n-2)^{n-2}\\,.\n$$\n\\end{corollary}\n\\begin{proof}\nThe proof of this Corollary is very similar to that of Corollary \\ref{relation c122}, and the details are left to the reader.\n\\end{proof}\n\nAs a consequence of the analysis of $\\Hi(z)$ when the index $\\k0$ is $n-2$ or $n$ we have the following:\n\\begin{corollary}\\label{cor even chern classes}\nLet $(\\M,\\J,S^1)$ be an $S^1$-space with $N_0\\neq 0$. Assume the index satisfies either $\\k0=n$, or $\\k0=n-2$ and $n\\geq 3$.\nThen the Chern numbers $\\mathsf{c}_1^n[\\mathsf{M}]$ and $\\mathsf{c}_1^{n-2}\\mathsf{c}_2[\\mathsf{M}]$ are always \\emph{even}.\n\\end{corollary}\n\\begin{proof}\nWhen $\\k0=n$ the claim follows from Proposition \\ref{cor n} \\eqref{c1 n} and \\eqref{c1c22},\nand when $\\k0=n-2$ it follows from Proposition \\ref{k0=n-2} \\eqref{c1n b} and \\eqref{c1c2 b}.\n\\end{proof}\n\n\nThe case in which $\\k0={\\bf n-3}$, where $n\\geq 4$, is not analysed in detail here. 
However, we would like to make some remarks about it when $N_0\\neq 0$\nand $\\deg(\\Hi)=n$, i.e.\\;$ \\mathsf{c}_1^n[\\mathsf{M}]\\neq 0$.\nFirst of all, observe that this is the first case in which the roots of $\\Hi(z)$ may not belong to $\\mathcal{C}_{\\k0}$ (see Corollary \\ref{position roots}). From Theorem \\ref{main theorem} \\eqref{H=0 even}, the roots of $\\Hi(z)$ are $-1,-2,\\ldots,-n+4$ (if $n>4$), plus four additional roots $z_1,z_2,z_3,z_4$. If these four roots do not belong to $\\mathcal{C}_{\\k0}$, \nfrom the properties of $\\Hi(z)$ they must be of the form $-\\frac{n-3}{2}\\pm a\\pm {\\bf i}\\, b$, for some $a,b\\in \\mathbb{R}\\setminus \\{0\\}$, thus obtaining that\n\\begin{equation}\\label{k0=n-3}\n\\Hi(z)=\\alpha \\prod \\Big(z+\\frac{n-3}{2}\\pm a\\pm {\\bf i}\\,b\\Big)\\prod_{j=1}^{n-4}(z+j)\\,.\n\\end{equation}\nFrom the expression of $a_n$ in \\eqref{ah} and Proposition \\ref{properties P} \\eqref{1a} it follows that \n\\begin{equation}\\label{cassini}\n\\Big[\\Big(\\frac{n-3}{2}-a\\Big)^2+b^2\\Big]\\Big[\\Big(\\frac{n-3}{2}+a\\Big)^2+b^2\\Big]=\\frac{N_0\\,n!\\,(n-3)^n}{\\,(n-4)!\\,\\mathsf{c}_1^n[\\mathsf{M}]}\\,,\n\\end{equation}\nwhich implies that $\\mathsf{c}_1^n[\\mathsf{M}]>0$. Moreover, for a fixed value of $\\mathsf{c}_1^n[\\mathsf{M}]$, the four roots $z_1,\\ldots,z_4$ \nbelong to the \\emph{Cassini oval}\\footnote{We recall that a \\emph{Cassini oval} is a quartic plane curve given by the locus of points in $\\mathbb{R}^2\\simeq {\\mathbb{C}}$ satisfying the equation $$\\mathrm{d}(p,q_1)\\,\\mathrm{d}(p,q_2)=d^2\\,,$$ where $d\\neq 0$. 
The points $q_1$ and $q_2$ are called the \\emph{foci} of the Cassini oval.} of equation\n\\begin{equation}\\label{cassini eq}\n\\mathrm{d}(p,0)\\,\\mathrm{d}(p,-n+3)=\\sqrt{\\frac{N_0\\,n!\\,(n-3)^n}{\\,(n-4)!\\,\\mathsf{c}_1^n[\\mathsf{M}]}}\n\\end{equation}\nwhere $\\mathrm{d}(p,q)$ denotes the Euclidean distance from $p$ to $q$, with $p,q\\in\\mathbb{R}^2\\simeq {\\mathbb{C}}$, and the foci of this oval are the points $0$ and $-n+3$ (see Figure \\ref{Fig:cassini}).\n\n\n\\begin{figure}[h!]\n\\begin{center}\n\\epsfxsize=\\textwidth\n\\leavevmode\n\\includegraphics[width=3.5in]{Cassini-final}\n\\end{center}\n\\caption{Examples of \\emph{Cassini ovals} of equation $\\mathrm{d}(p,q_1)\\,\\mathrm{d}(p,q_2)=d^2$ with foci $q_1=(0,0)$ and $q_2=(-4,0)$ for different values of $d$.\nThe curve passing through the origin is called the \\emph{lemniscate of Bernoulli}, and is obtained for $d=4$.}\n\\label{Fig:cassini}\n\\end{figure}\n\n\n\\medskip\n\\subsection{Conclusions on Hamiltonian and non-Hamiltonian actions.}\nAs an application of the results obtained before, we conclude the section with the proof of Theorem \\ref{nHam-char}. Observe that for $n=1$ and $n=2$ there do not exist symplectic non-Hamiltonian circle actions with nonempty discrete fixed point sets: for $n=1$ the only compact surface admitting such a symplectic circle action is a sphere, hence the action is Hamiltonian; for $n=2$ the assertion was proved by McDuff in \\cite[Proposition 2]{MD1}.\n\\begin{proof}[Proof of Theorem \\ref{nHam-char}]\nWe recall that in the symplectic case $N_0$ can be either $0$ or $1$, and it is $0$ exactly if the action is non-Hamiltonian (see\nLemma \\ref{N0 1}). 
\nThen the claims in (I) follow from Corollary \\ref{bound on k0 s} ({\\bf i'}), those\nin (II) and (III) from Propositions \\ref{cor n+1} and \\ref{cor n}, and those in (IV) and (V) from Corollaries \\ref{relation c122} and \\ref{relation c122 2}.\n\\end{proof}\n\\begin{rmk}\\label{other comb}\nObserve that \nby Propositions \\ref{cor n+1} and \\ref{cor n}, when $\\k0=n+1$ or $\\k0=n$ the action is Hamiltonian if and only if \\emph{all} the combinations of \nChern numbers $\\mathsf{c}_1^h\\,T_{n-h}[\\mathsf{M}]$ do not vanish, for $h=0,\\ldots,n$.\n\\end{rmk}\n\\section{Examples: low dimensions of $(\\mathsf{M},\\mathsf{J})$}\\label{examples}\n\nIn this section, we study some consequences of the results previously obtained for\n$n\\leq 4$.\nIn particular we prove that when $\\k0=n$ or $n+1$ then \\emph{all the Chern numbers of $(\\mathsf{M},\\mathsf{J},S^1)$ can be expressed as a linear combination of the $N_j$'s}, where $N_j$ denotes the number of fixed points with exactly $j$ negative weights. In the Hamiltonian category, this amounts to saying that \\emph{all the Chern numbers of $(\\mathsf{M},\\omega, S^1)$ can be expressed as linear combinations of the Betti numbers of }$\\mathsf{M}$ (see \\eqref{bi=Ni}).\n\nThe most obvious Chern number that can always be written in terms of the $N_j$'s \nis $\\mathsf{c}_n[\\mathsf{M}]$. In fact, by definition of the $N_j$'s and $\\mathsf{c}_n[\\mathsf{M}]=|\\mathsf{M}^{S^1}|$, we have \n\\begin{equation}\\label{eq cn}\n\\mathsf{c}_n[\\mathsf{M}]=\\sum_{j=0}^n N_j\\,.\n\\end{equation}\nIn \\cite{GoSa}, Godinho and the author proved that the Chern number $\\mathsf{c}_1\\mathsf{c}_{n-1}[\\mathsf{M}]$ can also be expressed in terms of the $N_j$'s. \nWe recall its explicit expression in the following\n\\begin{theorem}[\\cite{GoSa} Theorem 1.2]\\label{nostro}\n Let $(\\mathsf{M},\\mathsf{J}, S^1)$ and $N_j$ be as above. 
Then \n \\begin{equation}\\label{c1cn-1}\n \\mathsf{c}_1\\mathsf{c}_{n-1}[\\mathsf{M}]=\\sum_{j=0}^n N_j \\Big[6j(j-1)+\\frac{5n-3n^2}{2}\\Big]\\,.\n \\end{equation}\n\n\\end{theorem}\n\nSuppose that $(\\M,\\J,S^1)$ is an $S^1$-space\nof (real) dimension $2$. As also observed before, since we are requiring isolated fixed points,\nsuch a space must be a $2$-sphere, and one obtains \n $\\k0=2$, $\\Hi(z)=1+z$ and $\\mathsf{c}_1[S^2]=2$. \n\n\\subsection{$\\mathbf{\\dim(\\mathsf{M})=4}$} First of all, observe that by \\eqref{NiN} and \\eqref{eq cn} we have\n\\begin{equation}\\label{c2 dim 4}\n \\mathsf{c}_2[\\mathsf{M}]=2N_0+N_1\\,.\n\\end{equation}\nMoreover, \nby \\eqref{NiN} and Theorem \\ref{nostro} \\eqref{c1cn-1}, for $n=2$ it follows that\n\\begin{equation}\\label{c1 n=2}\n\\mathsf{c}_1^2[\\mathsf{M}]=10 N_0 - N_1\\,.\n\\end{equation}\nThus in dimension $4$ all the Chern numbers can be expressed as a linear combination of the $N_j$'s (independently of $\\k0$).\n\\begin{rmk}\\label{pos 1}\nObserve that the necessary condition $\\mathsf{c}_1^2+\\mathsf{c}_2[\\mathsf{M}]\\equiv 0 \\mod{12}$, which must hold for\nany compact almost complex manifold, for $S^1$-spaces becomes \n$\\mathsf{c}_1^2+\\mathsf{c}_2[\\mathsf{M}]=12 N_0$ (it is equivalent to saying that the Todd genus is $N_0$). 
Hence, \nfor $(\\M,\\J,S^1)$, the combination of Chern numbers $\\mathsf{c}_1^2+\\mathsf{c}_2[\\mathsf{M}]$ must be a \\emph{non-negative} multiple of $12$.\n\\end{rmk}\nThe following corollary is an easy consequence of the results obtained before, applied to the symplectic category:\n\\begin{corollary}\\label{geo s}\nLet $(\\mathsf{M},\\omega)$ be a compact, connected symplectic manifold of dimension $4$ that can be endowed with a symplectic circle action with isolated fixed points.\nThen \n\\begin{equation}\\label{c1 n2 h}\n(\\mathsf{c}_1^2[\\mathsf{M}], \\mathsf{c}_2[\\mathsf{M}])=(10-b_2(\\mathsf{M}), 2+ b_2(\\mathsf{M}))\\,.\n\\end{equation}\nMoreover, any pair of integers $(p,q)$ satisfying $p+q=12$ and $p\\leq 9$ can be realized as the pair of Chern numbers $(\\mathsf{c}_1^2[\\mathsf{M}], \\mathsf{c}_2[\\mathsf{M}])$\nof a compact, connected symplectic manifold $\\mathsf{M}$ of dimension $4$ supporting a symplectic circle action with isolated fixed points.\n\\end{corollary}\n\\begin{proof}\nGiven $(\\mathsf{M},\\omega,S^1)$ of dimension $4$, a theorem of McDuff \\cite{MD1} implies that the action is Hamiltonian, and as a consequence of \\eqref{c1 n=2}, Lemma \\ref{N0 1}\nand \\eqref{bi=Ni} we obtain \\eqref{c1 n2 h}.\nThe second assertion follows from observing that $b_2(\\mathsf{M})$ attains every integer value at least one: it equals one for $({\\mathbb{C}} P^2,\\omega_{F},S^1)$ (see Example \\ref{ch}), and to obtain\n$(\\mathsf{M},\\omega,S^1)$ with $b_2(\\mathsf{M})=k$, it is sufficient to perform $(k-1)$ times an $S^1$-equivariant blow-up on $({\\mathbb{C}} P^2,\\omega_{F},S^1)$.\n\\end{proof}\n\\begin{exm}[{\\bf The complex projective space and Hirzebruch surfaces}]\\label{ch}\nConsider ${\\mathbb{C}} P^2$ endowed with a multiple of the Fubini-Study form $\\omega_F$, and a `standard' $S^1$-action, namely $S^1$ is a circle subgroup of a 2-dimensional torus $\\mathbb{T}^2$ acting on ${\\mathbb{C}} P^2$\nin a toric way. 
Thus the $S^1$-action is given by\n$\\alpha \\cdot [z_0:z_1:z_2]=[z_0:\\alpha^l z_1:\\alpha^{l+m}z_2]$\nfor every $\\alpha\\in S^1$ (where $l$ and $m$ are non-zero, coprime integers); it has three fixed points and is Hamiltonian. Note that the minimal Chern number of ${\\mathbb{C}} P^2$ is $3$.\nWe denote this $S^1$-space by $({\\mathbb{C}} P^2,\\lambda \\,\\omega_F,S^1)_{l,m}$, where $\\lambda \\in \\mathbb{R}_{>0}$.\n\nFor every $k\\in {\\mathbb{Z}}$, let $\\mathcal{H}_k$ be the Hirzebruch surface $\\{([z_0:z_1:z_2],[w_1:w_2])\\in {\\mathbb{C}} P^2 \\times {\\mathbb{C}} P^1\\mid z_1\\,w_2^k=z_2\\,w_1^k\\}$, endowed\nwith the symplectic form $\\widetilde{\\omega}$ induced by multiples of the Fubini-Study forms on ${\\mathbb{C}} P^2$ and ${\\mathbb{C}} P^1$. We can give each $\\mathcal{H}_k$ an $S^1$-action, defined by\n $\\alpha\\cdot ([z_0:z_1:z_2],[w_1:w_2])=([\\alpha^l z_0:z_1:\\alpha^{k\\,m}z_2],[w_1:\\alpha^m w_2])$, where $l$ and $m$ are non-zero, coprime integers. This action has $4$ fixed points and is Hamiltonian.\n We denote these $S^1$-spaces by $(\\mathcal{H}_k,\\widetilde{\\omega},S^1)_{l,m}$.\n Note that the minimal Chern number of $\\mathcal{H}_k$ is $1$ if $k$ is odd and $2$ if $k$ is even, and $\\mathcal{H}_k$ is respectively called an \\emph{odd} or \\emph{even} Hirzebruch surface.\n \n\\end{exm}\n\\begin{rmk}\\label{minimal spaces}\nThe examples above are exactly the \\emph{minimal spaces} obtained in the classification of $(\\mathsf{M},\\omega,S^1)$ of dimension $4$ (if the fixed point set is not discrete, there is an additional class of minimal spaces\ngiven by ${\\mathbb{C}} P^1$-bundles over Riemann surfaces of genus $g\\geq 1$), see \\cite{AH,Au} and \\cite{K}. 
More precisely, in \\cite{K} Karshon proves that every $(\\mathsf{M},\\omega,S^1)$ is equivariantly symplectomorphic to a symplectic\n$S^1$-space obtained from $({\\mathbb{C}} P^2,\\lambda \\,\\omega_F,S^1)_{l,m}$ or\n $(\\mathcal{H}_k,\\widetilde{\\omega},S^1)_{l,m}$ (for suitable $\\lambda, l,m,k$ as above) by a sequence of $S^1$-equivariant blow-ups at fixed points.\\footnote{Note that the blow-up of $({\\mathbb{C}} P^2,\\lambda \\,\\omega_F,S^1)_{l,m}$\n at one fixed point is an odd Hirzebruch surface.} \n \\end{rmk}\n\\begin{rmk}\\label{ci 2}\nObserve that for every $S^1$-space $(\\mathsf{M},\\omega,S^1)$ of dimension $4$ the following inequality holds:\n\\begin{equation}\\label{ci 12}\n\\mathsf{c}_1^2[\\mathsf{M}]\\leq 3\\mathsf{c}_2[\\mathsf{M}]\\,.\n\\end{equation}\nIndeed, Corollary \\ref{geo s} implies that \\eqref{ci 12} is equivalent to $b_2(\\mathsf{M})\\geq 1$.\nNote that \\eqref{ci 12} was conjectured by Van de Ven \\cite{V}\nand proved by Miyaoka \\cite{Mi}\nfor (complex) surfaces of general type. \n\nThe following question is then natural:\n\\begin{question}\\label{inac?}\nLet $(\\M,\\J,S^1)$ be an $S^1$-space. Does inequality \\eqref{ci 12} hold? \n\\end{question}\n\\end{rmk}\nBy \\eqref{c2 dim 4} and \\eqref{c1 n=2}, proving inequality \\eqref{ci 12} for $(\\M,\\J,S^1)$ is equivalent to proving that for every such space $N_0\\leq N_1$. \nThe next proposition \nimplies that the answer to question \\ref{inac?} is `yes' for all $4$-dimensional $S^1$-spaces whose index is not one.\n\n\\begin{prop}\\label{dim 4}\nLet $(\\mathsf{M},\\mathsf{J},S^1)$ be an $S^1$-space of dimension $4$, and let\n$\\k0$, $\\Hi(z)$ and the $N_j$'s be defined as before.\nThen $N_0, N_1$ and $N_2$ are all non-zero, the first Chern class $\\mathsf{c}_1$ is not a torsion element in $H^2(\\mathsf{M};{\\mathbb{Z}})$, and $\\k0\\in \\{1,2,3\\}$. 
Moreover \n\\begin{itemize}\n\\item[(a)] If $\\k0=3$ then\n\\begin{equation}\\label{dim4 1}\nN_0=N_1=N_2,\\;\\;\\;\\quad \\mathsf{c}_1^2[\\mathsf{M}]=9N_0\\;\\;\\;\\quad \\mbox{and}\\;\\;\\; \\quad \\Hi(z)=\\frac{N_0}{2}(z+1)(z+2).\n\\end{equation}\n\\item[(b)] If $\\k0=2$ then\n\\begin{equation}\\label{dim4 2}\n2N_0=N_1=2N_2,\\;\\;\\;\\quad \\mathsf{c}_1^2[\\mathsf{M}]=8N_0\\;\\;\\;\\quad \\mbox{and}\\;\\;\\; \\quad \\Hi(z)=N_0(z+1)^2. \n\\end{equation}\n\\end{itemize}\n$\\;$\\\\\nGiven $(\\mathsf{M},\\omega,S^1)$ of dimension $4$ we have that\n\\begin{itemize}\n \\item[(a')] $\\k0=3$ if and only if there exist $\\lambda>0$ and coprime integers $l,m$ such that $(\\mathsf{M},\\omega,S^1)$ is equivariantly symplectomorphic to $({\\mathbb{C}} P^2,\\lambda\\,\\omega_F,S^1)_{l,m}$.\n\\item[(b')] $\\k0=2$ if and only if there exist coprime integers $l,m$, an even $k\\in {\\mathbb{Z}}$ and a symplectic form $\\widetilde{\\omega}$ on $\\mathcal{H}_k$ such that $(\\mathsf{M},\\omega,S^1)$ is equivariantly symplectomorphic to $(\\mathcal{H}_k,\\widetilde{\\omega},S^1)_{l,m}$.\n\\end{itemize}\n\\end{prop}\n\n\\begin{proof}\nLet $p$ be a fixed point, and $e^{S^1}(p)\\in H_{S^1}^4(\\{p\\};{\\mathbb{Z}})$, where $H_{S^1}^*(\\{p\\};{\\mathbb{Z}})={\\mathbb{Z}}[x]$, the equivariant Euler class of the normal bundle at $p$, which is simply given by $w_{1p}w_{2p}x^2$, where $w_{1p}$ and $w_{2p}$ are the weights of the isotropy $S^1$-action at $p$. By the ABBV formula (Thm.\\ \\ref{abbv formula}) we must have\n\\begin{equation}\\label{ABBV}\n\\sum_{p\\in \\mathsf{M}^{S^1}}\\frac{1}{e^{S^1}(p)}=1[\\mathsf{M}]=0\\,.\n\\end{equation} \nSo it follows that $\\mathsf{M}^{S^1}$ must contain points for which the product of the corresponding weights is positive, as well as points for which it is negative. Thus $N_1\\neq 0$ and $N_0+N_2\\neq 0$, and the latter, together with \\eqref{NiN}, implies that $N_0$ and $N_2$ are non-zero. 
From Lemma \\ref{c1 N0} (a2) it follows that $\\mathsf{c}_1$ is not a torsion element in $H^2(\\mathsf{M};{\\mathbb{Z}})$, and by Corollary \\ref{bound on k0} ({\\bf i}) that $\\k0\\in \\{1,2,3\\}$. \n\n\nIf $\\k0=3$, by Proposition \\ref{cor n+1} \\eqref{c1 n+1} we have $\\mathsf{c}_1^2[\\mathsf{M}]=9 N_0$ which, together with \\eqref{c1 n=2} and \\eqref{NiN},\nimplies $N_0=N_1=N_2$. The expression for the Hilbert polynomial follows immediately from Proposition \\ref{cor n+1}.\nThe claims in (b) follow similarly by using Proposition \\ref{cor n}.\n\nSuppose that $(\\mathsf{M},\\omega)$ and the $S^1$-action are symplectic. By Lemma \\ref{N0 1} and the fact that $N_0\\neq 0$ we have that the action is Hamiltonian and $N_0=1$ (this also reproves\nMcDuff's theorem \\cite{MD1} in the case in which the fixed point set is discrete).\nObserve that blowing-up at one fixed point increases the second Betti number $b_2$ by $1$. It follows that the two families of minimal spaces in Remark \\ref{minimal spaces}\n are the only compact, connected symplectic manifolds of dimension $4$ that can be endowed with a symplectic circle action with isolated fixed points, with $b_2\\leq 2$.\nIf $\\k0=3$, from (a) and \\eqref{bi=Ni} we have that $b_0(\\mathsf{M})=b_2(\\mathsf{M})=b_4(\\mathsf{M})=1$, and (a') follows from the classification in \\cite{K}.\nIf $\\k0=2$, from (b) we have that $b_0(\\mathsf{M})=b_4(\\mathsf{M})=1$ and $b_2(\\mathsf{M})=2$, and the claim in (b') follows as well from \\cite{K}.\n\n\\end{proof}\n\\begin{rmk}\\label{not n}\n\\begin{enumerate}\n\\item Since symplectic $S^1$-spaces of dimension $4$ are completely classified, the claims in (a') and (b') also follow from the classification in \\cite{K}. 
However, we would like to\npoint out that Proposition \\ref{dim 4} (a) and (b) immediately implies that for $\\k0=3$ the Betti numbers of $(\\mathsf{M},\\omega,S^1)$ are exactly those of ${\\mathbb{C}} P^2$, and for $\\k0=2$ they are exactly those of a Hirzebruch surface.\n\\item The numbers $\\lambda,l,m$ appearing in (a') are determined by the `Karshon graph' $\\Gamma$ associated to \n$(\\mathsf{M},\\omega,S^1)$, as described carefully in \\cite{K}; a similar conclusion holds for the case in (b').\n\\end{enumerate}\n\\end{rmk}\n\nIf $\\k0=1$, Proposition \\ref{k0=n-1} implies that the Hilbert polynomial\ndepends on the value of $ \\mathsf{c}_1^2[\\mathsf{M}]$. It is interesting to study the position of the roots of $\\Hi(z)$ in terms of \n $\\beta=\\frac{N_1}{N_0}$. Observe that, by Proposition \\ref{dim 4}, $\\beta>0$, and if the action is Hamiltonian (and the manifold is connected) then $\\beta=b_2(\\mathsf{M})$.\nFrom the definition of Hilbert polynomial of $(\\mathsf{M},\\mathsf{J})$ and \\eqref{c1 n=2} (see Proposition \\ref{k0=n-1}), it is immediate to see that \n \\begin{equation}\\label{Hil n=2 k0=1}\n \\Hi(z)=\\frac{N_0}{2}\\big[(10-\\beta)z^2+(10-\\beta)z+2\\big]\\,.\n \\end{equation}\n Thus for $\\beta\\neq 10$ the roots, which are of the form $-\\frac{1}{2}\\pm a$ with $a$ either real or purely imaginary, have the following positions:\n \\begin{itemize}\n \\item for $0<\\beta<2$ or $\\beta>10$ they are real and distinct;\n \\item for $\\beta=2$ they are real and coincide;\n \\item for $2<\\beta<10$ they live on the axis $-\\frac{1}{2}+iy$, for $y\\in \\mathbb{R}\\setminus\\{0\\}$.\n \\end{itemize}\n Moreover when $\\left\\| \\mathsf{c}_1^2[\\mathsf{M}]\\right\\|\\to +\\infty$, or equivalently when $\\beta\\to +\\infty$, the roots cluster around the ``foci'' $0$ and $-1$.\n\nObserve that by Proposition \\ref{dim 4}, in the symplectic case it is impossible to have $\\k0=1$ and $\\beta=b_2(\\mathsf{M})\\leq 2$. 
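The trichotomy above can be verified directly from the discriminant of the quadratic factor in \\eqref{Hil n=2 k0=1}; we include this routine check for convenience:\n$$\n\\Delta=(10-\\beta)^2-8(10-\\beta)=(10-\\beta)(2-\\beta)\\,,\n$$\nso that for $\\beta\\neq 10$ the roots $-\\frac{1}{2}\\pm\\frac{\\sqrt{\\Delta}}{2(10-\\beta)}$ are real and distinct exactly when $\\Delta>0$, i.e.\\;for $0<\\beta<2$ or $\\beta>10$; they coincide when $\\Delta=0$, i.e.\\;for $\\beta=2$; and they are complex conjugate on the axis $-\\frac{1}{2}+iy$ when $\\Delta<0$, i.e.\\;for $2<\\beta<10$.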
\nMoreover, we can have manifolds with $b_2(\\mathsf{M})$ arbitrarily large; it is sufficient to blow-up ${\\mathbb{C}} P^2$ as many times\nas we want.\n\n\\subsection{$\\mathbf{\\dim(\\mathsf{M})=6}$} \nNow suppose that $\\dim(\\mathsf{M})=6$. As a consequence of \\eqref{NiN} and \\eqref{eq cn} we have that\n\\begin{equation}\\label{c3 6}\n\\mathsf{c}_3[\\mathsf{M}]=2(N_0+N_1)\\,,\n\\end{equation}\nand, as a direct consequence of Theorem \\ref{nostro}, that\n\\begin{equation}\\label{c1c2 6}\n\\mathsf{c}_1\\mathsf{c}_2[\\mathsf{M}]=24\\,N_0\\,.\n\\end{equation}\n\\begin{rmk}\\label{pos 2}\nIn dimension $6$ the congruences that must be satisfied by the Chern numbers are\n$\\mathsf{c}_1\\mathsf{c}_2[\\mathsf{M}]\\equiv 0 \\mod{24}$, and $\\mathsf{c}_1^3[\\mathsf{M}]\\equiv \\mathsf{c}_3[\\mathsf{M}]\\equiv 0 \\mod{2}$. Equations \\eqref{c3 6} and \\eqref{c1c2 6} show that \nfor $S^1$-spaces $\\mathsf{c}_1\\mathsf{c}_2[\\mathsf{M}]$ is always a \\emph{non-negative} multiple of $24$, and $\\mathsf{c}_3[\\mathsf{M}]$ a \\emph{positive} multiple of $2$. However\nour method does not give (in)equalities for $\\mathsf{c}_1^3[\\mathsf{M}]$, unless $\\k0=3,4$, see Proposition \\ref{dim 6}.\n\\end{rmk}\nThe following proposition \nfollows immediately from\nPropositions \\ref{cor n+1}, \\ref{cor n} and Lemma \\ref{N0 1}:\n\\begin{prop}[$\\mathbf{\\dim(\\mathsf{M})=6},\\;\\k0=3,4$]\\label{dim 6}\nLet $(\\mathsf{M},\\mathsf{J},S^1)$ be an $S^1$-space of dimension $6$, and let\n$\\k0$, $\\Hi(z)$ and the $N_j$'s be defined as before. 
\n\\begin{itemize}\n\\item[(a)] If $\\k0=4$ then \n$$\n\\mathsf{c}_1^3[\\mathsf{M}]=64 N_0\\quad\\;\\;\\;\\mbox{and}\\;\\;\\;\\quad \\Hi(z)=\\frac{N_0}{6}(z+1)(z+2)(z+3).\n$$\n\\item[(b)] If $\\k0=3$ then\n$$\n\\mathsf{c}_1^3[\\mathsf{M}]=54 N_0\\quad\\;\\;\\;\\mbox{and}\\;\\;\\;\\quad \\Hi(z)=\\frac{N_0}{6}(2z+3)(z+1)(z+2).\n$$\n\\end{itemize} \nIf we are given $(\\mathsf{M},\\omega,S^1)$ of dimension $6$ we have that: \n\\begin{itemize}\n\\item[(i)] If the action is Hamiltonian, then $\\k0=4$ implies $(\\mathsf{c}_1^3[\\mathsf{M}],\\mathsf{c}_1\\mathsf{c}_2[\\mathsf{M}])=(64,24)$, and \n$\\k0=3$ implies $(\\mathsf{c}_1^3[\\mathsf{M}],\\mathsf{c}_1\\mathsf{c}_2[\\mathsf{M}])=(54,24)$.\n\\item[(ii)] If the action is non-Hamiltonian, then for all $\\k0\\geq 3$ we have $(\\mathsf{c}_1^3[\\mathsf{M}],\\mathsf{c}_1\\mathsf{c}_2[\\mathsf{M}])=(0,0)$.\n\\end{itemize}\n\\end{prop}\n\nWhen $\\k0<3$, the Chern number $ \\mathsf{c}_1^3[\\mathsf{M}]$ and the Hilbert polynomial $\\Hi(z)$ are not determined by the index, $N_0$ and $N_1$ (see Remark \\ref{not same}).\nFor example, if $\\k0=2$ then from Proposition \\ref{k0=n-1} it follows that for $N_0\\neq 0$ and $ \\mathsf{c}_1^3[\\mathsf{M}]\\neq 0$ we have \n\\begin{equation}\\label{k0=1,2}\n\\mathsf{c}_1^3[\\mathsf{M}]=\\frac{48 N_0}{1-a}\\quad\\mbox{and}\\quad \\Hi(z)=\\frac{N_0}{1-a}\\big[z^2+2z+1-a\\big](z+1)\n\\end{equation}\nwhere $a\\neq 1$. \nThus the roots of $\\Hi(z)\/(z+1)$ are real exactly if $ \\mathsf{c}_1^3[\\mathsf{M}]\\geq 48\\,N_0\\;\\;$ or $\\;\\;\\mathsf{c}_1^3[\\mathsf{M}]<0$.\nMoreover they cluster around the ``foci\" $0$ and $-2$ exactly if $\\left\\| \\mathsf{c}_1^3[\\mathsf{M}]\\right\\|\\to +\\infty$.\n\\begin{exm}\\label{examples 6} In the following we give examples of manifolds of dimension $6$ with $\\k0=2$, together with their associated Hilbert polynomials.\n\\begin{itemize}\n\\item[(1)] \\emph{The flag variety }$\\mathcal{F}l({\\mathbb{C}}^3)=:\\mathcal{F}$. 
The variety of complete flags in ${\\mathbb{C}}^3$ is a compact symplectic (indeed K\\\"ahler) manifold of dimension $6$ which can be endowed with a Hamiltonian $S^1$-action with exactly $6$ fixed points; for details about the action see \\cite[Example 5.5]{GoSa} and the discussion preceding it. The reader can verify that the definition of $\\k0$ given here coincides with that of $C$ given in \\cite{GoSa}, hence \n$\\k0=2$. Moreover $\\mathsf{c}_1^3[\\mathcal{F}]=48$, and the Hilbert polynomial is $\\Hi_{\\mathcal{F}}(z)=(z+1)^3$.\n\\item[(2)] \\emph{The product of spheres $S^2\\times S^2\\times S^2=:\\mathcal{S}$}. This is a compact symplectic (indeed K\\\"ahler) manifold which can be endowed with a Hamiltonian $S^1$-action with exactly $2^3=8$ fixed points. Moreover it can be checked that\n$\\mathsf{c}_1^3[\\mathcal{S}]=48$, and the Hilbert polynomial is $\\Hi_{\\mathcal{S}}(z)=(z+1)^3$.\n\\item[(3)] \\emph{The Fano threefold $V_5$} (for details see \\cite{M, T1} or \\cite[Example 6.14]{GoSa}). This is a Fano manifold which can be endowed with a Hamiltonian $S^1$-action with exactly $4$ fixed points. The cohomology ring is given by ${\\mathbb{Z}}[x,y]\/\\langle x^2-5y,y^2 \\rangle$ (where $x$ has degree $2$, and $y$ degree $4$), $\\k0=2$ and $\\mathsf{c}_1=2x$. Thus $\\mathsf{c}_1^3[V_5]=40$,\nand the Hilbert polynomial is $\\Hi_{V_5}(z)=\\frac{1}{6}\\big[5z^2+10z+6\\big](z+1)$.\n\\item[(4)] \\emph{A non-K\\\"ahler example} $n\\mathcal{K}$. In \\cite{T1}, Tolman constructs a $6$-dimensional compact symplectic manifold which supports a Hamiltonian action of a $2$-dimensional torus $T$\nwith isolated fixed points, but does not admit any $T$-invariant K\\\"ahler structure. Moreover this action is GKM (see \\cite{GKM}), and its index $\\k0$, as well as the Chern number $\\mathsf{c}_1^3[n\\mathcal{K}]$, can be computed from its GKM graph (see \\cite{GT}, in particular Example 5.2 and Figure 1, as well as the discussion on page 27 in \\cite{GoSa}). 
It can be checked that in this case $\\mathsf{c}_1=4 \\tau_1 + 2 \\tau_2$, where $\\tau_i\\in H^2(\\mathsf{M};{\\mathbb{Z}})$ is the image under $r_H$ of\nthe canonical class $\\tau_i^{T}\\in H_T^2(\\mathsf{M};{\\mathbb{Z}})$ introduced in \\cite{GT}, for $i=1,2$. Since $H^2(\\mathsf{M};{\\mathbb{Z}})={\\mathbb{Z}}\\langle \\tau_1,\\tau_2 \\rangle$, we have $\\k0=2$. Moreover $\\mathsf{c}_1^3[n\\mathcal{K}]=64$ and the Hilbert polynomial is $\\Hi_{n\\mathcal{K}}(z)=\\frac{1}{3}\\big[4z^2+8z+3\\big](z+1)$.\n\\end{itemize}\n\\begin{rmk}\\label{not same}\nNotice that the flag variety in (1) and the non-K\\\"ahler example in (4) have the same index and the same Betti numbers (hence the same $N_j$'s), but different values of $\\mathsf{c}_1^3[\\mathsf{M}]$ and different Hilbert polynomials. \n\\end{rmk}\n\\end{exm}\nIf $\\k0=1$ then from Proposition \\ref{k0=n-2} it follows that for $N_0\\neq 0$ and $\\mathsf{c}_1^3[\\mathsf{M}]\\neq 0$ we have \n\\begin{equation}\\label{k0=1,2 2}\n\\mathsf{c}_1^3[\\mathsf{M}]=\\frac{48 N_0}{1-4a}\\quad\\mbox{and}\\quad \\Hi(z)=\\frac{N_0}{1-4a}\\big[4z^2+4z+1-4a\\big](2z+1)\n\\end{equation}\nwhere $a\\neq \\frac{1}{4}$. \nThus the roots of $\\Hi(z)\\/(2z+1)$ are real exactly if $\\mathsf{c}_1^3[\\mathsf{M}]\\geq 48\\,N_0\\;\\;$ or $\\;\\;\\mathsf{c}_1^3[\\mathsf{M}]<0$.\nMoreover they cluster around the ``foci'' $0$ and $-1$ exactly if $\\left\\|\\mathsf{c}_1^3[\\mathsf{M}]\\right\\|\\to +\\infty$.\n\n\\begin{exm}\\label{examples 6-1}\nIn the following we give examples of manifolds of dimension $6$ with $\\k0=1$, together with their associated Hilbert polynomials.\n\\begin{itemize}\n\\item[(1)] ${\\mathbb{C}} P^1\\times {\\mathbb{C}} P^2=:\\mathcal{C}$. This is a compact symplectic (indeed K\\\"ahler) manifold which can be endowed with a Hamiltonian $S^1$-action with $6$ fixed points. 
Moreover $\\mathsf{c}_1^3[\\mathcal{C}]=54$, and the Hilbert polynomial is $\\Hi_{\\mathcal{C}}(z)=\\frac{1}{2}\\big[9z^2+9z+2\\big](2z+1)$.\n\\item[(2)] \\emph{The Fano threefold $V_{22}$} (for details see \\cite{M, T1} or \\cite[Example 6.14]{GoSa}). Similarly to Example \\ref{examples 6} (3), this is a Fano manifold which can be endowed with a Hamiltonian $S^1$-action with exactly $4$ fixed points. The cohomology ring is given by ${\\mathbb{Z}}[x,y]\/\\langle x^2-22y,y^2 \\rangle$ (where $x$ has degree $2$, and $y$ degree $4$), $\\k0=1$ and $\\mathsf{c}_1=x$. Thus $\\mathsf{c}_1^3[V_{22}]=22$, and the Hilbert polynomial is $\\Hi_{V_{22}}(z)=\\frac{1}{6}\\big[11z^2+11z+6\\big](2z+1)$.\n\\end{itemize}\n\\end{exm}\n\n\\subsection{$\\mathbf{\\dim(\\mathsf{M})=8}$}\nWhen $\\dim(\\mathsf{M})=8$, from \\eqref{NiN} and \\eqref{eq cn} we have that\n\\begin{equation}\\label{eq c4 8}\n \\mathsf{c}_4[\\mathsf{M}]= 2\\,N_0+2\\,N_1+N_2\\,,\n\\end{equation}\nand from Theorem \\ref{nostro}\n\\begin{equation}\\label{c1c3 8}\n \\mathsf{c}_1\\mathsf{c}_3[\\mathsf{M}]=44\\,N_0+8\\,N_1-2\\,N_2\\,.\n\\end{equation}\nAs for the remaining Chern numbers, we can use Propositions \\ref{cor n+1} and \\ref{cor n} to prove the following\n\\begin{prop}[$\\mathbf{\\dim(\\mathsf{M})=8},\\;\\k0=4,5$]\\label{dim 8}\nLet $(\\mathsf{M},\\mathsf{J},S^1)$ be an $S^1$-space of dimension $8$, and let\n$\\k0$, $\\Hi(z)$ and the $N_j$'s be defined as before. 
\n\\begin{itemize}\n\\item[(a)] If $\\k0=5$ then\n\\begin{equation}\\label{k0=5 8}\n \\mathsf{c}_1^4[\\mathsf{M}]=625\\,N_0\\,,\\quad \\mathsf{c}_1^2\\mathsf{c}_2[\\mathsf{M}]=250\\,N_0\\,,\\quad \\mathsf{c}_2^2[\\mathsf{M}]=101\\,N_0-2\\,N_1+N_2\\,,\n\\end{equation}\nand $\\Hi(z)= \\displaystyle\\frac{N_0}{24}\\prod_{j=1}^4(z+j)$\\,.\n\\item[(b)] If $\\k0=4$ then\n\\begin{equation}\\label{k0=4 8}\n \\mathsf{c}_1^4[\\mathsf{M}]=512\\,N_0\\,,\\quad \\mathsf{c}_1^2\\mathsf{c}_2[\\mathsf{M}]=224\\,N_0\\,,\\quad \\mathsf{c}_2^2[\\mathsf{M}]=98\\,N_0-2\\,N_1+N_2\\,,\n\\end{equation}\nand $\\Hi(z)= \\displaystyle\\frac{N_0}{12}(z+2)\\prod_{j=1}^3(z+j)$.\n\\end{itemize} \nMoreover, if $(\\mathsf{M},\\omega)$ is a connected symplectic manifold and the $S^1$-action is Hamiltonian then\n\\begin{itemize}\n\\item[(a')] if $\\k0=5$ we have\n\\begin{equation}\\label{k0=5 8 1}\n \\mathsf{c}_1^4[\\mathsf{M}]=625\\,,\\quad \\mathsf{c}_1^2\\mathsf{c}_2[\\mathsf{M}]=250\\,,\\quad \\mathsf{c}_2^2[\\mathsf{M}]=101-2\\,b_2(\\mathsf{M})+b_4(\\mathsf{M});\n\\end{equation}\n\\item[(b')] if $\\k0=4$ we have\n\\begin{equation}\\label{k0=5 8 2}\n \\mathsf{c}_1^4[\\mathsf{M}]=512\\,,\\quad \\mathsf{c}_1^2\\mathsf{c}_2[\\mathsf{M}]=224\\,,\\quad \\mathsf{c}_2^2[\\mathsf{M}]=98-2\\,b_2(\\mathsf{M})+b_4(\\mathsf{M}).\n\\end{equation}\n\\end{itemize}\nIf $(\\mathsf{M},\\omega)$ is a connected symplectic manifold and the $S^1$-action is non-Hamiltonian then for all $\\k0\\geq 4$ \n\\begin{equation}\\label{non-ham 8}\n \\mathsf{c}_1^4[\\mathsf{M}]= \\mathsf{c}_1^2\\mathsf{c}_2[\\mathsf{M}]=0\\quad\\mbox{and}\\quad \\mathsf{c}_2^2[\\mathsf{M}]=-2N_1+N_2\n\\end{equation}\n\\end{prop}\n\\begin{proof}\nThe only claims in \\eqref{k0=5 8} and \\eqref{k0=4 8} which do not follow directly from Propositions \\ref{cor n+1} and \\ref{cor n} are the expressions of $ \\mathsf{c}_2^2[\\mathsf{M}]$ in terms of the $N_j$'s. 
In order to obtain them, it is sufficient to use the expression of the Todd genus given in Corollary \\ref{todd genus comp},\nwhich for $n=4$ gives\n\\begin{equation}\\label{todd 4}\n \\frac{-\\mathsf{c}_1^4+4\\mathsf{c}_1^2\\mathsf{c}_2+3\\mathsf{c}_2^2+\\mathsf{c}_1\\mathsf{c}_3-\\mathsf{c}_4}{720}[\\mathsf{M}]=N_0\\,.\n\\end{equation}\nBy combining \\eqref{todd 4} with \\eqref{c1 n+1}, \\eqref{c1c2}, \\eqref{c1 n} and \\eqref{c1c22} we obtain the desired claims.\nIn the symplectic case, all the claims follow from Lemma \\ref{N0 1}, \\eqref{bi=Ni}, Corollary \\ref{bound on k0} and \\eqref{todd 4}.\n\n\\end{proof}\nWhen $\\k0=3$ or $\\k0=2$, from Proposition \\ref{k0=n-1} and \\ref{k0=n-2} we can see that the coefficients of the Hilbert polynomial depend on the value of $ \\mathsf{c}_1^4[\\mathsf{M}]$. The following proposition exhibits the\nrelation between $ \\mathsf{c}_1^2\\mathsf{c}_2[\\mathsf{M}]$, $ \\mathsf{c}_2^2[\\mathsf{M}]$ and $ \\mathsf{c}_1^4[\\mathsf{M}]$.\n\\begin{prop}[$\\mathbf{\\dim(\\mathsf{M})=8},\\;\\k0=2,3$]\\label{dim 8 2}\nLet $(\\mathsf{M},\\mathsf{J},S^1)$ be an $S^1$-space of dimension $8$, and let\n$\\k0$, $\\Hi(z)$ and the $N_j$'s be defined as before. 
Then \n\\begin{itemize}\n\\item[(a)] $\\k0=3$ implies that \n\\begin{equation}\n\\label{k0=3 c1c2} \\mathsf{c}_1^2\\mathsf{c}_2[\\mathsf{M}]=108\\,N_0+\\frac{2}{9} \\mathsf{c}_1^4[\\mathsf{M}]\\,,\n\\end{equation}\nand\n\\begin{equation}\n\\label{k0=3 c22} \\mathsf{c}_2^2[\\mathsf{M}]=82\\,N_0-2\\,N_1+N_2+\\frac{1}{27} \\mathsf{c}_1^4[\\mathsf{M}]\\,.\n\\end{equation} \n\n\\item[(b)] $\\k0=2$ implies that \n\\begin{equation}\n\\label{k0=2 c1c2} \\mathsf{c}_1^2\\mathsf{c}_2[\\mathsf{M}]=96\\,N_0+\\frac{1}{4} \\mathsf{c}_1^4[\\mathsf{M}]\\,,\n\\end{equation}\nand\n\\begin{equation}\n\\label{k0=2 c22} \\mathsf{c}_2^2[\\mathsf{M}]=98\\,N_0-2\\,N_1+N_2\\,.\n\\end{equation} \n\n\\end{itemize}\n\n\\end{prop}\n\n\n\\begin{proof}\n\n(a) In order to prove \\eqref{k0=3 c1c2}, it is sufficient to use Corollary \\ref{relation c122}, and\nequation \\eqref{k0=3 c22} can be obtained by combining \\eqref{todd 4} with \\eqref{eq c4 8}, \\eqref{c1c3 8} and \\eqref{k0=3 c1c2}.\n\n(b) Equation \\eqref{k0=2 c1c2} follows from Corollary \\ref{relation c122 2}, and\n\\eqref{k0=2 c22} can be obtained by combining \\eqref{todd 4} with \\eqref{eq c4 8}, \\eqref{c1c3 8} and \\eqref{k0=2 c1c2}.\n\\end{proof}\nWe conclude this section with the following corollary:\n\\begin{corollary}\\label{c228h}\nLet $(\\mathsf{M},\\omega)$ be a compact, connected symplectic manifold of dimension $8$ that can be endowed with a Hamiltonian circle action with\nisolated fixed points. If the minimal Chern number is \\emph{even}, then \n$$\n\\mathsf{c}_2^2[\\mathsf{M}]+2\\,b_2(\\mathsf{M})=98+b_4(\\mathsf{M})\\,.\n$$\n\\end{corollary}\n\\begin{proof}\nIf $(\\mathsf{M},\\omega)$ can be endowed with a Hamiltonian circle action with isolated fixed points, then by Corollary \\ref{minimal chern ham} the minimal Chern number coincides with the index, and it can be only $1,2,3,4$ or $5$. 
Since it is even, the claim follows from \n\\eqref{k0=5 8 2}, \\eqref{k0=2 c22} and \\eqref{bi=Ni}.\n\\end{proof}\n\n\\begin{rmk}\\label{pos 8}\nIt is easy to check that all the necessary congruences among the Chern numbers for $n=4$ are satisfied; in particular \n$(-\\mathsf{c}_1^4+4\\mathsf{c}_1^2\\mathsf{c}_2+\\mathsf{c}_1\\mathsf{c}_3+3\\mathsf{c}_2^2-\\mathsf{c}_4)[\\mathsf{M}]$ must be a non-negative multiple of $720$. \nIf $\\k0\\geq 4$ then $(2\\mathsf{c}_1^4+\\mathsf{c}_1^2\\mathsf{c}_2)[\\mathsf{M}]$ must be a non-negative multiple of $12$. However, in general,\nwe cannot conclude such non-negativity results.\n\\end{rmk}\n\n\\begin{rmk}\\label{comparison}\nIt can be checked that \\eqref{k0=2 c1c2} is equivalent to equation (7.22) in \\cite{GoSa}; here $\\mathsf{M}$ is an $8$-dimensional\ncompact symplectic manifold,\nwith a Hamiltonian $S^1$-action and exactly $5$ fixed points.\nEquation (7.22) in \\cite{GoSa} is obtained by applying some results of Hattori (see \\cite{Ha}, and Corollary 7.7, Theorem 7.11 in \\cite{GoSa}) which, however, only hold \nwhenever $(\\mathsf{M},\\mathsf{J})$ possesses a fine line bundle. Moreover, the derivation of (7.22) from such results is rather complicated, as can be seen\nfrom the proof of \\cite[Theorem 7.11]{GoSa}.\nHere we do not need to assume the existence of\na fine line bundle, and \\eqref{k0=2 c1c2} is an immediate consequence of Corollary \\ref{relation c122 2}.\n\\end{rmk}\n\nWhen $\\k0=1$ we do not obtain any restrictions on the Chern numbers (see Corollary \\ref{cor equations chern numbers}, as well as\nthe discussion on the case $\\k0=n-3$ at the end of Section \\ref{sec: values k0}). \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{Sec:intro} \n\nRecently, new applications have been developed to study microscopically the reactions between superfluid nuclei \\cite{Has16,Mag16}.
Using the Time-Dependent Hartree-Fock-Bogoliubov (TDHFB) theory with a Gogny interaction, the reaction $^{20}$O+$^{20}$O is simulated in Ref. \\cite{Has16}. It is shown that, in this reaction where both fragments are superfluid, the fusion barrier depends on the initial relative gauge angle. An amplitude of $\\Delta B$=0.4 MeV is found between the maximum and minimum heights of the barrier. This difference is due to the pairing interaction between the two fragments that is either attractive or repulsive depending on the relative phase. This effect of the superfluidity is not taken into account in current fusion models \\cite{Bac14}.\n\nFor the heavier system $^{90}$Zr+$^{90}$Zr, Magierski et al., using the FaNDF0 functional without spin-orbit interaction, find a very large amplitude of $\\Delta B$=30 MeV. With the same type of calculation, for the reaction $^{44}$Ca+$^{44}$Ca, a value of $\\Delta B$=2.3 MeV is found \\cite{Sek17}. This effect is also seen in $^{120}$Sn+$^{120}$Sn \\cite{Bul17} and in the asymmetric reaction $^{86}$Zr+$^{126}$Sn \\cite{Sek17b}.\n\nNevertheless, these calculations assume a semi-classical treatment of the collective variables. Indeed, the gauge angles should not be treated as a parameter of the reaction. A more elaborate method is to restore the initial symmetry in both fragments using a projection technique. A first attempt to restore the symmetry in TDHFB has been achieved recently with simplifying assumptions \\cite{Sca17} to study the Josephson effect, but this method cannot be directly used to determine the fusion barrier. \n\nA simpler method to restore the symmetry is proposed in \\cite{Sca17_proc,Reg17}. It assumes an initial uniform distribution of relative gauge angles. Then, from this distribution, an ensemble of independent TDHFB trajectories is computed, leading to a final distribution of the observable of interest.
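A minimal numerical sketch of this ensemble idea is given below; the cosine dependence of the barrier height on the relative gauge angle $\varphi$ and the function name are illustrative assumptions of ours, not a TDHFB result.

```python
import numpy as np

def gauge_ensemble_moments(B_mean, dB, n_angles=1000):
    """Sample the relative gauge angle uniformly on [0, 2*pi) and collect
    the resulting barrier heights; return their mean and standard deviation.

    The toy form B(phi) = B_mean - (dB/2)*cos(phi), with peak-to-peak
    amplitude dB, is an assumption used only for illustration."""
    phi = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    barriers = B_mean - 0.5 * dB * np.cos(phi)
    return barriers.mean(), barriers.std()
```

Under this assumed angular dependence, a uniform angle distribution gives a barrier standard deviation of $\Delta B/(2\sqrt{2})$, i.e. about 0.14 MeV for the amplitude $\Delta B$=0.4 MeV quoted above.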
In a toy model, comparisons to the exact solution show that the first and second moments of the semi-classical TDHFB distributions are accurate with respect to the exact distributions. Hence, it is expected that the TDHFB may reproduce the standard deviation of the barrier distributions. However, it has to be kept in mind that the TDHFB method cannot reproduce the tunneling effect that would increase the fluctuations of the barrier distribution. More complex methods could solve the problem with a simultaneous description of the tunneling effect and the superfluidity,\nfor example a Density-constrained TDHFB method (that remains to be developed) based on the Density-constrained Time-Dependent Hartree-Fock theory \\cite{Uma06} with the consideration of the pairing correlations. In the absence of a more complete theory, one can still consider that the fluctuations of the barrier due to the pairing gauge angle will be convoluted with the fluctuations of the barrier due to the tunneling effect.\n\nAccording to the former TDHFB studies, the following rule can be conjectured for fusion reactions: in reactions where both fragments are superfluid, the second-order fluctuations of the fusion barrier distribution are enhanced compared to similar reactions where at least one of the fragments is not superfluid.\nThe goal of the present work is to search for evidence of this effect through a systematic study of the experimental fusion data. \n\nSystematic studies of fusion cross sections \\cite{Siw04,Wan07,Wan17} usually use a fitting procedure to determine the main parameters of the reaction, which are the barrier height, the fusion radius and the width of the barrier. \nThis method has the drawback that the final result depends on the choice of the model parametrization. A new method is proposed and tested in order to determine those three parameters directly from the barrier distribution without assuming a parametrization of the cross sections.
\n\nThe paper is organized as follows. The local regression method is tested to reduce the uncertainties on the barrier distribution in Section \\ref{Sec:Local_Regression_method}. Then, a benchmark is performed between several methods to determine the fluctuations of the barrier in Section \\ref{Sec:Bench}. A systematic analysis of the fluctuations of the barrier is done in Section \\ref{Sec:syst}. Finally, the summary is given in Section \\ref{sec:summ}.\n\n\n\\section{ Local regression method }\n\\label{Sec:Local_Regression_method}\n\nThe fusion barrier distribution is defined as,\n\t\\begin{align}\n\t\tD(B) = \\left. \\frac1{\\pi R_B^2} \\frac{ d^2[ E \\sigma_{\\rm fus}(E) ] }{dE^2} \\right|_{E=B} ,\n\t\\end{align}\nwith $R_B$ the position of the barrier that is deduced from the normalisation of the barrier distribution. The second derivative is usually computed with the three-point difference formula, \n\t\\begin{align}\n\t\t \\left. \\frac{d^2(E\\sigma_{\\rm fus}(E))}{dE^2} \\right|_{E=E_2} \\simeq \\frac{E_1 \\sigma (E_1) - 2 E_2 \\sigma (E_2) + E_3 \\sigma (E_3)}{ (\\Delta E)^2 } , \\label{eq:3points}\n\t\\end{align}\nwith $E_1=E_2-\\Delta E$ and $E_3=E_2+\\Delta E$.\nThe limitation of this method is the presence of large uncertainties due to the calculation of the second derivative. This uncertainty $\\Delta D$ can be estimated at the point $E$ by\n$ \\Delta D = \\Delta\\sigma(E) \\sqrt{6} E\\sigma(E)\/(\\Delta E)^2$ \\cite{Tim98}.\nIn practice, to diminish the uncertainties, the value of $\\Delta E$ is increased. \nThis produces a smoothing of the barrier distribution. Then, structures in the barrier distribution\nsmaller than $\\Delta E$ will not be visible.
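As a minimal numerical sketch (function names are ours), the three-point estimate of Eq. \eqref{eq:3points} and the quoted uncertainty estimate $\Delta D$ can be written as:

```python
import numpy as np

def d2_three_point(E, sigma, dE):
    """Three-point second derivative of E*sigma(E) on a grid of
    center-of-mass energies sampled with the constant step dE."""
    y = E * sigma
    # (y[i-1] - 2*y[i] + y[i+1]) / dE^2 at the interior grid points
    d2 = (y[:-2] - 2.0 * y[1:-1] + y[2:]) / dE**2
    return E[1:-1], d2

def d2_uncertainty(E, sigma, rel_err, dE):
    """Statistical uncertainty sqrt(6)*E*sigma*rel_err/dE^2 of the
    three-point estimate, with rel_err the relative error on sigma
    (following the Delta D estimate quoted in the text)."""
    return np.sqrt(6.0) * rel_err * (E * sigma)[1:-1] / dE**2
```

Dividing the result by $\pi R_B^2$ then yields the normalized distribution $D(B)$.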
It is also necessary in experiments to have a\nfixed energy step $\\delta E$ when the center-of-mass energy varies; $\\Delta E$ is then a multiple of the $\\delta E$ value.\nIn practice, with this method, a part of the information contained in the experimental data is lost because the second derivative at the point $E_2$\nis computed from the information of only three points while there can be other experimental points in the \nvicinity of $E_2$ that can bring information on the second derivative.\n\n\tFrom this statement, a new technique to calculate the second derivative using the local regression method is proposed here. The idea is to fit the experimental data around the point at energy $E$ with a polynomial function. The fitting procedure is done with a weight function, \n\t\\begin{align}\n \t\t W(E') = \\left\\{\n \t \\begin{array}{cc}\n \t 0 & \\quad |E'-E| > L \\cr\n \t ( 1 - (|E'-E|\/L)^3 )^3 & \\quad |E'-E| \\leqslant L\n \t \\end{array} \n \t \\right. ,\n \t\\end{align}\n %\n with $L$ an adjustable parameter which controls how wide the window around a point $E$ is. The parameters $a_i$ of the polynomial function,\n %\n\t \\begin{align}\n\t\t f_E(x) = \\sum_{i=0}^{N} a_i x^i , \n\t \\end{align}\nare then adjusted to reproduce the experimental values of $\\sigma(E)$. Then, by making this fitting procedure for each window centered on varying energy $E$, the local regression function $F(E)=f_E(E)$ is obtained. If it is assumed that the cross section varies smoothly in the windows around the energy $E$, the function $F(E)$ is expected to be closer to the real $\\sigma(E) $ function than the experimental data that contain a statistical uncertainty.\n\n\\begin{figure}[htb]\n\\begin{center}\n\\includegraphics[width= \\linewidth]{lin_ccful_cross.pdf}\n\\end{center}\n\\caption{ Simulated fusion cross section obtained with the {\\tt CCFULL} program, in linear scale (a) and logarithmic scale (b).
\nThe original data are shown with green lines, the blue dots represent the data with noise and the result of the local regression method $F(E)$ is shown by the red dashed lines. } \n\\label{fig:lin_ccful_cross}\n\\end{figure}\n\n\n\nTo test this method, a fusion cross section is simulated with the program {\\tt CCFULL} \\cite{Hag99}. \nThe reaction $^{40}$Ca+$^{96}$Zr is computed with a nucleus-nucleus Woods-Saxon potential with a parameter set\n$V_0$=87.00 MeV, $r_0$=1.13 fm and $a$=0.7 fm.\n The 3$^-$ collective excitation at energy $E_3$ = 1.89 MeV\nof $^{96}$Zr is taken into account up to three phonons with a deformation parameter $\\beta_3$ = 0.305 \nand the 3$^-$ at energy $E_3$=3.7 MeV of $^{40}$Ca is taken into account up to three phonons with a deformation \nparameter $\\beta_3$=0.43. \nA random error with an amplitude of 5\\% or 2\\% is then added to these data. Note that in order to describe the reaction $^{40}$Ca+$^{96}$Zr, \nit is necessary to take into account the transfer channel \\cite{Ste07,Sca15,Esb16}. Nevertheless, the goal of this calculation is not to realistically describe the fusion barrier distribution of this system but to test the method in the case of a complex barrier which has clear structure effects.\n\n\\begin{figure}[htb]\n\\begin{center}\n\\includegraphics[width= \\linewidth]{test_barrier_ord1.pdf}\n\\end{center}\n\\caption{ Barrier distribution computed from the original {\\tt CCFULL} results with the local regression method (green solid line) and with the three-point formula (black dotted line).\n The results obtained from the data with noise are shown using the local regression method (red line with colored band) and the three-point formula (blue points with error bars).
The second derivative is computed with the parameter $\\Delta E=$2 MeV.\n An artificial noise of 5\\% is applied on (a) and a noise of 2\\% on (b).} \n\\label{fig:test_barrier}\n\\end{figure}\n\n\nThe local regression method with a polynomial function at first order and a parameter $L$=2 MeV \nis then applied to these data and compared to the original cross section of Fig. \\ref{fig:lin_ccful_cross}. \nThe function obtained is found to be closer to the original cross section than the simulated experimental points.\n\nFrom this function, the second derivative is computed with the three-point formula, Eq. \\eqref{eq:3points}.\nNote that a large $\\Delta E$ is still needed to avoid the overfitting problem.\nTo estimate the uncertainties, a Monte-Carlo technique is used. \nA set of points $\\{\\sigma_i\\}$ is created, where each point is modified with a random variable $ \\sigma_i \\rightarrow \\sigma_i + \\zeta_i $ with $ \\langle \\zeta_i \\rangle = 0$ and $ \\langle \\zeta_i^2 \\rangle = \\delta_i^2$, with $\\delta_i$ the uncertainty on the experimental point (here the artificial error). All the $\\zeta_i$ are independent. From this sample, the barrier distribution $D(B)$ is determined. This operation is repeated $N_{\\rm rand}$ times with other random selections.\nAfter $N_{\\rm rand}$ samples, the value of $D(B)$ is computed as the average value and the uncertainty as the standard deviation for each point.\nIn this calculation, the value $N_{\\rm rand}$=100 is chosen. The result with this method is shown in\nFig. \\ref{fig:test_barrier} with two artificial noises of 5\\% and 2\\%. One can see that the local regression \nmethod is more precise than the direct three-point formula. The error bars are smaller and the average\ncurve is closer to the exact solution.\n\n\n\n\nAlso, in Fig. \\ref{fig:test_barrier} (a), in the region between 100 MeV and 105 MeV, the results of the three-point formula do not bring any information on the barrier.
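The smoothing and Monte-Carlo steps described above can be sketched as follows (an illustration with our own function names, not the code used here; note that \texttt{np.polyfit} applies the weights to the residuals, hence the square root of the tricube weight):

```python
import numpy as np

def tricube(u):
    """Tricube weight W of the text: (1 - |u|^3)^3 for |u| <= 1, else 0."""
    u = np.abs(u)
    return np.where(u <= 1.0, (1.0 - u**3)**3, 0.0)

def local_regression(E_data, sig_data, E_eval, L, order=1):
    """Local polynomial fit F(E): at each evaluation energy, fit a
    degree-`order` polynomial to the data weighted by tricube(|E'-E|/L)."""
    E_eval = np.atleast_1d(np.asarray(E_eval, dtype=float))
    F = np.empty_like(E_eval)
    for i, E in enumerate(E_eval):
        w = np.sqrt(tricube((E_data - E) / L))  # sqrt: polyfit weights residuals
        coef = np.polyfit(E_data, sig_data, order, w=w)
        F[i] = np.polyval(coef, E)
    return F

def monte_carlo_band(E_data, sig_data, err, E_eval, L, n_rand=100, seed=1):
    """Resample the data within its error bars n_rand times; return the
    mean of F(E) and its standard deviation as the uncertainty band."""
    rng = np.random.default_rng(seed)
    runs = np.array([local_regression(E_data, rng.normal(sig_data, err),
                                      E_eval, L) for _ in range(n_rand)])
    return runs.mean(axis=0), runs.std(axis=0)
```

A band on $D(B)$ is then obtained in the same way, by applying the three-point formula to each resampled smoothed curve.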
With the local regression, by contrast, one can see a barrier at a position close to the real one. The position and the amplitude get closer to the real ones when the percentage of error is reduced (see Fig. \\ref{fig:test_barrier} (b)). This method, by reducing the uncertainties, thus allows a finer analysis of the structure of the barrier from experimental cross section data (see for example \\cite{Das98,Mon17}).\n\nIn Fig. \\ref{fig:bar_comp_exp}, this method is tested on the real experimental data \\cite{Tim98} of the reaction $^{40}$Ca+$^{96}$Zr. One can see that the three-point formula induces large uncertainties while the local regression method reduces those uncertainties. Another advantage of this method is to provide a continuous function which can be integrated.\n\n\t\t\t\n\\begin{figure}[htb]\n\\begin{center}\n\\includegraphics[width= \\linewidth]{bar_comp_exp.pdf}\n\\end{center}\n\\caption{ Barrier distribution for the reaction $^{40}$Ca+$^{96}$Zr computed from the experimental cross section \\cite{Tim98} with the three-point formula (blue points with error bars) and from the local regression (red curve and shaded area). The value of $L=\\Delta E$= 1.77 MeV is used. } \n\\label{fig:bar_comp_exp}\n\\end{figure}\t\t\t\t\n\t\t\t\t\n\n\n\n\\section{Determination of the barrier parameters }\n\nIn order to describe the fusion barrier, three parameters are defined: the barrier centroid,\n\\begin{align}\n\tB_0 \t &= \\frac{m^{B}_1}{m^{B}_0}, \\label{eq:comp_B} \n\\end{align}\nthe fusion radius, defined in order to normalize the barrier distribution,\n\\begin{align}\n\tR_B &= \\sqrt{ \\frac{m^{B}_0}{ \\pi } }, \\label{eq:comp_R_B}\n\\end{align}\nand the barrier width,\n\\begin{align}\n\t\\sigma_B \t&= \\sqrt{ \\frac{m^{B}_2}{m^{B}_0} - \\left( \\frac{m^{B}_1}{m^{B}_0} \\right)^2}. \\label{eq:definition_sigmaB}\n\\end{align}\nThese three parameters are computed from the moments of the barrier distribution, \n\\begin{align}\n m^{B}_n = \\int_0^{E_M} B^n \\left.
\\frac{d^2}{dE^2}\\left( \\frac{}{} E \\sigma(E) \\right) \\right|_{E=B} dB. \\label{eq:moment}\n\\end{align}\n$E_M$ is the maximum barrier energy. This formula assumes that above the energy $E_M$ the barrier distribution is zero.\n\n\n\n\n\\label{Sec:Bench}\n\\subsection{Calculation from the barrier distribution}\n\n\\begin{figure}[htb]\n\\begin{center}\n\\includegraphics[width= \\linewidth]{sigma_fct_delta_E.pdf}\n\\end{center}\n\\caption{ (a) Barrier distribution computed by the local regression method for the simulated data\n with parameter $\\Delta E$ = $L$ = 1 MeV (solid and dotted lines) and 4 MeV (triangle and cross markers) \n computed from the {\\tt CCFULL} calculation with (red dotted line and triangles) and without the collective \n excitations (blue solid line and crosses). \n(b) Fluctuations of the barrier distribution from the {\\tt CCFULL} calculation\n with (red triangles) and without the collective excitations (blue crosses) as a function of the three-point derivative parameter $\\Delta E$. A comparison is made with the integration method (eq. \\eqref{eq:sigm_int}) with (red dotted line) and without collective excitations (blue solid line) as a function of $L$. } \n\\label{fig:sigma_fct_delta_E}\n\\end{figure}\n\nIn order to determine the fluctuations of the barrier, the standard deviation of the barrier (eq. \\eqref{eq:definition_sigmaB})\nis computed, with the integration made using only the points that have a positive value of $D(B)$.\n\n\n\n\nThe difficulty of this method is that the result depends on the parameter $\\Delta E$ used to compute the barrier. \nTo show this phenomenon, the effect of the parameter $\\Delta E$ on the barrier distribution is shown in Fig.
\\ref{fig:sigma_fct_delta_E}a.\nTwo test cases are shown: the first one is the same cross section as in Sec. \\ref{Sec:Local_Regression_method}, computed with the \ncollective 3$^-$ excitations that create structures on the barrier distribution, and the second one is a calculation without any collective excitation. \nThe second barrier is almost Gaussian and has small fluctuations. When the value of the $\\Delta E$ parameter increases,\nthe barrier distribution is spread out and the value of $\\sigma_B$ increases. \n\nThe obtained value of $\\sigma_B$ as a function of $\\Delta E$ is shown in Fig. \\ref{fig:sigma_fct_delta_E}b. The value needed is the asymptotic value when $\\Delta E$ tends to zero, which is difficult to attain in practice.\nIt is then not possible to determine the correct value of $\\sigma_B$ without being dependent on the parameter $\\Delta E$.\nNote that in practice, it is also difficult to determine the maximum energy $E_M$.\n\n\n\\subsection{Integral method}\n\n\n\\begin{figure}[htb]\n\\begin{center}\n\\includegraphics[width= \\linewidth]{meth_integr.pdf}\n\\end{center}\n\\caption{$^{40}$Ca+$^{96}$Zr {\\tt CCFULL} fusion cross section multiplied by the energy (red solid line). The function $g(E)$ is shown with a dashed black line. The shaded area represents the integral of eq. \\eqref{eq:sigm_int}.} \n\\label{fig:meth_integr}\n\\end{figure}\n\n\n\n\nOne can avoid the calculation of the second derivative, and thus the convolution problem found in the previous section, by using partial integration of eq. \\eqref{eq:moment},\n\\begin{align}\nm^{B}_0 &= \\left. \\frac{d}{dE}\\left( \\frac{}{} E \\sigma(E) \\right) \\right|_{E=E_M}, \\\\\nm^{B}_1 &= E_M( m^{B}_0 - \\sigma(E_M)), \\\\\nm^{B}_2 &= E_M^2 ( m^{B}_0 - 2 \\sigma(E_M)) + 2 \\int_0^{E_M} E \\sigma(E) dE.\n\\end{align}\nFrom these, simple expressions of the main parameters of the barrier are deduced, \n\\begin{align}\nR_B^2 &= \\frac1{\\pi} \\left.
\\frac{d}{dE}\\left( \\frac{}{} E \\sigma(E) \\right) \\right|_{E=E_M}, \\\\\nB_0 \t\t &= E_M \\left( 1 - \\frac{ \\sigma(E_M) }{ \\pi R_B^2} \\right), \\\\\n\\sigma_B^2 &= \\frac{2}{ \\pi R_B^2 } \\int_0^{E_M} \\left( E \\sigma(E) - g(E) \\right) dE, \\label{eq:sigm_int}\n\\end{align}\n with\n\\begin{align} \n \t\t g(E) = \\left\\{\n \t \\begin{array}{cc}\n \t 0 & \\quad E \\leqslant B_0 \\cr\n \t \\pi R_B^2 ( E -B_0 ) & \\quad E>B_0 \\cr\n \t \\end{array} \n \t \\right. .\n\\end{align}\n\n\n\n\n\nThis method requires computing the derivative of the fusion cross section at the energy $E_M$ and one integral. The integral is computed from the local regression function $F(E)$. In practice, the function $g(E)$ is first adjusted to the experimental curve (see Fig. \\ref{fig:meth_integr}) around the point $E_M$, and then the integral of Eq. \\eqref{eq:sigm_int} is computed from the local regression function $F(E)$. Note that this method is close to the one of Ref. \\cite{Das04} to compute the centroid of the barrier distribution $B_0$.\n\n\n\n\n\nUsing this method on the {\\tt CCFULL} cross section, the values of the barrier fluctuations are $\\sigma_B$ = 4.18 MeV and $\\sigma_B$ = 1.03 MeV respectively with and without excitations. Those values are very stable with respect to the $L$ parameter, as shown in Fig. \\ref{fig:sigma_fct_delta_E}b. As one can expect from Fig. \\ref{fig:meth_integr}, the area between the two curves is very dependent on the slope of the $g(E)$ function. \nThis method is thus limited to experimental data where the slope above the barrier can be well determined. Note that the fitting method should also be very dependent on the slope above the barrier, but this dependence is not explicit in the fitting procedure. In the case of data without a clear slope above the barrier, the fitting procedure will extrapolate from the cross section data below the barrier.
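The integral method above can be sketched numerically as follows, under the stated assumptions (the input curve is already smoothed, e.g. the local-regression function $F(E)$, and the cross section is negligible at the lowest grid energy; function names are ours):

```python
import numpy as np

def barrier_parameters(E, sigma, E_M, h=0.5):
    """Integral-method estimates of (R_B, B_0, sigma_B) from a smooth
    cross section sigma(E) sampled on the energy grid E (MeV)."""
    E = np.asarray(E, dtype=float)
    sig = np.asarray(sigma, dtype=float)
    y = E * sig
    # slope of E*sigma at E_M (centered finite difference) -> pi * R_B^2
    piR2 = (np.interp(E_M + h, E, y) - np.interp(E_M - h, E, y)) / (2.0 * h)
    B0 = E_M * (1.0 - np.interp(E_M, E, sig) / piR2)
    g = np.where(E > B0, piR2 * (E - B0), 0.0)
    # trapezoidal integral of E*sigma(E) - g(E) up to E_M
    mask = E <= E_M
    f, x = (y - g)[mask], E[mask]
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))
    var = 2.0 / piR2 * integral
    return np.sqrt(piR2 / np.pi), B0, np.sqrt(max(var, 0.0))
```

For a sharp classical barrier the sketch recovers $B_0$ and $R_B$ with $\sigma_B\simeq 0$, which provides a quick consistency check.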
If the barrier is more complicated than the fitting function, this extrapolation will not be accurate.\n\n\n\n\nSeveral examples of applications of this method are shown in Fig. \\ref{fig:cross}. To determine the uncertainties, the same Monte-Carlo method as for the barrier distribution is used. For each of those examples, the linear $g(E)$ function can be adjusted to the experimental data without ambiguity. In this panel of 6 cross sections, the uncertainties on the values of the fluctuations of the barrier vary from 1\\% in the case where the quality of the experimental cross section is very good ($^{40}$Ca+$^{96}$Zr) to 4\\% where the number of points is smaller and the uncertainties larger ($^{40}$Ca+$^{90}$Zr).\n\n\n\n\n\n\\subsection{Fitting procedure}\n\n\\begin{figure}[htb]\n\\begin{center}\n\\includegraphics[width= \\linewidth]{comp_fit_Swi.pdf}\n\\end{center}\n\\caption{$^{40}$Ca+$^{96}$Zr {\\tt CCFULL} fusion cross section calculation with (red triangles) and without (blue crosses) collective excitation. The function of eq. \\eqref{eq:fct_fit_swi} is adjusted to those cross sections and shown, respectively, with a red dotted line and a blue solid line. } \n\\label{fig:comp_fit_Swi}\n\\end{figure}\n\n\\begin{figure*}[htb]\n\\begin{center}\n\\includegraphics[width= \\linewidth]{cross.pdf}\n\\end{center}\n\\caption{ Examples of the application of the integration method for several reactions. The black dashed line represents the $g(E)$ function and the red crosses the experimental data. The experimental data are taken from Refs. \\cite{Tim98,Sca00,Mor00,Mor99,Jia12}.
} \n\\label{fig:cross}\n\\end{figure*}\n\n\nAnother method to determine the parameters of the barrier is to fit the experimental data with a parametrization of the fusion cross section \\cite{Swi05},\n\\begin{align}\n\t\\sigma_{\\rm fus} = \\pi R_B^2 \\frac{\\sigma_B}{E\\sqrt{2\\pi}} [X\\sqrt{\\pi} (1+ {\\rm erf} X ) + \\exp(-X^2)],\\label{eq:fct_fit_swi}\n\\end{align}\nwith $X=\\frac{E-B_{0}}{\\sqrt{2}\\sigma_B}$.\nThis parametrization of the fusion cross section corresponds to a Gaussian barrier distribution\nwith standard deviation $\\sigma_B$. The parameters of this function are adjusted on the fusion\ncross section obtained with the {\\tt CCFULL} program. In the case with the excitations, the parameters\nare $R_B$=11.47 fm, $B_0$=93.66 MeV and $\\sigma_B$=2.08 MeV. In the case\nwhere the excitations are not taken into account, $R_B$=12.85 fm, $B_0$=100.4 MeV\nand $\\sigma_B$=1.18 MeV.\n\nIn the second case, the value of $\\sigma_B$ is very close to the one in Fig. \\ref{fig:sigma_fct_delta_E} in the limit of small $\\Delta E$. In the case of a single, almost Gaussian barrier, the two methods give the same result. But with the cross section generated with structure effects, the barrier is no longer Gaussian, and the fit strongly underestimates the barrier fluctuations: $\\sigma_B$=2.08 MeV instead of about 4.5 MeV with the direct calculation. \n\nTo go beyond this approach, the fusion cross section is fitted with a sum of two functions of Eq. \\eqref{eq:fct_fit_swi}, which is equivalent to assuming that the barrier is composed of a sum of two Gaussians. Then the barrier width is determined by Eq. \\eqref{eq:definition_sigmaB}.\nWith this method, the barrier fluctuations are 3.44 MeV.\n This result is closer to the correct value, but still underestimates the real fluctuations of the barrier.
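Eq. \eqref{eq:fct_fit_swi} is straightforward to implement; the sketch below (our function name) can be passed to a standard least-squares fitter such as \texttt{scipy.optimize.curve\_fit}:

```python
import math

def sigma_fus(E, R_B, B0, sigma_B):
    """Fusion cross section for a Gaussian barrier distribution of
    centroid B0 and width sigma_B (energies in MeV; the result carries
    the area units of pi * R_B**2)."""
    X = (E - B0) / (math.sqrt(2.0) * sigma_B)
    bracket = X * math.sqrt(math.pi) * (1.0 + math.erf(X)) + math.exp(-X * X)
    return math.pi * R_B**2 * sigma_B / (E * math.sqrt(2.0 * math.pi)) * bracket
```

Well above the barrier the expression tends to the classical sharp-barrier limit, which provides a quick sanity check on the implementation.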
Note that the interesting method of Bayesian spectral deconvolution \\cite{Hag16} could improve the present fitting procedure, but seems to be too complex to be used for a systematic analysis.\n\n\n\n\n\n\n\n\\section{Systematic analysis}\n\\label{Sec:syst}\n\n\nThe two methods (fitting procedure with two Gaussians and integral method) have been systematically applied to a large number of experimental data from the database \\cite{nrv}. From these data, 115 reactions for which the slope above the barrier can be reasonably well determined have been selected. The main selection has been done on the uncertainties of the results. Only systems for which the uncertainty on the value of $\\sigma_B$ is lower than 0.75 MeV have been analyzed.\nA comparison between the results obtained by both methods is shown in Fig. \\ref{fig:comp_fit_integral}. A good agreement is found between the two methods. For 73\\% of the reactions, the two methods give results that differ by less than 0.5 MeV.\n\n\n\\begin{figure}[htb]\n\\begin{center}\n\\includegraphics[width= \\linewidth]{comp_fit_integral.pdf}\n\\end{center}\n\\caption{ Fluctuations of the barrier $\\sigma_B$ determined with the integral method as a function of the fit method. } \n\\label{fig:comp_fit_integral}\n\\end{figure}\n\n\nIn order to analyze the data, I define the parameter $S$ that reflects the superfluidity of the reaction.\nFor one reaction, this parameter is computed as follows: starting with $S=0$, if $N_1$ and $N_2$ are both non-magic, $S$ is changed to 1; then, if $Z_1$ and $Z_2$ are both non-magic, $S$ is incremented by 1. $N_1$ and $N_2$ are the neutron numbers of the two nuclei. $Z_1$ and $Z_2$ are the proton numbers. The magic numbers taken here are $\\{ 8, 20, 28, 50, 82, 126 \\}$.\n\nThe parameter $S$ can thus take three values: 0, 1 and 2. If it is assumed that only nuclei with non-magic numbers are superfluid, then, for systems with $S=0$, no increase of the fluctuations of the barrier is expected.
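The counting rule defining $S$ can be made explicit as follows (a direct transcription of the definition above; the function name is ours):

```python
MAGIC = {8, 20, 28, 50, 82, 126}

def superfluidity_parameter(Z1, N1, Z2, N2):
    """S = 0, 1 or 2: one unit if both neutron numbers are non-magic,
    plus one unit if both proton numbers are non-magic."""
    S = 0
    if N1 not in MAGIC and N2 not in MAGIC:
        S += 1
    if Z1 not in MAGIC and Z2 not in MAGIC:
        S += 1
    return S
```

For instance, $^{58}$Ni+$^{60}$Ni ($Z$=28 magic, $N$=30 and 32 non-magic) gives $S$=1, and $^{32}$S+$^{110}$Pd gives $S$=2.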
For systems with $S=1$ or $S=2$, by contrast, it can be expected that the superfluidity will increase the fluctuations of the barrier, and that the effect will be larger with $S=2$, where both neutrons and protons of each fragment are supposed to be in the superfluid phase.\n\nIn order not to mix the superfluid effects with the fusion hindrance, only systems with $Z_1Z_2<$1500 are selected. A naive comparison of the different systems with different values of $S$ is shown in Fig. \\ref{fig:sigma_fct_z}, where the $\\sigma_B$ obtained with the integral method is plotted as a function of the parameter $z=\\frac{Z_1 Z_2}{A_1^{1\/3}+A_2^{1\/3}}$. For reactions with $z$ below 80, no effects are seen and all the reactions have small fluctuations of about 2 MeV. For systems with $z>80$, three groups can be identified: those with small $\\sigma_B$ around 2 or 3 MeV, which are mainly $S=0$ systems; those with $\\sigma_B$ around 3 or 4 MeV, which are mainly $S$=1; and the last group, around 5 MeV, which is mainly composed of systems with $S=2$. \n\n\\begin{figure}[htb]\n\\begin{center}\n\\includegraphics[width= \\linewidth]{sigma_fct_z.pdf}\n\\end{center}\n\\caption{ Fluctuations of the barrier $\\sigma_B$ determined with the integral method as a function of the $z$ parameter. } \n\\label{fig:sigma_fct_z}\n\\end{figure}\n\n\n\nThis first result thus corresponds to the expected tendency $\\sigma_B^{S=2}>\\sigma_B^{S=1}>\\sigma_B^{S=0}$. Nevertheless, this analysis neglects all the other effects that play a role in the determination of the fluctuations, in particular the deformation, which is also related to the magicity of the initial fragments.\n\n\\begin{figure}[htb]\n\\begin{center}\n\\includegraphics[width= \\linewidth]{comp_sigma.pdf}\n\\end{center}\n\\caption{ Fluctuations of the barrier $\\sigma_B$ determined with the integral method as a function of the estimated barrier width from Ref. \\cite{Siw04}.
} \n\\label{fig:comp_sigma}\n\\end{figure}\n\nIn order to take into account those effects, an estimate of $\\sigma_B$ is computed from the model of Ref. \\cite{Siw04}. This model takes into account three sources of fluctuations of the barrier: (i) the tunneling effect, (ii) the static deformation, and (iii) the vibrations. Then, the total width of the barrier is computed as the convolution of these effects for each fragment (1) and (2),\n\\begin{align}\n(\\sigma_B^{\\rm Siw})^2 &= {\\sigma_{\\rm Tunnel}}^2 + \\sigma_{\\rm Static}(1)^2 + \\sigma_{\\rm Static}(2)^2 \\nonumber \\\\\n&+ \\sigma_{\\rm Vib.}(1)^2 + \\sigma_{\\rm Vib.}(2)^2 . \\label{eq:sigma_siw}\n\\end{align} \nThe formulas for each of the terms are given in Ref. \\cite{Siw04}. This model is empirical and has several parameters adjusted on experimental data for a large number of systems. For each reaction, the total width of the barrier is computed only from the input $A_1$, $A_2$, $Z_1$, $Z_2$ and the $\\beta_2$ of each of the fragments. The $\\beta_2$ values are taken from the M\\\"oller table \\cite{Mol95}.\n\n\n\nBecause this last model does not take into account the effect of the superfluidity, it is expected for the systems with $S=1$ or $S=2$ that \nthe empirical model will underestimate the fluctuations of the barrier ($\\sigma_{\\rm exp.} > \\sigma_{\\rm Siw.}$).\nA comparison between the experimental values of the fluctuations of the barrier and the values obtained from the empirical model is made in Fig. \\ref{fig:comp_sigma}. \n\n\n From this comparison, one can observe the following. i) On average, the Siwek-Wilczynska model underestimates the width of the barrier. This is due to the tendency of the fitting procedure used in Ref. \\cite{Siw04} to underestimate the barrier width and to the larger number of reactions studied here.\n ii) The experimental fluctuations of the barrier are in the range of 0 to 6 MeV. There is no system that is compatible with very large fluctuations of the order of 10 MeV.
iii) A clear effect of the superfluidity is found in several reactions with $S=$1 or 2, which are found to have a larger barrier width than the expected value from the Siwek-Wilczynska model and from the general trend of systems with $S$=0.\n\n\n\n\n\\begin{table}[h]\n\\caption{ Systems with $S$=1 or 2 where an enhancement of the fluctuations of the barrier by more than 1 MeV is found. The values of the $\\sigma$ are given in MeV. The type of experiment, evaporated residue (EvR) or fusion-fission (FF), is shown in the last column. }\n\\centering \\begin{tabular}{|c|c|c|c|c|c|}\n\\hline\\hline\nReaction & $\\;S\\:$ &$\\sigma_{\\rm Siw.}$ & $\\sigma^{\\rm integ.}_{\\rm exp.}$ & $\\;$ Ref. $\\;$ & exp. \\\\\n \\hline\n$^{40}$Ar+$^{144}$Sm\t& 1 & 2.31 & 4.39 $\\pm$ 0.44 & \\cite{Rei85} & EvR+FF \\\\\n$^{32}$S+$^{138}$Ba \t& 1 & 2.06 & 3.11 $\\pm$ 0.35 & \\cite{Gil95} & EvR+FF \\\\\n$^{40}$Ar+$^{122}$Sn \t& 1 & 1.94 & 3.41 $\\pm$ 0.44 & \\cite{Rei85} & EvR+FF \\\\\n$^{32}$S+$^{120}$Sn \t& 1 & 1.94 & 3.39 $\\pm$ 0.52 & \\cite{Tri01} & EvR \\\\\n$^{58}$Ni+$^{94}$Zr \t& 1 & 2.59 & 4.28 $\\pm$ 0.34 & \\cite{Sca91} & EvR \\\\\n$^{58}$Ni+$^{60}$Ni \t& 1 & 1.87 & 3.94 $\\pm$ 0.12 & \\cite{Ste95b} & EvR \\\\\n$^{19}$F+$^{93}$Nb \t& 1 & 1.44 & 3.23 $\\pm$ 0.25 & \\cite{Pra96} & EvR \\\\\n$^{40}$Ar+$^{154}$Sm \t& 2 & 4.27 & 5.28 $\\pm$ 0.23 & \\cite{Rei85} & EvR+FF \\\\\n$^{40}$Ar+$^{148}$Sm \t& 2 & 3.15 & 4.79 $\\pm$ 0.37 & \\cite{Rei85} & EvR+FF \\\\\n$^{32}$S+$^{110}$Pd \t& 2 & 2.65 & 4.69 $\\pm$ 0.09 & \\cite{Ste95} & EvR \\\\\n$^{40}$Ar+$^{110}$Pd \t& 2 & 2.90 & 4.62 $\\pm$ 0.73 & \\cite{Jah82} & EvR \\\\\n$^{32}$S+$^{96}$Zr \t& 2 & 2.46 & 4.35 $\\pm$ 0.05 & \\cite{Zha10} & EvR \\\\\n$^{32}$S+$^{94}$Zr \t& 2 & 1.79 & 3.34 $\\pm$ 0.08 & \\cite{Jia14} & EvR \\\\\n$^{28}$Si+$^{178}$Hf \t& 2 & 4.11 & 5.22 $\\pm$ 0.18 & \\cite{But02} & EvR+FF \\\\\n$^{28}$Si+$^{92}$Zr \t& 2 & 1.68 & 2.77 $\\pm$ 0.07 & \\cite{New01} & EvR \\\\\n
 \\hline\\hline\n\\end{tabular}\n\\label{Tab:S2_value_inc}\n\\end{table}\n\n\n \nTable \\ref{Tab:S2_value_inc} presents systems with $S$=1 or 2 that have larger fluctuations of the barrier than the estimated value from the model. The table is given here in order to guide future microscopic applications of TDHFB or other models that aim to quantitatively reproduce the effect of the superfluidity on the barrier.\n\n\n\n\\begin{figure}[htb]\n\\begin{center}\n\\includegraphics[width= \\linewidth]{comp_sigma_fit.pdf}\n\\end{center}\n\\caption{ Same as Fig. \\ref{fig:comp_sigma} with the $\\sigma_{\\exp}$ determined with the fitting procedure. } \n\\label{fig:comp_sigma_fit_alpha}\n\\end{figure}\n\nIn order to confirm the results of Fig. \\ref{fig:comp_sigma}, the same analysis is done with the fitting method in Fig. \\ref{fig:comp_sigma_fit_alpha}. The results of the fitting method are expected to be of lower quality, but the method is more tolerant of the quality and quantity of points in the experimental data. This systematic analysis then includes 194 reactions. These results confirm the enhancement of the fluctuations of the barrier for systems where $S$=1 or 2. Note that the points with $S$=0 which present a large width of the barrier are absent from Fig. \\ref{fig:comp_sigma} because they have too large uncertainties.\n\n\n\\begin{figure}[htb]\n\\begin{center}\n\\includegraphics[width= \\linewidth]{R_B_fctA13.pdf}\n\\end{center}\n\\caption{ Experimental fusion radius computed as eq. \\eqref{eq:comp_R_B}. The solid line represents the function $R_B=1.30 (A_1^{1\/3}+A_2^{1\/3})$. } \n\\label{fig:R_B_fctA13}\n\\end{figure}\n\n\nTo finish this empirical analysis, the effect of the superfluidity on the fusion radius and on the centroid of the barrier distribution is investigated. \nIn Fig.
\\ref{fig:R_B_fctA13}, the fusion radii of all the selected reactions, including the systems where an effect of the fusion hindrance is expected ($Z_1Z_2>$1500), are shown. Those last reactions do not follow the general trend $R_B\\simeq 1.3 (A_1^{1\/3}+A_2^{1\/3}) $ and present a small radius in the range $3$\n\n\\begin{proposition}\nAn $\\left(\\!\\left(n,K\\!:\\!M\\right)\\!\\right)_{q}$ hybrid code with $M>1$ can detect more errors than an $\\left(\\!\\left(n, KM\\right)\\!\\right)_{q}$ quantum code.\n\\end{proposition}\n\\begin{proof}\nIt is clear that any linear combination of detectable errors is detectable. If we choose a basis adapted to the orthogonal decomposition $H=\\mathcal{C} \\oplus \\mathcal{C}^{\\perp}$ with $$\\mathcal{C}=\\mathcal{C}_{1}\\oplus\\mathcal{C}_{2}\\oplus \\cdots \\oplus \\mathcal{C}_{M},$$ then an error $E$ is represented by a matrix of the form \n$$ \n\\left(\\begin{array}{cc}\nA & R \\\\ \nS & T\n\\end{array}\\right),\n$$\nwhere the blocks $A$ and $T$ correspond to the subspaces $\\mathcal{C}$ and $\\mathcal{C}^{\\perp}$ respectively. Since $E$ is detectable, the $MK\\times MK$ matrix $A$ must satisfy $$A = \\lambda_{E,1} 1_K \\oplus \\lambda_{E,2} 1_{K} \\oplus \\cdots \\oplus\n\\lambda_{E,M} 1_{K},$$ where $1_{K}$ denotes a $K\\times K$ identity matrix, but $R$, $S$, and $T$ can be arbitrary. Therefore, the dimension of the vector space of detectable errors is given by $q^{2n} - \\left(MK\\right)^{2} + M$.\n\nIn the case of an $\\left(\\!\\left(n, KM\\right)\\!\\right)_{q}$ quantum code, $A$ must satisfy $A=\\lambda_{E} 1_{KM}$, so the vector space of detectable errors has dimension $q^{2n}-\\left(KM\\right)^{2}+1$, which is strictly less than $q^{2n} - \\left(MK\\right)^{2} + M$ when $M>1$.\n\\end{proof}\n\nWe briefly recall the concept of a nice error basis (see \\cite{Klappenecker2002, Klappenecker2003, Knill1996} for further details), so that we can define a suitable notion of weight for the errors. 
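The dimension count in the proof above can be checked in a few lines; the sketch below is ours, with parameter values chosen arbitrarily for illustration:

```python
# Dimension of the space of detectable errors (from the proof above):
#   hybrid  ((n, K:M))_q : q^(2n) - (MK)^2 + M
#   quantum ((n, KM))_q  : q^(2n) - (MK)^2 + 1
q, n, K, M = 2, 6, 2, 2  # sample parameters, chosen arbitrarily

hybrid_dim = q**(2*n) - (M*K)**2 + M
quantum_dim = q**(2*n) - (K*M)**2 + 1

# The hybrid code detects a strictly larger space of errors whenever M > 1.
assert hybrid_dim - quantum_dim == M - 1
print(hybrid_dim, quantum_dim)  # -> 4082 4081
```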
Let ${G}$ be a group of order $q^{2}$ with identity element~1 and $\\mathcal{U}\\!\\left(q\\right)$ be the group of $q\\times q$ unitary matrices. A \\textit{nice error basis}\\\/ on $\\mathbb{C}^{q}$ is a set ${\\cal E}=\\{\\rho(g)\\in {\\cal U}(q) \\,|\\, g\\in {G}\\}$ of unitary matrices such that\n\\begin{tabbing}\ni)\\= (iiiii) \\= \\kill\n\\>(i) \\> $\\rho(1)$ is the identity matrix,\\\\[1ex]\n\\>(ii) \\> $\\trace\\rho(g)=0$ for all $g\\in G\\setminus \\{1\\}$,\\\\[1ex]\n\\>(iii) \\> $\\rho(g)\\rho(h)=\\omega(g,h)\\,\\rho(gh)$ for all $g,h\\in{G}$,\n\\end{tabbing}\nwhere $\\omega(g,h)$ is a nonzero complex number depending on $(g,h)\\in G\\times G$; the function $\\omega\\colon G\\times G\\rightarrow\\mathbb{C}^\\times$ is called the factor system of $\\rho$. We call $G$ the \\textit{index group}\\\/ of the error basis ${\\cal E}$. The nice error basis that we have introduced so far generalizes the Pauli basis to systems with $q\\ge 2$ levels. \n\nWe can obtain a nice error basis $\\mathcal{E}_n$ on $H\\cong \\mathbb{C}^{q^n}$ by tensoring $n$ elements of $\\mathcal{E}$, so $$ \\mathcal{E}_n = \\mathcal{E}^{\\otimes n} = \\{ E_1 \\otimes E_2\\otimes\\cdots \\otimes E_n \\mid E_k \\in \\mathcal{E}, 1\\le k\\le n\\}.$$ The weight of an element in $\\mathcal{E}_n$ is the number of non-identity tensor components. We write $\\wt(E)=d$ to denote that the element $E$ in $\\mathcal{E}_n$ has weight $d$. A hybrid code with parameters $\\left(\\!\\left(n,K\\!:\\!M,d\\right)\\!\\right)_{q}$ has \\emph{minimum distance} $d$ if it can detect all errors of weight less than $d$.\n\n\\begin{example}\n\\label{nonaddex}\nTo construct our nonadditive hybrid code $\\mathcal{C}$ we will combine two known degenerate stabilizer codes. 
The first code $\\mathcal{C}_{a}$ is the $\\left[\\!\\left[6,1,3\\right]\\!\\right]_{2}$ code constructed by extending the $\\left[\\!\\left[5,1,3\\right]\\!\\right]_{2}$ Hamming code, see \\cite{Calderbank1998}, where the stabilizer is given by $$\\left\\langle XXZIZI, ZXXZII, IZXXZI, ZIZXXI, IIIIIX\\right\\rangle.$$ The second code $\\mathcal{C}_{b}$ is a $\\left[\\!\\left[6,1,3\\right]\\!\\right]_{2}$ code not equivalent to $\\mathcal{C}_{a}$, see \\cite{Shaw2008}. Its stabilizer is given by $$\\left\\langle YIZXXY, ZXIIXZ, IZXXXX, IIIZIZ, ZZZIZI\\right\\rangle.$$\n\nWe can check that these two codes are indeed orthogonal to each other. The resulting code $\\mathcal{C}$ is a $\\left(\\!\\left(6,2\\!:\\!2,1\\right)\\!\\right)_{2}$ nonadditive hybrid code, since there are several errors of weight one such that $P_{b}EP_{a}\\neq0$, for example $E=IIIIXI$. This shows that even though $\\mathcal{C}_{a}$ and $\\mathcal{C}_{b}$ are optimal quantum codes on their own, together they make a hybrid code with an extremely poor minimum distance. Later we will see how to construct hybrid codes with better minimum distances.\n\\end{example}\n\n\\subsection{Genuine Hybrid Codes}\n\nIn general, it is not difficult to construct hybrid codes using quantum stabilizer codes. As Grassl et al. 
\\cite{Grassl2017} pointed out, there are three simple constructions of hybrid codes that do not offer any real advantage over quantum error-correcting codes:\n\n\\begin{proposition}[{\\cite{Grassl2017}}]\\label{trivcon}\nHybrid codes can be constructed using the following ``trivial'' constructions:\n\\begin{enumerate}\n\\item Given an $\\left(\\!\\left(n,KM,d\\right)\\!\\right)_{q}$ quantum code of composite dimension $KM$, there exists a hybrid code with parameters $\\left(\\!\\left(n,K\\!:\\!M,d\\right)\\!\\right)_{q}$.\n\\item Given an $\\left[\\!\\left[n,k\\!:\\!m,d\\right]\\!\\right]_{q}$ hybrid code with $k>0$, there exists a hybrid code with parameters $\\left[\\!\\left[n,k-1\\!:\\!m+1,d\\right]\\!\\right]_{q}$.\n\\item Given an $\\left[\\!\\left[n_{1},k_{1},d\\right]\\!\\right]_{q}$ quantum code and an $\\left[n_{2},m_{2},d\\right]_{q}$ classical code, there exists a hybrid code with parameters $\\left[\\!\\left[n_{1}+n_{2},k_{1}\\!:\\!m_{2},d\\right]\\!\\right]_{q}$.\n\\end{enumerate}\n\\end{proposition}\n\nWe say that a hybrid code is \\emph{genuine} if it cannot be constructed using one of the above constructions, following the work of Yu et al. on genuine nonadditive codes \\cite{Yu2015}. We also refer to a hybrid stabilizer code that provides an advantage over quantum stabilizer codes as a genuine hybrid stabilizer code. While all known genuine hybrid codes are in fact hybrid stabilizer codes, the linear programming bounds in Section \\ref{lpb} do not prohibit genuine nonadditive hybrid codes, and may give us some hints as to their parameters.\n\nMultiple genuine hybrid stabilizer codes with small parameters were constructed by Grassl et al. in \\cite{Grassl2017}, all of which have degenerate inner codes. 
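The parameter bookkeeping of these three trivial constructions can be sketched as simple functions (a hypothetical illustration of ours; the tuples only track parameters, not the codes themselves):

```python
def trivial_split(n, K, M, d):
    """Construction 1: an ((n, K*M, d))_q quantum code of composite
    dimension yields an ((n, K:M, d))_q hybrid code."""
    return (n, K, M, d)

def trivial_trade(n, k, m, d):
    """Construction 2: trade one quantum (qu)dit of an [[n, k:m, d]]_q
    code, k > 0, for one classical digit."""
    assert k > 0
    return (n, k - 1, m + 1, d)

def trivial_concat(n1, k1, d, n2, m2):
    """Construction 3: append an [n2, m2, d]_q classical code to an
    [[n1, k1, d]]_q quantum code."""
    return (n1 + n2, k1, m2, d)

# e.g. a [[7, 2, 2]] code viewed as a hybrid code, then trading a qubit:
assert trivial_trade(*trivial_split(7, 2, 1, 2)) == (7, 1, 2, 2)
```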
Having degenerate inner codes can allow for a more efficient packing of the inner codes inside the outer code than is possible when using nondegenerate codes, giving a hybrid code with parameters superior to those using the first construction of Proposition \\ref{trivcon}. However, they do not exclude the possibility that there is a genuine hybrid code where all of the inner codes are nondegenerate. Here, we show that for a genuine hybrid code, at least one of its inner codes must be impure. Recall that a quantum code is \\emph{pure} if trace-orthogonal errors map the code to orthogonal subspaces. A code that is not pure is called \\emph{impure}.\n\n\\begin{proposition}\nSuppose $\\mathcal{C}$ is a genuine $\\left(\\!\\left(n,K\\!:\\!M,d\\right)\\!\\right)_{q}$ hybrid code. Then at least one inner code $\\mathcal{C}_{m}$ of the hybrid code $\\mathcal{C}$ is impure.\n\\end{proposition}\n\\begin{proof}\nSeeking a contradiction, suppose that every inner code of the hybrid code $\\mathcal{C}$ is pure. For $m\\in\\left[M\\right]$, let $P_{m}$ denote the orthogonal projector onto the $m$-th inner code of the hybrid code $\\mathcal{C}$. For every nonscalar error operator $E$ of weight less than $d$, we have \n$$ P_{a} E P_{b} = 0,$$\nwhere $a, b\\in\\left[M\\right]$. Let $P=P_{1} + P_{2} + \\cdots + P_{M}$ denote the projector onto the $KM$-dimensional vector space spanned by the inner codes. Then \n$$ PEP=0,$$\nso the image of $P$ is an $\\left(\\!\\left(n,KM,d\\right)\\!\\right)_{q}$ quantum code, contradicting that the hybrid code $\\mathcal{C}$ is genuine. \n\\end{proof}\n\nSince for stabilizer codes the definitions of impure and degenerate codes coincide, genuine hybrid stabilizer codes necessarily require that one of the inner codes is degenerate. Therefore, one of the difficulties in constructing families of genuine codes is finding nontrivial degenerate codes. 
Unfortunately, there are few known families of impure or degenerate codes, see for example \\cite{Aly2006, Aly2007}, and they typically have minimum distances much lower than optimal quantum codes, suggesting they are not particularly suitable to use in constructing genuine hybrid codes.\n\n\\subsection{Hybrid Stabilizer Codes}\n\nAll of the hybrid codes constructed by Grassl et al. \\cite{Grassl2017} were given using the codeword stabilizer (CWS)\/union stabilizer framework, see \\cite{Cross2009, Grassl2008}, which we will briefly describe here. Starting with a quantum code $\\mathcal{C}_{0}$, we choose a set of $M$ coset representatives $t_{i}$ from the normalizer of $\\mathcal{C}_{0}$ (we will always take $t_{1}$ to be $I$), and then construct the code $$\\mathcal{C}=\\bigcup\\limits_{i\\in\\left[M\\right]}t_{i}\\mathcal{C}_{0}.$$ In the case of hybrid codes, $t_{i}\\mathcal{C}_{0}$ are our inner codes and $\\mathcal{C}$ is our outer code. If both $\\mathcal{C}_{0}$ and $\\mathcal{C}$ are stabilizer codes, we say that $\\mathcal{C}$ is a hybrid stabilizer code.\n\nThe generators that define a hybrid code can be divided into those that generate the quantum stabilizer $\\mathcal{S}_{\\mathcal{Q}}$ which stabilizes the outer code $\\mathcal{C}$ and those that generate the classical stabilizer $\\mathcal{S}_{\\mathcal{C}}$ which together with $\\mathcal{S}_{\\mathcal{Q}}$ stabilizes the inner code $\\mathcal{C}_{0}$ \\cite{Kremsky2008}. The generators that define the $\\left[\\!\\left[7,1\\!:\\!1,3\\right]\\!\\right]_{2}$ hybrid stabilizer code given in \\cite{Grassl2017} are given in (\\ref{gen7}), where the generators of $\\mathcal{S}_{\\mathcal{Q}}$ are given above the dotted line, the generators of $\\mathcal{S}_{\\mathcal{C}}$ are between the dotted and solid line, the normalizer of the inner code $\\mathcal{C}_{0}$ is generated by all elements above the double line, and the normalizer of the outer code is generated by all of the elements. 
\n\n\\begin{equation}\n\\label{gen7}\n\\left(\\mkern-5mu\n\\begin{tikzpicture}[baseline=-.5ex]\n\\matrix[\n matrix of math nodes,\n column sep=.25ex, row sep=-.25ex\n] (m)\n{\nX & I & I & Z & Y & Y & Z \\\\\nZ & X & I & X & Z & I & X \\\\\nZ & I & X & X & I & Z & X \\\\\nZ & I & Z & Z & X & I & I \\\\\nI & Z & I & Z & I & X & X \\\\\nZ & I & I & I & I & I & X \\\\\nI & I & I & X & Z & Z & X \\\\\nI & I & I & Z & X & X & I \\\\\nI & I & I & I & X & Y & Y \\\\\n};\n\\draw[line width=1pt, line cap=round, dash pattern=on 0pt off 2\\pgflinewidth]\n ([yshift=.2ex] m-5-1.south west) -- ([yshift=.2ex] m-5-7.south east);\n\\draw[line width=.5pt]\n ([yshift=.2ex] m-6-1.south west) -- ([yshift=.2ex] m-6-7.south east);\n\\draw[line width=.5pt]\n ([yshift=.22ex] m-8-1.south west) -- ([yshift=.2ex] m-8-7.south east);\n\\draw[line width=.5pt]\n ( m-8-1.south west) -- ( m-8-7.south east);\n\\end{tikzpicture}\\mkern-5mu\n\\right)\n\\end{equation}\n\nFollowing Kremsky et al. \\cite{Kremsky2008}, we will often only include the stabilizer generators, as they are sufficient to fully define the hybrid code, as shown in the following proposition:\n\n\\begin{proposition}\n\\label{hybgenconstr}\nLet $\\mathcal{C}$ be an $\\left[\\!\\left[n,k\\!:\\!m,d\\right]\\!\\right]_{p}$ hybrid stabilizer code over a finite field of prime order $p$ with quantum stabilizer $\\mathcal{S}_{\\mathcal{Q}}$ and classical stabilizer $\\mathcal{S}_{\\mathcal{C}}=\\left\\langle g_{1}^{\\mathcal{C}}, \\dots, g_{m}^{\\mathcal{C}}\\right\\rangle$. 
Then the stabilizer code $\\mathcal{C}_{c}$ associated with classical message $c\\in\\mathbb{F}_{p}^{m}$ is given by the stabilizer $$\\left\\langle \\mathcal{S}_{\\mathcal{Q}}, \\omega^{c_{1}}g_{1}^{\\mathcal{C}}, \\dots, \\omega^{c_{m}}g_{m}^{\\mathcal{C}}\\right\\rangle,$$ where $c_{i}$ is the $i$-th entry of $c$ and $\\omega$ is a primitive complex $p$-th root of unity.\n\\end{proposition}\n\\begin{proof}\nThere are $p^{k+m}$ codewords stabilized by $\\mathcal{S}_{\\mathcal{Q}}$. Each of these codewords is an eigenvector of $g_{i}^{\\mathcal{C}}$, which naturally partitions the code into $p$ cosets based on eigenvalues. Repeating this with all of the classical generators, we get $p^{m}$ cosets of codewords, each of size $p^{k}$. Since $v$ being an eigenvector of $g_{i}^{\\mathcal{C}}$ with eigenvalue $\\omega^{-1}$ means that it is a $+1$ eigenvector of $\\omega g_{i}^{\\mathcal{C}}$, each coset is the $+1$ eigenspace of a stabilizer of the form $\\left\\langle \\mathcal{S}_{\\mathcal{Q}}, \\omega^{c_{1}}g_{1}^{\\mathcal{C}}, \\dots, \\omega^{c_{m}}g_{m}^{\\mathcal{C}}\\right\\rangle$, where the string $c\\in\\mathbb{F}_{p}^{m}$ can be used to index the stabilizer codes.\n\\end{proof}\n\n\\section{Weight Enumerators and\\\\ Linear Programming Bounds}\n\nWeight enumerators for quantum codes were introduced by Shor and Laflamme \\cite{Shor1997}, and as with their classical counterparts they can be used to give good bounds on code parameters using linear programming, see \\cite{Ashikhmin1999, Ketkar2006}. Grassl et al. \\cite{Grassl2017} gave weight enumerators and linear programming bounds for hybrid stabilizer codes, but these weight enumerators will not work for nonadditive hybrid codes such as the one given in Example \\ref{nonaddex}. 
In this section, we define weight enumerators for general hybrid codes following the approach of Shor and Laflamme \\cite{Shor1997} and Rains \\cite{Rains1998}, and use them to derive linear programming bounds for such codes.\n\n\\subsection{Weight Enumerators}\n\nFor an $\\left(\\!\\left(n,K\\!:\\!M,d\\right)\\!\\right)_{q}$ hybrid code $\\mathcal{C}$ defined by the projector $P=P_{1}+\\cdots+P_{M}$ and a nice error basis $\\mathcal{E}_{n}$ as defined in Section \\ref{seced}, we define the two weight enumerators of the code following Shor and Laflamme \\cite{Shor1997}: $$A\\!\\left(z\\right)=\\sum\\limits_{d=0}^{n}A_{d}z^{d}\\text{ and }B\\!\\left(z\\right)=\\sum\\limits_{d=0}^{n}B_{d}z^{d},$$ where the coefficients are given by $$A_{d}=\\frac{1}{K^{2}M^{2}}\\sum\\limits_{\\substack{E\\in \\mathcal{E}_n\\\\ \\wt(E)=d}}\\Tr\\!\\left(EP\\right)\\Tr\\!\\left(E^{*}P\\right)$$ and $$B_{d}=\\frac{1}{KM}\\sum\\limits_{\\substack{E\\in \\mathcal{E}_n\\\\ \\wt(E)=d}}\\Tr\\!\\left(EPE^{*}P\\right).$$\n\nWe can also define weight enumerators using the inner code projectors $P_{a}$. Let $$A^{\\left(a,b\\right)}\\!\\left(z\\right)=\\sum\\limits_{d=0}^{n}A_{d}^{\\left(a,b\\right)}z^{d}\\text{ and }B^{\\left(a,b\\right)}\\!\\left(z\\right)=\\sum\\limits_{d=0}^{n}B_{d}^{\\left(a,b\\right)}z^{d},$$ where $$A_{d}^{\\left(a,b\\right)}=\\frac{1}{K^{2}}\\sum\\limits_{\\substack{E\\in \\mathcal{E}_n\\\\ \\wt(E)=d}}\\Tr\\!\\left(EP_{a}\\right)\\Tr\\!\\left(E^{*}P_{b}\\right)$$ and $$B_{d}^{\\left(a,b\\right)}=\\frac{1}{K}\\sum\\limits_{\\substack{E\\in \\mathcal{E}_n\\\\ \\wt(E)=d}}\\Tr\\!\\left(EP_{a}E^{*}P_{b}\\right).$$ Note that $A^{\\left(a,a\\right)}\\!\\left(z\\right)$ and $B^{\\left(a,a\\right)}\\!\\left(z\\right)$ are the weight enumerators of the quantum code associated with projector $P_{a}$. 
We can then write the weight enumerators for the outer code in terms of the weight enumerators for the inner codes:\n\n\\begin{lemma}\nThe weight enumerators of $\\mathcal{C}$ can be written as $$A\\!\\left(z\\right)=\\frac{1}{M^{2}}\\sum\\limits_{a,b=1}^{M}A^{\\left(a,b\\right)}\\!\\left(z\\right)\\text{ and }B\\!\\left(z\\right)=\\frac{1}{M}\\sum\\limits_{a,b=1}^{M}B^{\\left(a,b\\right)}\\!\\left(z\\right).$$\n\\end{lemma}\n\\begin{proof}\nBy linearity of the projector $P$ we have \\begin{align*} A_{d} & = \\frac{1}{K^{2}M^{2}}\\sum\\limits_{\\substack{E\\in \\mathcal{E}_n\\\\ \\wt(E)=d}}\\Tr\\!\\left(EP\\right)\\Tr\\!\\left(E^{*}P\\right) \\\\ & = \\frac{1}{K^{2}M^{2}}\\sum\\limits_{\\substack{E\\in \\mathcal{E}_n\\\\ \\wt(E)=d}}\\sum\\limits_{a,b=1}^{M}\\Tr\\!\\left(EP_{a}\\right)\\Tr\\!\\left(E^{*}P_{b}\\right) \\\\ & = \\frac{1}{M^{2}}\\sum\\limits_{a,b=1}^{M}A_{d}^{\\left(a,b\\right)}. \\end{align*} We can then rewrite the weight enumerator as \\begin{align*} A\\!\\left(z\\right) & = \\sum\\limits_{d=0}^{n}A_{d}z^{d} \\\\ & = \\frac{1}{M^{2}}\\sum\\limits_{d=0}^{n}\\sum\\limits_{a,b=1}^{M}A_{d}^{\\left(a,b\\right)}z^{d} \\\\ & =\\frac{1}{M^{2}}\\sum\\limits_{a,b=1}^{M}A^{\\left(a,b\\right)}\\!\\left(z\\right). \\end{align*} The result for $B\\!\\left(z\\right)$ follows from the same argument.\n\\end{proof}\n\nWhile the weight enumerator $B\\!\\left(z\\right)$ is the same as the one introduced by the authors in \\cite{Nemec2018}, the weight enumerator $A\\!\\left(z\\right)$ is different. There the $A^{\\left(a,b\\right)}\\!\\left(z\\right)$ weight enumerators with $a\\neq b$ were ignored, causing $A\\!\\left(z\\right)$ and $B\\!\\left(z\\right)$ to not satisfy the MacWilliams identity. The approach presented in this paper is more natural, as it treats both the inner and outer codes as quantum codes. 
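The lemma can be verified numerically on a small random instance; the sketch below (all names and parameter choices are ours) takes $n=2$, $q=2$, $K=1$, $M=2$ with randomly chosen orthonormal inner-code vectors:

```python
import itertools
import numpy as np

# Single-qubit Pauli matrices: a nice error basis for q = 2.
I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
Y = 1j * X @ Z
PAULIS = [I2, X, Y, Z]

n, K, M = 2, 1, 2
# E_n: all n-fold tensor products, tagged with their weight.
errors = []
for idx in itertools.product(range(4), repeat=n):
    E = PAULIS[idx[0]]
    for i in idx[1:]:
        E = np.kron(E, PAULIS[i])
    errors.append((sum(i != 0 for i in idx), E))

# Random orthonormal vectors give the inner-code projectors P_1, ..., P_M.
rng = np.random.default_rng(1)
G = rng.normal(size=(2**n, 2**n)) + 1j * rng.normal(size=(2**n, 2**n))
Q, _ = np.linalg.qr(G)
P = [np.outer(Q[:, a], Q[:, a].conj()) for a in range(M)]
Ptot = sum(P)

def A_d(d, Pa, Pb, k):
    return sum(np.trace(E @ Pa) * np.trace(E.conj().T @ Pb)
               for w, E in errors if w == d).real / k**2

def B_d(d, Pa, Pb, k):
    return sum(np.trace(E @ Pa @ E.conj().T @ Pb)
               for w, E in errors if w == d).real / k

for d in range(n + 1):
    outer_A = A_d(d, Ptot, Ptot, K * M)
    outer_B = B_d(d, Ptot, Ptot, K * M)
    # Lemma: the outer enumerators are averages of the inner ones.
    assert np.isclose(outer_A, sum(A_d(d, P[a], P[b], K)
                                   for a in range(M) for b in range(M)) / M**2)
    assert np.isclose(outer_B, sum(B_d(d, P[a], P[b], K)
                                   for a in range(M) for b in range(M)) / M)
    # Cauchy-Schwarz bound 0 <= A_d <= B_d (up to floating-point error).
    assert outer_A >= -1e-9 and outer_A <= outer_B + 1e-9
```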
The following result may be found in \\cite{Rains1998, Shor1997}, which we include for completeness:\n\n\\begin{lemma}[{\\cite{Rains1998, Shor1997}}]\n\\label{cauchyschwarz}\nLet $\\mathcal{C}$ be a $\\left(\\!\\left(n,K\\!:\\!M\\right)\\!\\right)_{q}$ hybrid code with weight distributions $A_{d}$ and $B_{d}$. Then for all integers $d$ in the range $0\\leq d\\leq n$ and all $a\\in\\left[M\\right]$ we have\n\\begin{enumerate}\n\\item $0\\leq A_{d}\\leq B_{d}$\n\\item $0\\leq A_{d}^{\\left(a,a\\right)}\\leq B_{d}^{\\left(a,a\\right)}$.\n\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\nFor every orthogonal projector $\\Pi:\\mathbb{C}^{q^{n}}\\rightarrow\\mathbb{C}^{q^{n}}$ of rank $K$, we have\n\\begin{equation*}\n0\\leq\\frac{1}{K^{2}}\\sum\\limits_{\\substack{E\\in \\mathcal{E}_n\\\\ \\wt(E)=d}}\\Tr\\!\\left(E\\Pi\\right)\\Tr\\!\\left(E^{*}\\Pi\\right)\n\\end{equation*}\nby the non-negativity of the trace inner product. Furthermore, we can write this inequality in the form\n\\begin{align*}\n0 & \\leq \\frac{1}{K^{2}}\\sum\\limits_{\\substack{E\\in \\mathcal{E}_n\\\\ \\wt(E)=d}}\\Tr\\!\\left(E\\Pi\\right)\\Tr\\!\\left(E^{*}\\Pi\\right) \\\\\n& = \\frac{1}{K^{2}}\\sum\\limits_{\\substack{E\\in \\mathcal{E}_n\\\\ \\wt(E)=d}}\\left\\vert\\Tr\\!\\left(E\\Pi\\right)\\right\\vert^{2} \\\\\n& = \\frac{1}{K^{2}}\\sum\\limits_{\\substack{E\\in \\mathcal{E}_n\\\\ \\wt(E)=d}}\\left\\vert\\Tr\\!\\left(\\left(\\Pi E\\Pi\\right)\\Pi\\right)\\right\\vert^{2}.\n\\end{align*}\nUsing the Cauchy-Schwarz inequality, we obtain\n\\begin{align*}\n0 & \\leq \\frac{1}{K^{2}}\\sum\\limits_{\\substack{E\\in \\mathcal{E}_n\\\\ \\wt(E)=d}}\\Tr\\!\\left(\\left(\\Pi E\\Pi\\right)\\left(\\Pi E\\Pi\\right)^{*}\\right)\\Tr\\!\\left(\\Pi^{*}\\Pi\\right) \\\\\n& = \\frac{1}{K}\\sum\\limits_{\\substack{E\\in \\mathcal{E}_n\\\\ \\wt(E)=d}}\\Tr\\!\\left(E\\Pi E^{*}\\Pi\\right).\n\\end{align*}\nSubstituting $\\Pi=P$ implies (1) and substituting $\\Pi=P_{a}$ implies (2).\n\\end{proof}\n\nThe main utility of 
weight enumerators for quantum codes is that they allow for a complete characterization of the error-correction capability of the code in terms of the minimum distance of the code. In the following proposition, we prove a similar result for the weight enumerators of hybrid codes.\n\n\\begin{proposition}\n\\label{wtenumnecsuf}\nLet $\\mathcal{C}$ be a $\\left(\\!\\left(n,K\\!:\\!M\\right)\\!\\right)_{q}$ hybrid code with weight distributions $A_{d}$ and $B_{d}$. Then $\\mathcal{C}$ can detect all errors in $\\mathcal{E}_{n}$ of weight $d$ if and only if $A_{d}^{\\left(a,a\\right)}=B_{d}^{\\left(a,a\\right)}$ for all $a\\in\\left[M\\right]$ and $B_{d}^{\\left(a,b\\right)}=0$ for all $a,b\\in\\left[M\\right],a\\neq b$.\n\\end{proposition}\n\\begin{proof}\nRecall that an error is detectable by a code if and only if it satisfies the hybrid Knill-Laflamme conditions in Equation (\\ref{klvec}), and that a projector onto one of the inner codes $\\mathcal{C}_{a}$ may be written as $P_{a}=\\sum_{i=1}^{K}\\ket{c_{i}^{\\left(a\\right)}}\\bra{c_{i}^{\\left(a\\right)}}$, where $\\left\\{\\ket{c_{i}^{\\left(a\\right)}}\\mid i\\in\\left[K\\right]\\right\\}$ is an orthonormal basis for $\\mathcal{C}_{a}$. Suppose that all errors of weight $d$ are detectable by $\\mathcal{C}$. 
Then \\begin{align*}\nA_{d}^{\\left(a,a\\right)} & = \\frac{1}{K^{2}}\\sum\\limits_{\\substack{E\\in \\mathcal{E}_n\\\\ \\wt(E)=d}} \\Tr\\!\\left(EP_{a}\\right)\\Tr\\!\\left(E^{*}P_{a}\\right) \\\\\n& = \\frac{1}{K^{2}}\\sum\\limits_{\\substack{E\\in \\mathcal{E}_n\\\\ \\wt(E)=d}}\\left\\vert\\sum_{i=1}^{K}\\bra{c_{i}^{\\left(a\\right)}}E\\ket{c_{i}^{\\left(a\\right)}}\\right\\vert^{2} \\\\\n& = \\sum\\limits_{\\substack{E\\in \\mathcal{E}_n\\\\ \\wt(E)=d}}\\left\\vert\\alpha_{E}^{\\left(a\\right)}\\right\\vert^{2}.\n\\end{align*}\nSimilarly, we have \\begin{align*}\nB_{d}^{\\left(a,a\\right)} & = \\frac{1}{K}\\sum\\limits_{\\substack{E\\in \\mathcal{E}_n\\\\ \\wt(E)=d}} \\Tr\\!\\left(EP_{a}E^{*}P_{a}\\right) \\\\\n& = \\frac{1}{K}\\sum\\limits_{\\substack{E\\in \\mathcal{E}_n\\\\ \\wt(E)=d}}\\sum\\limits_{i,j=1}^{K}\\left\\vert\\bra{c_{i}^{\\left(a\\right)}}E\\ket{c_{j}^{\\left(a\\right)}}\\right\\vert^{2} \\\\\n& = \\frac{1}{K}\\sum\\limits_{\\substack{E\\in \\mathcal{E}_n\\\\ \\wt(E)=d}}\\sum\\limits_{i=1}^{K}\\left\\vert\\bra{c_{i}^{\\left(a\\right)}}E\\ket{c_{i}^{\\left(a\\right)}}\\right\\vert^{2} \\\\\n& = \\sum\\limits_{\\substack{E\\in \\mathcal{E}_n\\\\ \\wt(E)=d}}\\left\\vert\\alpha_{E}^{\\left(a\\right)}\\right\\vert^{2}.\n\\end{align*}\nTherefore, we have that $A_{d}^{\\left(a,a\\right)}=B_{d}^{\\left(a,a\\right)}$. Additionally, if $a\\neq b$, then by Equation (\\ref{klvec}) we have $\\bra{c_{i}^{\\left(a\\right)}}E\\ket{c_{j}^{\\left(b\\right)}}=0$. Therefore, \\begin{align*}\nB_{d}^{\\left(a,b\\right)} & = \\frac{1}{K}\\sum\\limits_{\\substack{E\\in \\mathcal{E}_n\\\\ \\wt(E)=d}}\\sum\\limits_{i,j=1}^{K}\\left\\vert\\bra{c_{i}^{\\left(a\\right)}}E\\ket{c_{j}^{\\left(b\\right)}}\\right\\vert^{2} \\\\\n& = 0.\n\\end{align*}\n\nConversely, suppose that (a) $A_{d}^{\\left(a,a\\right)}=B_{d}^{\\left(a,a\\right)}$ for all $a\\in\\left[M\\right]$ and (b) $B_{d}^{\\left(a,b\\right)}=0$ for all $a,b\\in\\left[M\\right],a\\neq b$. 
Condition (a) implies that equality holds for each $E$ in the Cauchy-Schwarz inequality. Therefore, $P_{a}EP_{a}$ and $P_{a}$ must be linearly dependent, so there must be a constant $\\alpha_{E}^{\\left(a\\right)}\\in\\mathbb{C}$ such that $P_{a}EP_{a}=\\alpha_{E}^{\\left(a\\right)}P_{a}$, or equivalently, $\\bra{c_{i}^{\\left(a\\right)}}E\\ket{c_{j}^{\\left(a\\right)}}=\\alpha_{E}^{\\left(a\\right)}\\delta_{i,j}$, for all errors of weight $d$. Condition (b) implies that $\\bra{c_{i}^{\\left(a\\right)}}E\\ket{c_{j}^{\\left(b\\right)}}=0$ if $a\\neq b$, for all errors of weight $d$. Putting these together, we get the hybrid Knill-Laflamme conditions, so all errors of weight $d$ are detectable.\n\\end{proof}\n\n\\subsection{Linear Programming Bounds}\\label{lpb}\n\nOne of the more useful properties of weight enumerators is that they satisfy the MacWilliams identity \\cite{Shor1997}:\n\\begin{equation}\nB^{\\left(a,b\\right)}\\!\\left(z\\right)=\\frac{K}{q^{n}}\\left(1+\\left(q^{2}-1\\right)z\\right)^{n}A^{\\left(a,b\\right)}\\!\\left(\\frac{1-z}{1+\\left(q^{2}-1\\right)z}\\right).\n\\end{equation}\nThe MacWilliams identities, along with the results from Lemma \\ref{cauchyschwarz} and Proposition \\ref{wtenumnecsuf} and the shadow inequalities for qubit codes \\cite{Rains1999b}, allow us to define linear programming bounds on the parameters of general hybrid codes (see \\cite{Ashikhmin1999, Calderbank1998, Rains1998} for linear programming bounds on quantum codes). 
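As a concrete check of the identity, the well-known enumerators of the $\left[\!\left[5,1,3\right]\!\right]_{2}$ code, $A(z)=1+15z^{4}$ and $B(z)=1+30z^{3}+15z^{4}+18z^{5}$, can be verified with a few lines of integer polynomial arithmetic (a sketch of ours in Python):

```python
def pmul(p, q):
    """Multiply two polynomials given as coefficient lists."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def ppow(p, k):
    r = [1]
    for _ in range(k):
        r = pmul(r, p)
    return r

# MacWilliams transform of A(z) = 1 + 15 z^4 for the [[5,1,3]] code
# (q = 2, n = 5, K = 2):
#   B(z) = (2/32) * [ (1+3z)^5 + 15 (1-z)^4 (1+3z) ]
num = [a + b for a, b in zip(ppow([1, 3], 5),
                             pmul([15 * c for c in ppow([1, -1], 4)], [1, 3]))]
B = [c // 16 for c in num]
print(B)  # -> [1, 0, 0, 30, 15, 18], i.e. B(z) = 1 + 30z^3 + 15z^4 + 18z^5
```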
Let \\begin{equation}K_{j}\\!\\left(r\\right)=\\sum\\limits_{k=0}^{j}\\left(-1\\right)^{k}\\left(q^{2}-1\\right)^{j-k}\\binom{r}{k}\\binom{n-r}{j-k}\\end{equation} denote the $q^{2}$-ary Krawtchouk polynomials.\n\n\\begin{proposition}\nThe parameters of an $\\left(\\!\\left(n,K\\!:\\!M,d\\right)\\!\\right)_{q}$ hybrid code must satisfy the following conditions:\n\\begin{enumerate}\n\\item $A_{j}=\\frac{1}{M^{2}}\\sum\\limits_{a,b=1}^{M}A_{j}^{\\left(a,b\\right)}$\n\\item $B_{j}=\\frac{1}{M}\\sum\\limits_{a,b=1}^{M}B_{j}^{\\left(a,b\\right)}$\n\\item $A_{0}^{\\left(a,b\\right)}=1$\n\\item $B_{0}^{\\left(a,b\\right)}=\\begin{cases} 1 & \\text{ if } a=b \\\\ 0 & \\text{ if } a\\neq b \\end{cases}$\n\\item $A_{j}^{\\left(a,a\\right)}=B_{j}^{\\left(a,a\\right)}$, for all $0\\leq j<d$\n\\item $B_{j}^{\\left(a,b\\right)}=0$, for all $0\\leq j<d$ and all $a,b\\in\\left[M\\right]$ with $a\\neq b$\n\\end{enumerate}\n\\end{proposition}\n\nCan one do better with $M>2$? For small lengths ($n\\leq19$) this family achieves the linear programming bounds for general hybrid codes given in Section \\ref{lpb}, and we suspect that $M=2$ is optimal for all odd $n$.\n\n\\section{Families of Hybrid Codes from\\\\ Stabilizer Pasting}\n\nIn this section, we construct two families of single-error correcting hybrid codes that can encode one or two classical bits. An infinite family of nonadditive quantum codes was constructed by Yu et al. \\cite{Yu2015} by pasting together (see \\cite{Gottesman1996a}) the stabilizers of Gottesman's $\\left[\\!\\!\\!\\:\\left[2^{j},2^{j}-j-2,3\\right]\\!\\!\\!\\:\\right]_{\\!\\!\\:2}$ codes \\cite{Gottesman1996b} with the non-Pauli observables of the $\\left(\\!\\left(9,12,3\\right)\\!\\right)_{2}$ and $\\left(\\!\\left(10,24,3\\right)\\!\\right)_{2}$ nonadditive CWS codes \\cite{Yu2007, Yu2008} which function in the same role as the Pauli stabilizers in stabilizer codes.\n\nBelow we give the generators of the hybrid codes originally given by Grassl et al. \\cite{Grassl2017} that we will use in the construction of our families. 
The generators for the $\\left[\\!\\left[7,1\\!:\\!1,3\\right]\\!\\right]_{2}$ code was previously given in (\\ref{gen7}), while those for the $\\left[\\!\\left[9,2\\!:\\!2,3\\right]\\!\\right]_{2}$, $\\left[\\!\\left[10,3\\!:\\!2,3\\right]\\!\\right]_{2}$, and $\\left[\\!\\left[11,4\\!:\\!2,3\\right]\\!\\right]_{2}$ hybrid stabilizer codes are (\\ref{gen9}), (\\ref{gen10}), and (\\ref{gen11}) respectively:\n\n\\begin{equation}\n\\label{gen9}\n\\left(\\mkern-5mu\n\\begin{tikzpicture}[baseline=-.65ex]\n\\matrix[\n matrix of math nodes,\n column sep=.25ex, row sep=-.25ex\n] (m)\n{\nX & I & I & Z & Y & Z & X & X & Y \\\\\nZ & X & I & Z & Y & X & Y & I & Z \\\\\nI & Z & X & Z & Z & I & X & I & X \\\\\nI & Z & Z & I & Y & X & X & Y & I \\\\\nZ & Z & I & X & X & I & X & Z & I \\\\\nZ & I & I & I & I & X & I & I & I \\\\\nI & Z & I & I & I & I & X & I & I \\\\\n};\n\\draw[line width=1pt, line cap=round, dash pattern=on 0pt off 2\\pgflinewidth]\n ([yshift=.2ex] m-5-1.south west) -- ([yshift=.2ex] m-5-9.south east);\n\\end{tikzpicture}\\mkern-5mu\n\\right)\n\\end{equation}\n\n\\begin{equation}\n\\label{gen10}\n\\left(\\mkern-5mu\n\\begin{tikzpicture}[baseline=-.65ex]\n\\matrix[\n matrix of math nodes,\n column sep=.25ex, row sep=-.25ex\n] (m)\n{\nX & X & I & Z & I & Z & Y & Z & Y & Z \\\\\nX & I & Y & X & I & X & Z & X & X & Y \\\\\nX & Z & X & Y & Z & Y & Y & I & I & Y \\\\\nI & I & Z & Z & X & X & Y & Y & I & I \\\\\nZ & I & I & I & Z & Z & X & X & I & X \\\\\nZ & I & I & I & I & I & I & I & I & X \\\\\nI & I & Z & Z & I & I & I & I & I & I \\\\\n};\n\\draw[line width=1pt, line cap=round, dash pattern=on 0pt off 2\\pgflinewidth]\n ([yshift=.2ex] m-5-1.south west) -- ([yshift=.2ex] m-5-10.south east);\n\\end{tikzpicture}\\mkern-5mu\n\\right)\n\\end{equation}\n\n\\begin{equation}\n\\label{gen11}\n\\left(\\mkern-5mu\n\\begin{tikzpicture}[baseline=-.65ex]\n\\matrix[\n matrix of math nodes,\n column sep=.25ex, row sep=-.25ex\n] (m)\n{\nI & Z & X & I & X & Z & I & Z & X & X & X 
\\\\\nI & Z & Z & X & I & I & Z & X & X & Y & Y \\\\\nZ & I & I & Z & X & X & Z & X & X & X & I \\\\\nX & X & I & X & Y & X & I & Y & Y & Y & X \\\\\nY & Y & I & X & X & Y & Y & Z & Y & I & Y \\\\\nZ & I & I & I & I & I & I & I & X & I & I \\\\\nI & Z & I & I & I & I & I & I & X & I & I \\\\\n};\n\\draw[line width=1pt, line cap=round, dash pattern=on 0pt off 2\\pgflinewidth]\n ([yshift=.2ex] m-5-1.south west) -- ([yshift=.2ex] m-5-11.south east);\n\\end{tikzpicture}\\mkern-5mu\n\\right)\n\\end{equation}\n\nNote that in each case, the generators above the dotted line define a pure $\\left[\\!\\left[n,n-5,2\\right]\\!\\right]_{2}$ quantum code.\n\nThe next theorem describes families of hybrid quantum codes. Notice that $2^{2m+5} \\equiv 2^5 \\pmod{3}$, so the length $n$ given in the theorem is well-defined.\n\n\\begin{theorem} \nLet $m$ be a nonnegative integer and $n$ a positive integer given by\n$$n=\\frac{2^{2m+5}-32}{3}+a,$$\nwhere the parameter $a$ is a small positive integer that is specified below. Then there exists \n\\begin{compactenum}[(a)]\n\\item an $\\left[\\!\\left[n,n-2m-6\\!:\\!1,3\\right]\\!\\right]_{2}$ hybrid code for $a=7$ and \n\\item an $\\left[\\!\\left[n,n-2m-7\\!:\\!2,3\\right]\\!\\right]_{2}$ hybrid code for $a=9,10,11$.\n\\end{compactenum}\n\\end{theorem}\n\\begin{proof}\nRoughly speaking, we construct our code by partitioning the first $\\left(2^{2m+5}-32\\right)\\!\/3$ qubits into disjoint sets, forming a perfect code on each partition, and using one of the four small hybrid codes on the remaining $a$ qubits. These codes are then ``glued'' to one another by using stabilizer pasting. Other than a small number of degenerate errors introduced by the small hybrid code that must be handled individually, each single-qubit Pauli error has a unique syndrome, allowing for the correction of any single-qubit error.\n\nWe will now describe the code construction in more detail. 
We \ntake the $n=\\left(2^{2m+5}-32\\right)\\!\/3+a$ qubits and partition them into disjoint sets \n$$U_{m}\\cup U_{m-1}\\cup\\cdots\\cup U_{1}\\cup V_{a},$$ \nwhere $\\left\\lvert U_{k}\\right\\rvert=2^{2k+3}$ and $\\left\\lvert V_{a}\\right\\rvert=a$.\nThe set $U_{m}$ contains the first $2^{2m+3}$ qubits, $U_{m-1}$ the next $2^{2m+1}$ qubits, and so forth. The final $a$ qubits are contained in $V_a$. \n\nLet $k$ be an integer in the range $1\\le k\\le m$. On the qubits in the set $U_{k}$, we can construct a stabilizer code of length $2^{2k+3}$ with $2k+5$ stabilizer generators, following Gottesman~\\cite{Gottesman1996b}. The $2k+5$ stabilizer generators are given as follows. Two of these generators are tensor products of only Pauli-$X$ and $Z$ operators, which we call $X_{U_{k}}$ and $Z_{U_{k}}$ respectively. We define the other $2k+3$ stabilizers by\n\\begin{equation*}\n\\mathcal{S}_{j}^{k}=X^{h_{j}}Z^{h_{j-1}+h_{1}+h_{2k+3}},\n\\end{equation*}\nfor $j\\in\\left[2k+3\\right]$. Here we let $h_{j}$ be the $j$-th row of the $\\left(2k+3\\right)\\times2^{2k+3}$ matrix $H_{k}$, whose $i$-th column is the binary representation of $i$, $h_{0}$ is defined to be the all-zero vector, and $X^{h_{j}}=X^{h_{j,0}}X^{h_{j,1}}\\dots X^{h_{j,2^{2k+3}-1}}$, with $Z^{h_{j}}$ defined similarly.\n\nFor the set $V_{a}$, let $H_{j}^{\\mathcal{Q}}$ be the generators of the quantum stabilizer $\\mathcal{S}_{\\mathcal{Q}}$ of the length $a$ hybrid code defined by the generators in (\\ref{gen7}), (\\ref{gen9}), (\\ref{gen10}), or (\\ref{gen11}), and $H_{j}^{\\mathcal{C}}$ be the generators of the classical stabilizer $\\mathcal{S}_{\\mathcal{C}}$ (since the length 7 hybrid code only has one generator in $\\mathcal{S}_{\\mathcal{C}}$, we can remove $H_{2}^{\\mathcal{C}}$). 
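As a sanity check that the partition exhausts the first $n-a$ qubits, note that $\sum_{k=1}^{m}2^{2k+3}=\frac{8}{3}\left(4^{m+1}-4\right)=\left(2^{2m+5}-32\right)/3$; in Python:

```python
# |U_k| = 2**(2k+3); the blocks must cover exactly the first
# (2**(2m+5) - 32) / 3 qubits, leaving the last a qubits for V_a.
for m in range(0, 10):
    assert sum(2**(2*k + 3) for k in range(1, m + 1)) == (2**(2*m + 5) - 32) // 3
```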
The stabilizer can be pasted together as shown in (\\ref{stabpastgen}), where suitable identity operators should be inserted in the blank spaces:\n\n\\begin{equation}\n\\label{stabpastgen}\n\\left(\\mkern-5mu\n\\begin{tikzpicture}[baseline=-.65ex]\n\\matrix[\n  matrix of math nodes,\n  column sep=.25ex, row sep=-.25ex\n] (m)\n{\nX_{U_{m}} & & & & & \\\\\nZ_{U_{m}} & & & & & \\\\\nS_{1}^{m} & X_{U_{m-1}} & & & & \\\\\nS_{2}^{m} & Z_{U_{m-1}} & & & & \\\\\n\\vdots & \\vdots & \\ddots & & & \\\\\nS_{2m-6}^{m} & S_{2m-8}^{m-1} & \\cdots & & & \\\\\nS_{2m-5}^{m} & S_{2m-7}^{m-1} & \\cdots & X_{U_{2}} & & \\\\\nS_{2m-4}^{m} & S_{2m-6}^{m-1} & \\cdots & Z_{U_{2}} & & \\\\\nS_{2m-3}^{m} & S_{2m-5}^{m-1} & \\cdots & S_{1}^{2} & X_{U_{1}} & \\\\\nS_{2m-2}^{m} & S_{2m-4}^{m-1} & \\cdots & S_{2}^{2} & Z_{U_{1}} & \\\\\nS_{2m-1}^{m} & S_{2m-3}^{m-1} & \\cdots & S_{3}^{2} & S_{1}^{1} & H_{1}^{\\mathcal{Q}} \\\\\nS_{2m}^{m} & S_{2m-2}^{m-1} & \\cdots & S_{4}^{2} & S_{2}^{1} & H_{2}^{\\mathcal{Q}} \\\\\nS_{2m+1}^{m} & S_{2m-1}^{m-1} & \\cdots & S_{5}^{2} & S_{3}^{1} & H_{3}^{\\mathcal{Q}} \\\\\nS_{2m+2}^{m} & S_{2m}^{m-1} & \\cdots & S_{6}^{2} & S_{4}^{1} & H_{4}^{\\mathcal{Q}} \\\\\nS_{2m+3}^{m} & S_{2m+1}^{m-1} & \\cdots & S_{7}^{2} & S_{5}^{1} & H_{5}^{\\mathcal{Q}} \\\\\n & & & & & H_{1}^{\\mathcal{C}} \\\\\n & & & & & H_{2}^{\\mathcal{C}} \\\\\n};\n\\draw[line width=1pt, line cap=round, dash pattern=on 0pt off 2\\pgflinewidth]\n  ([yshift=.2ex] m-15-1.south west) -- ([yshift=.2ex] m-15-6.south east);\n\\end{tikzpicture}\\mkern-5mu\n\\right)\n\\end{equation}\n\nSuppose that we have a single-qubit Pauli error on the block $U_{m}$. Since the code is pure, the syndrome of each error will be distinct, and the Pauli-$X$, $Y$, and $Z$ syndromes will start with $01$, $11$, and $10$ respectively. 
However, this leaves all of the syndromes starting with $00$ unused, so Pauli-$X$, $Y$, and $Z$ errors on the block $U_{m-1}$ will have distinct syndromes starting with $0001$, $0011$, and $0010$ respectively. Continuing on, any single-qubit Pauli error occurring on the block $U_{k}$ will have a distinct syndrome starting with $2\\left(m-k\\right)$ $0$s.\n\nAll of the syndromes of errors occurring on the block $V_{a}$ start with $2m$ $0$s. Here our code is not pure, but it is almost pure, with the only degenerate errors being the weight 2 errors in $\\mathcal{S}_{\\mathcal{C}}$. For example, when $V_{a}$ has 11 qubits, it will have three weight 1 degenerate errors: $Z_{1}$ (a Pauli-$Z$ on the first qubit of the block), $Z_{2}$, and $X_{9}$, each with the syndrome $00011$ (preceded by $2m$ zeros). If we measure this syndrome, we apply the operator $ZZIIIIIIXII$ to the state, which maps the original codeword to itself up to a global phase. Note, however, that while this global phase is the same for codewords of the same inner code for a given error, it may differ for codewords from different inner codes. In fact, this is exactly what prevents the outer code from being a distance 3 quantum code rather than a distance 3 hybrid code. The argument for when $V_{a}$ has 7, 9, and 10 qubits is similar.\n\nSince we know how to correct any single-qubit Pauli error based on its syndrome, each of the codes must have minimum distance 3.\n\\end{proof}\n\nHere we show that these hybrid codes are better than optimal quantum stabilizer codes using a result of Yu et al. \\cite{Yu2013}.\n\n\\begin{proposition}\nLet $m$ be a nonnegative integer and $n$ a positive integer given by\n$$n=\\frac{2^{2m+5}-32}{3}+a,$$\nwhere $a\\in\\left\\{7,9,10,11\\right\\}$.
Then there does not exist an $\\left[\\!\\left[n,n-2m-5,3\\right]\\!\\right]_{2}$ stabilizer code.\n\\end{proposition}\n\\begin{proof}\nWhen $a=7,9,10$, we have \n\\begin{align*}\nn & = \\frac{2^{2m+5}-32}{3}+a \\\\\n& = \\frac{2^{2m+5}-8}{3}+\\left(a-8\\right) \\\\\n& = \\frac{8}{3}\\left(4^{m+1}-1\\right)+\\left(a-8\\right).\n\\end{align*}\nBy a result of Yu et al. \\cite[Theorem 1]{Yu2013}, distance 3 stabilizer codes with lengths of the form $$\\frac{8}{3}\\left(4^{k}-1\\right)+b,$$ where $b\\in\\left\\{-1,1,2\\right\\}$, can exist if and only if $$2m+5\\geq \\left\\lceil\\log_{2}\\!\\left(3n+1\\right)\\right\\rceil+1.$$ But in this case we have\n\\begin{align*}\n\\left\\lceil\\log_{2}\\!\\left(3n+1\\right)\\right\\rceil+1 & = \\left\\lceil\\log_{2}\\!\\left(2^{2m+5}+3a-31\\right)\\right\\rceil+1\\\\\n & > \\left\\lceil\\log_{2}\\!\\left(2^{2m+5}-2^{2m+4}\\right)\\right\\rceil+1 \\\\\n & = 2m+5,\n\\end{align*}\nso when $a=7,9,10$, there is no distance 3 stabilizer code of length $n$.\n\nWhen $a=11$, a different case of \\cite[Theorem 1]{Yu2013} applies, so distance 3 stabilizer codes with lengths of this form can exist if and only if $$2m+5\\geq \\left\\lceil\\log_{2}\\!\\left(3n+1\\right)\\right\\rceil.$$ However, this gives us\n\\begin{align*}\n\\left\\lceil\\log_{2}\\!\\left(3n+1\\right)\\right\\rceil & = \\left\\lceil\\log_{2}\\!\\left(2^{2m+5}+2\\right)\\right\\rceil\\\\\n & > \\left\\lceil\\log_{2}\\!\\left(2^{2m+5}\\right)\\right\\rceil \\\\\n & = 2m+5,\n\\end{align*}\nso when $a=11$, there is likewise no distance 3 stabilizer code of length $n$.\n\\end{proof}\n\nAs with our family of error-detecting hybrid codes, it would be interesting to know whether any of these codes meet the linear programming bounds from Section \\ref{lpb}. 
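The case distinction in the proof above reduces to integer inequalities that can be checked mechanically. A small Python sketch (it encodes only the two cases of \cite[Theorem 1]{Yu2013} quoted above, not the full theorem, and uses exact integer arithmetic for the ceiling of the logarithm):

```python
def stabilizer_code_exists(m, a):
    """For n = (2^(2m+5) - 32)/3 + a, test the quoted condition from
    [Yu et al., Thm 1]: a distance-3 stabilizer code with 2m+5 generators
    exists iff 2m+5 >= ceil(log2(3n+1)) + 1 for a in {7, 9, 10},
    and iff 2m+5 >= ceil(log2(3n+1)) for a = 11."""
    n = (2 ** (2 * m + 5) - 32) // 3 + a
    ceil_log2 = (3 * n).bit_length()   # exact ceil(log2(3n + 1))
    return 2 * m + 5 >= ceil_log2 + (1 if a in (7, 9, 10) else 0)

# The proposition: no [[n, n-2m-5, 3]] stabilizer code at any of these lengths.
assert not any(stabilizer_code_exists(m, a)
               for m in range(20) for a in (7, 9, 10, 11))
```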
Since none of the hybrid codes we started with meet these bounds, it is doubtful that any of the hybrid codes constructed from stabilizer pasting would also meet this bound, leaving it unclear whether or not these codes are optimal among all hybrid codes.\n\n\\section{Conclusion and Discussion}\nIn this paper we have proven some general results about hybrid codes, showing that they can always detect more errors than comparable quantum codes. Furthermore we proved the necessity of impurity in the construction of genuine hybrid codes. Additionally, we generalized weight enumerators for hybrid stabilizer codes to nonadditive hybrid codes, allowing us to develop linear programming bounds for nonadditive hybrid codes. Finally, we have constructed several infinite families of hybrid stabilizer codes that provide an advantage over optimal stabilizer codes.\n\nBoth of our families of hybrid codes were inspired by the construction of nonadditive quantum codes. In hindsight this is not very surprising, as the examples of hybrid codes with small parameters given by Grassl et al. \\cite{Grassl2017} were constructed using a CWS\/union stabilizer construction. Most interesting is that all known good nonadditive codes with small parameters have a hybrid code with similar parameters. This would suggest that looking at larger nonadditive codes such as the quantum Goethals-Preparata code \\cite{Grassl2008} or generalized concatenated quantum codes \\cite{Grassl2009} might be helpful in constructing larger hybrid codes. Alternatively, it may be possible to use the existence of hybrid codes to point to where nonadditive codes may be found. For instance the existence of an $\\left[\\!\\left[11,4\\!:\\!2,3\\right]\\!\\right]_{2}$ hybrid code suggests a nonadditive code with similar parameters might exist.\n\nAs previously suggested by Grassl et al. 
\\cite{Grassl2017}, one possible way to construct new hybrid codes with good parameters is to start with degenerate quantum codes with good parameters. Another possible approach to constructing new hybrid stabilizer codes is to find codes such that there are few small weight errors that are in the normalizer but not in the stabilizer, and then add those small weight errors to the generating set of the stabilizer to get a degenerate code. Here, the original code becomes the outer code of the hybrid code and the degenerate code the inner code.\n\n\n\n\n\n\n\n\n\\ifCLASSOPTIONcaptionsoff\n \\newpage\n\\fi\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\n\\label{sec:intro}\n\nThe top partner holds a special place in many extensions of the\nStandard Model~\\cite{bsm_review}. As the fermion with the\nlargest coupling to the Higgs field, the top gives the largest\nquadratic correction to the Higgs mass term. To have a natural and\nuntuned cancellation of this term, we would expect the supersymmetric\ntop squark --- the stop (${\\tilde t}$) --- to be close in mass to the top itself.\nAdditionally, in generic supersymmetric flavor models the large top Yukawa\ndrives the mixing of left-- and right--handed stops and pushes the\nlightest stop mass eigenstate to be the lightest squark. 
Experimentally, however,\nno evidence of a relatively light stop has been obtained in collider searches.\nA combination of \nATLAS~\\cite{atlas_stops}\nand\nCMS~\\cite{cms_stops}\nresults at 7 and 8 TeV excludes stop pair production decaying to final\nstates containing an invisible, stable supersymmetric particle ({\\em e.g.}, the lightest neutralino, $\\tilde{\\chi}^0$) for stop\nmasses in the range of $100-750$~GeV, assuming a massless invisible\ndecay product.\\bigskip\n\nNevertheless, in the two-dimensional plane of ${\\tilde t}$ and $\\nz{}$ masses, there remains a\nnotable window in the experimental exclusion regions: neither experiment has ruled out the\npossibility that stop pair production events may be buried in top pair production when the mass difference\n$\\mst -(\\mne{} + m_t)$ becomes small.\nThere is a simple explanation for this lack of sensitivity to stop \nproduction near the ``degeneracy line:'' when the mass\nsplitting is small, the invisible particles ($\\nz{}$) carry little momentum,\nso the final state from stop pair production closely mimics \nthat of top pairs in the Standard Model. In principle, measurable differences in the\nmissing transverse energy ($\\slashed{E}_T$) distributions for fully hadronic top decays would appear if\nstop events are also present, a feature that might allow discovery or exclusion of degenerate stops~\\cite{degenerate_stops_had}.\nIn practice, however, such searches face challenging jet combinatorics and require\nprecise understanding of the background $\\slashed{E}_T$. In the di- or semi-leptonic channels, kinematic variables built from the decay\nproducts of the top are nearly identical for $t\\bar{t}$\nand $\\st{} \\st{}^*$ events, assuming the stop decays to either (a) an on-shell\nor off-shell top and an invisible $\\nz{}$, or (b) a bottom quark and a\nchargino, where the latter decays into a $\\nz{}$ and a $W^{(\\ast)}$ boson.
Analyzing differences in the top production angles or top\ndecay products has been suggested~\\cite{more_light_stops} to\nsearch for stop pairs contaminating the top sample, but the possible\nimprovement is small and can be washed out by necessary trigger and\nselection criteria.\\bigskip\n\n\n\nIn this study we\nexplore an alternative approach for distinguishing top and stop pair production that avoids these difficulties. Specifically, \nwe show how correlations between tagging jets can be used to search\nfor stop pairs in the top pair sample at the\nLHC~\\cite{kaoru} independent of the stop decays. In particular, we consider the difference in the\nazimuthal angles $\\Delta\\phi$ of forward jets produced in association\nwith the top or stop pair in vector boson fusion (VBF)\nevents.\\footnote{Here, the fusing vector bosons are primarily gluons,\n  justifying the term ``VBF.''} These jets arise from initial state\nradiation. The information in their $\\Delta \\phi$ distribution can be\nused regardless of decay channels, as long as we can manage to extract a\nsignal-rich sample. As was originally demonstrated in the context of\nHiggs\nphysics~\\cite{delta_phi,higgs_spin},\nthe difference in azimuthal angle between the two forward jets $\\Delta\n\\phi$ from weak--boson--fusion events inherits information about the\nhelicities of the weak bosons involved in the production. From the\nunderlying argument it is obvious that this technique can be generalized to gluon\nfusion~\\cite{higgs_spin,delta_phi_gg}.
The helicities that can participate in a\ngiven process are set by the Lorentz structure of the production\nmatrix element, and so for pair production the distribution of\n$\\Delta\\phi$ is sensitive to properties of pair-produced particles such as\nspin and CP assignment.\\bigskip\n\nFor the pair production process of interest here, the resulting differential cross section has the form\n\\begin{equation}\n\\label{eq:delphidist}\n\\frac{d\\sigma}{d\\Delta\\phi} = \nA_0 + A_1 \\cos \\Delta\\phi+A_2 \\cos (2\\Delta\\phi)\\ \\ \\ ,\n\\end{equation}\nwhere the expansion coefficients $A_k$ encode the interplay of the underlying pair production\namplitude and the helicity of the fusing gluons. As shown in our earlier work~\\cite{matt_michael}, \nthe sign of $A_2$ is set by the spins of the produced particles: $A_2>0$ for scalars and $A_2<0$ \nfor fermions. In general, this sensitivity could provide a powerful technique for\ndiagnosing the spin of any new particles that may be discovered at the LHC~\\cite{matt_michael}. \nThis is also the case for top pair production close to threshold, while in the\nrelativistic limit the sum of the two azimuthal angles is the more\nsensitive observable~\\cite{kaoru}. \nIn the present context, we show how one may exploit the same effect to identify or exclude the presence\nof stop pairs in the region of parameter space near the degeneracy line. Moreover,\nwe describe how the $\\cos (2\\Delta\\phi)$ correlation between initial state radiation jets\ncan be reliably described in event simulations that take into account parton showering and realistic\ndetector jet identification and show that the correlation is not washed \nout through azimuthal decorrelation~\\cite{daCosta:2011ni,Khachatryan:2011zj}. To our knowledge, this study represents\nthe first such demonstration, indicating that the study of azimuthal tagging jet correlations may be a realistic\ntool in other contexts as well~\\cite{matt_michael,kaoru}.
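Because the three terms in Eq.~(\ref{eq:delphidist}) are orthogonal over a full period, the coefficients, and in particular the sign of $A_2$, can be projected out of any measured $\Delta\phi$ distribution without a fit. A toy Python illustration (the coefficient values here are invented for the demonstration):

```python
import numpy as np

# Toy spectrum d(sigma)/d(dphi) = A0 + A1 cos(dphi) + A2 cos(2 dphi)
A0, A1, A2 = 1.0, -0.12, -0.05          # A2 < 0: a fermion-like (top) sample
N = 4096
dphi = np.linspace(-np.pi, np.pi, N, endpoint=False)
f = A0 + A1 * np.cos(dphi) + A2 * np.cos(2 * dphi)

# Fourier projection; the rectangle rule is exact for trigonometric polynomials
h = 2 * np.pi / N
A0_rec = np.sum(f) * h / (2 * np.pi)
A1_rec = np.sum(f * np.cos(dphi)) * h / np.pi
A2_rec = np.sum(f * np.cos(2 * dphi)) * h / np.pi
```

In practice $f$ would be a background-subtracted $\Delta\phi$ histogram of the two tagging jets; a recovered $A_2>0$ indicates scalars, $A_2<0$ fermions.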
\\bigskip\n\nBefore determining whether degenerate stop production could be hiding in top pair production at the LHC, one should ask whether the measured cross section for top\npair production allows for such a scenario. This rate has been measured\nnumerous\ntimes~\\cite{atlas_top_0,cms_top_0,atlas_top_1,cms_top_1,tevatron_top}\nand agrees with theoretical predictions~\\cite{Czakon:2013goa} within\nuncertainties. In Table~\\ref{tab:xsection}, we show the measured top\npair cross sections at the Tevatron and the LHC, along with the\ntheoretical predictions and the supersymmetric stop pair production\ncross sections for light stop masses of 175 and 200~GeV. At first glance,\nthe measured cross section would appear to rule out the addition\nof a stop with mass near that of the top. However, it is unclear how\nthe top cross section measurements would respond to an admixture of\nstop events, and there may be a degeneracy between the cross section\nand top mass measurements. Short of a detailed analysis of this question that goes beyond the scope of the present study, we cannot rule out the possibility -- however unlikely -- that a 175~GeV stop could be hiding inside the top sample. Moreover, a stop with mass around 200~GeV, still within the degeneracy window, is not in significant tension with the\nexperimental results, given the uncertainties.
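The ``first glance'' statement can be made concrete with a naive estimate based on the 8~TeV ATLAS entries of Table~\ref{tab:xsection}: add the stop rate to the theory prediction and combine all quoted uncertainties in quadrature. This deliberately ignores the caveat just raised, namely that the measurement itself would respond to stop contamination, so the resulting pulls are indicative at best:

```python
from math import hypot

# 8 TeV entries of Table 1 (all in pb)
sigma_meas, err_meas = 238.0, hypot(2.0, 7.0, 7.0, 4.0)  # ATLAS: stat, sys, lumi, beam E
sigma_th,   err_th   = 245.8, hypot(8.4, 6.4)            # NNLO+NNLL, larger scale/pdf errors
err_tot = hypot(err_meas, err_th)

# naive pull of (theory + stop signal) against the measurement
pulls = {m_stop: (sigma_th + sigma_stop - sigma_meas) / err_tot
         for m_stop, sigma_stop in [(175, 34.5), (200, 17.3)]}
# pulls[175] comes out near 2.8, pulls[200] near 1.7
```

The roughly $2.8\sigma$ pull for a 175~GeV stop versus $1.7\sigma$ for 200~GeV matches the qualitative statements above.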
Consequently, we will consider two benchmark cases, corresponding to \n$(\\mst,\\mne{})=(175,1)$ GeV and $(200, 25)$ GeV, respectively.\n\n\\begin{table}[t]\n\\begin{tabular}{c|c|c|c|c}\n\\hline\n$\\sqrt{s}$~[TeV] & $\\sigma_{t\\bar{t}}~$[pb] & $\\sigma_{t\\bar{t}}~$[pb] & $\\sigma_{\\st{} \\st{}^*}$~[pb] & $\\sigma_{\\st{} \\st{}^*}$~[pb] \\\\ & experiment & theory & $\\mst = 175~{\\ensuremath\\rm GeV}$ & $\\mst = 200~{\\ensuremath\\rm GeV}$ \\\\ \\hline\n1.96 & $7.68\\pm0.20_\\text{stat}\\pm0.36_\\text{sys}$~(CDF+D\\O\\ \\cite{tevatron_top}) \n & $7.164{^{+0.110}_{-0.200}}_\\text{scale}{^{+0.169}_{-0.122}}_\\text{pdf}$ & 0.587 & 0.252 \\\\ \\hline\n7 & $\\begin{array}{c} 177\\pm3_\\text{stat} \\pm {^8_7}_\\text{sys}\\pm7_\\text{lumi}~\\text{(ATLAS)} \\\\ \\pm3_\\text{stat} \\pm {^8_77}_\\text{sys}\\pm7_\\text{lumi}~\\text{(CMS)} \\end{array}$\n & $172.0{^{+4.4}_{-5.8}}_\\text{scale}{^{+4.7}_{-4.8}}_\\text{pdf}$ & 24.0 & 11.9 \\\\ \\hline\n8 & $\\begin{array}{c} 238\\pm2_\\text{stat} \\pm 7_\\text{sys}\\pm7_\\text{lumi}\\pm 4_{\\text{beam~$E$}}~\\text{(ATLAS~\\cite{atlas_top_1})} \\\\ 227 \\pm 3_\\text{stat} \\pm 11_\\text{sys}\\pm 10_\\text{lumi}~\\text{(CMS \\cite{cms_top_1})} \\end{array}$\n & $245.8{^{+6.2}_{-8.4}}_\\text{scale}{^{+6.2}_{-6.4}}_\\text{pdf}$ & 34.5 & 17.3 \\\\ \\hline\n14 & -- & $953.6{^{+22.7}_{-33.9}}_\\text{scale}{^{+16.2}_{-17.8}}_\\text{pdf}$ & 135 & 72.1 \\\\ \\hline\n\\end{tabular}\n\\caption{Cross sections for top and stop pair production at the 1.96\n TeV Tevatron and 7, 8, and 14 TeV LHC. The theoretical predictions\n for the $t\\bar{t}$ cross sections are calculated at NNLO+NNLL, for\n $m_t = 173.3$~GeV~\\cite{Czakon:2013goa}. Cross sections for stop\n pair production are calculated at NLO in {\\tt\n Prospino2}~\\cite{prospino}\n with a light $\\st{1}$ and all other supersymmetric particles\n decoupled.}\n\\label{tab:xsection}\n\\end{table}\n\nOur discussion is organized as follows. 
In Section~\\ref{sec:spin} we explain the physics behind the\n$\\Delta\\phi$ correlations of VBF tagging jets in the specific cases of\ntop and stop pair production. In Section~\\ref{sec:simulation} we then\ndiscuss the simulation of these events including multi-jet merging in\n{\\tt MadGraph5}. While in the default setup the correlations between\nthe tagging jets are not guaranteed to be included we show how they can be accounted\nfor. In the same section we study the tagging jet correlations at\nparton level and show how a dedicated analysis can separate top and\nstop contributions to a mixed event sample. In\nSection~\\ref{sec:searches} we confirm that using realistic cuts and a\nfast detector simulation these results can be reproduced.\n\n\\section{Tagging jet correlations}\n\\label{sec:spin}\n\nWe are interested in top and stop pairs with two associated tagging\njets, produced primarily via initial state radiation, or equivalently,\nthrough VBF\ndiagrams~\\cite{tagging}. Eventually, to separate\nVBF production from all other sources of jets we will\nemploy strict selection cuts, primarily requiring the jets to be\nforward. A representative Feynman diagram is shown in\nFigure~\\ref{fig:feynman}, defining our notation for the different\nmomenta. The full gauge-invariant matrix element will be the sum of\nmany diagrams, but the cuts will emphasize this topology's contribution\nto the amplitude. In our simulations, we will\ninclude all initial parton states, though in practice gluons dominate for\nthe parameter range of interest.\nIt is most convenient to write the relevant kinematics\nin the three frames shown in\nFigure~\\ref{fig:kinematics}~\\cite{delta_phi}. 
The emission of\nthe fusing vector bosons (gluons in our case) from the incoming\npartons is described in the Breit frames (frames~I and II), defined\nby the gluon momenta being purely space--like and in the\n$z$-direction:\n\\begin{alignat}{5}\nq_1^\\mu & = k_1^\\mu - k_3^\\mu = (0,0,0,Q_1), \\notag \\\\\nq_2^\\mu & = k_2^\\mu - k_4^\\mu = (0,0,0,-Q_2) \\; .\n\\end{alignat}\nThe top\/stop pair production frame shown as frame~X in\nFigure~\\ref{fig:kinematics} is defined as the frame in which\n$q_1^\\mu+q_2^\\mu = (\\sqrt{\\hat{s}},\\vec{0})$, where $\\hat{s} \\equiv\n(p_1+p_2)^2$ is the squared invariant mass of the top or stop\npair.\\bigskip\n\n\\begin{figure}[b!]\n\\includegraphics[width=0.23\\textwidth]{.\/feyn_VBFtop.pdf}\n\\caption{A representative Feynman diagram for the VBF process $pp\\to\n  t\\bar{t}+j j$ with two tagging jets. Similar diagrams exist for stop\n  pair production. The initial and final state partons can be quarks,\n  anti-quarks, or gluons. The different channels contributing to the\n  hard $gg \\to t\\bar{t}$ scattering are denoted by a solid dot.}\n\\label{fig:feynman}\n\\end{figure}\n\n\\begin{figure}[t]\n \\includegraphics[width=0.6\\textwidth]{.\/kinematics.pdf}\n\\caption{Kinematics for VBF events, showing the two Breit frames~I and\n  II and the production frame~X~\\cite{delta_phi}.}\n\\label{fig:kinematics}\n\\end{figure}\n\nWe now focus on the dependence of the differential cross section on\nthe azimuthal angles $\\phi_1$ and $\\phi_2$. As long as the tagging\njets with the momenta $k_3$ and $k_4$ are forward, the $z$-axis shared by frames~I, II,\nand X is nearly collinear with the experimental beam axis. As a first\nstep we can approximate the observed azimuthal angles in the\nlaboratory frame by the angles in the plane orthogonal to the top or\nstop momenta~\\cite{higgs_spin}.
The matrix element for the full\nVBF event takes the form\n\\begin{equation}\n{\\cal M} = \\sum_{h_1,h_2} \n{\\cal M}_\\text{I}^\\mu(h_1,\\phi_1,\\theta_1)\n{\\cal M}_\\text{II}^\\nu(h_2,\\phi_2,\\theta_2)\n{\\cal M}_\\text{X}^{\\mu\\nu}(h_1,h_2,\\Theta) \\; ,\n\\end{equation}\nwhere $h_1,h_2=-1,0,+1$ are the helicities of the gluons $q_1$ and\n$q_2$, measured relative to the $z$-axis, so $h_1 = +1$ is positive\nangular momentum for $q_1$, but $h_2 = -1$ is positive angular\nmomentum for $q_2$. We suppress the dependence on the color factors. A\nboost is required to take each matrix element from its individual\nframe to a common center--of--mass frame. All these boosts will be in\n$z$-direction and will not induce additional dependence on the\nazimuthal angles $\\phi_i$. Therefore, $\\phi_1$ and $\\phi_2$ enter\nonly as phases of the Breit matrix elements,\n\\begin{alignat}{5}\n{\\cal M}_\\text{I}(h_1,\\phi_1,\\theta_1) &= \n{\\cal M}_\\text{I}(h_1,0,\\theta_1) \\; e^{+ih_1\\phi_1}, \\notag \\\\\n{\\cal M}_\\text{II}(h_2,\\phi_2,\\theta_2) &= \n{\\cal M}_\\text{II}(h_2,0,\\theta_2) \\; e^{-ih_2\\phi_2}.\n\\end{alignat}\nWe can rewrite $\\phi_1$ and $\\phi_2$ in terms of their difference\n$\\Delta \\phi \\equiv \\phi_1-\\phi_2$ and their sum $\\phi_+ \\equiv\n\\phi_1+\\phi_2$~\\cite{delta_phi,higgs_spin}. The angle $\\phi_+$ is\nphysically unobservable without reference to the top or stop\nproduction plane, which we will not attempt to reconstruct, and so it\ncan be integrated over. 
Abbreviating the six-body phase space factors as\n$({\\cal PS})$ and the integration over all other angles as $d\\Omega$,\nthe differential cross section with respect to $\\Delta\\phi$ can be\nwritten as\n\\begin{equation}\n\\frac{d\\sigma}{d\\Delta \\phi} = ({\\cal PS}) \\int d\\Omega\\sum_{h_1^{(')},h_2^{(')}} \ne^{i\\Delta h \\; \\Delta\\phi\/2}\n\\left[{\\cal M}_\\text{I}^\\mu(h_1){\\cal M}_\\text{I}^{\\mu'*}(h_1')\\right]\n\\left[{\\cal M}_\\text{II}^\\nu(h_2){\\cal M}_\\text{II}^{\\nu'*}(h_2')\\right]\n\\left[{\\cal M}_\\text{X}^{\\mu\\nu}(h_1,h_2){\\cal M}_\\text{X}^{\\mu'\\nu'*}(h_1',h_2') \\right] \\; ,\n\\end{equation}\nwith $\\Delta h = h_1-h_1'+h_2-h_2'$. This distribution has to be\ninvariant under the shift $\\Delta \\phi \\to \\Delta \\phi+2\\pi$, which\ntranslates into the condition $\\Delta h = 0, \\pm 2, \\pm 4$. Terms\nwith odd $\\Delta h$ must vanish, and larger values of $\\Delta h$\ncannot be generated for $|h_j| \\le 1$ (allowing for\noff-shell gluons). We then expand the exponential\nwith the helicities in sines and cosines and, assuming CP conservation,\nignore the complex sine contributions. The three allowed helicity\nchanges $\\Delta h$ give rise to the three coefficients of\nEq.~(\\ref{eq:delphidist}),\n\\begin{alignat}{5}\nA_n & = ({\\cal PS}) \\int d\\Omega \\sum_{\\Delta h = \\pm 2n} \n\\left[{\\cal M}_\\text{I}^\\mu(h_1){\\cal M}_\\text{I}^{\\mu'*}(h_1')\\right]\n\\left[{\\cal M}_\\text{II}^\\nu(h_2){\\cal M}_\\text{II}^{\\nu'*}(h_2')\\right]\n\\left[{\\cal M}_\\text{X}^{\\mu\\nu}(h_1,h_2){\\cal M}_\\text{X}^{\\mu'\\nu'*}(h_1',h_2') \\right] \\; .\n\\label{eq:diffsigma}\n\\end{alignat}\nWe will be most interested in $A_2$, where $\\Delta h= \\pm 4$.
This can only\nbe satisfied by the unique configuration $h_1 = h_2 = \\pm 1$ and $h_i'\n= -h_i$.\\bigskip\n\nFrom explicit calculation, the contributions from the matrix\nelements for gluon emission, {\\sl i.e.} \\,${\\cal M}_\\text{I}(h_1)^\\mu{\\cal\n  M}_\\text{I}(-h_1)^{\\mu'*}$ and ${\\cal M}_\\text{II}(h_2)^\\nu{\\cal\n  M}_\\text{II}(-h_2)^{\\nu'*}$ for $h_i = \\pm 1$, are all\npositive~\\cite{delta_phi}. As a result, the sign of $A_2$\ndepends only on the sign of the pair production interference terms\n${\\cal M}_\\text{X}^{\\mu\\nu}(h,h){\\cal M}_\\text{X}^{\\mu'\\nu'*}(-h,-h)$,\nwith $h = \\pm 1$. That is, the sign of $A_2$ depends on the relative\nsign between the matrix element for pair production where the total\nincoming $z$-component of angular momentum is $+2$, and the matrix\nelement where the incoming $J_z = -2$.\n\nAn explicit calculation of these interference terms in the case of the\nfusion of abelian gauge bosons shows that, for the production of\nscalars, these interference terms are overall positive, while for\nfermion production, the terms are overall\nnegative~\\cite{matt_michael}. We can now repeat this calculation in\nthe case of QCD-coupled heavy quarks~\\cite{kaoru} or squarks. The\nresults are made more clear by multiplying the matrix elements in\nframe~X by polarization vectors for the virtual gluons $q_1$ and\n$q_2$, treating them as approximately on-shell. Recalling that\npositive helicity for both gluons is defined relative to the $z$-axis,\nrather than relative to the gluon momentum, both sets of polarization\nvectors can be written as $\\epsilon_{1\/2}^\\pm = \n(0,1,\\pm i,0)\/\\sqrt{2}$.\\bigskip\n\nWe begin with the fermionic case.
For top pairs, the relevant\nproduction matrix elements times polarization vectors in frame~X are\n\\begin{alignat}{5}\n\\left[{\\cal M}^{\\mu\\nu} _\\text{X}(h,h)\\right]^{s,s} \\epsilon_\\mu(h)\\epsilon_\\nu(h) = & \n- \\; \\; \\; ig_s^2 \\; 2s \\; \n\\left( \\{T^a,T^b\\}+\\beta\\cos\\Theta [T^a,T^b] \\right) \\; \n\\beta \\sqrt{1-\\beta^2} \\; \\frac{\\sin^2\\Theta}{1-\\beta^2\\cos^2\\Theta} \\notag \\\\\n\\left[{\\cal M}^{\\mu\\nu} _\\text{X}(h,h)\\right]^{s,-s} \\epsilon_\\mu(h)\\epsilon_\\nu(h) = & \n- h \\; ig_s^2 \\; 2s \\; \n\\left(\\{T^a,T^b\\}+\\beta\\cos\\Theta [T^a,T^b] \\right) \\; \n\\beta \\qquad \\sin\\Theta \\frac{1- 2s h\\cos\\Theta}{1-\\beta^2\\cos^2\\Theta} \\; . \n\\label{eq:fermionspin}\n\\end{alignat}\nThe angle $\\Theta$ is defined in Figure~\\ref{fig:kinematics}. The\nsuperscripts $s,s$ or $s,-s$ for $s= \\pm 1\/2$ denote the helicities of\nthe top and anti-top, measured relative to each of their momenta. In terms of the total\nproduction energy $\\hat{s}$ the\nvelocity of the top and anti-top $\\beta$ is $\\beta= \\sqrt{1-4m^2\/\\hat{s}}$.\n\nNotably, the matrix elements for production of a $t\\bar{t}$ pair with\nthe same helicity assignments Eq.~\\eqref{eq:fermionspin} do not have\nthe property that ${\\cal M}_X(+1,+1) \\times {\\cal M}_\\text{X}(-1,-1)^*\n< 0$, contrary to our expectations. However, the signs of the $s,-s$\nmatrix elements with opposite helicity are manifestly asymmetric, as ${\\cal\n M}_\\text{X}^{s,-s}(h,h) \\propto h$, so this product is indeed\nnegative. The fact that one term is not clearly negative could be\nconcerning for our argument, but by inspection it is clear that the negative\nterms are strictly larger in magnitude than the positive\ncontributions. It is possible that the $\\beta$ dependence of the\n$A_2$ term could be useful in an experimental analysis. 
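These expressions make the sign argument easy to verify numerically. The sketch below, with color and coupling factors stripped, sums the interference products $\mathcal{M}_\text{X}(+,+)\mathcal{M}^*_\text{X}(-,-)$ of Eq.~(\ref{eq:fermionspin}) over the final-state helicities, and evaluates the scalar analogue of Eq.~(\ref{eq:scalarM}) below for comparison:

```python
import numpy as np

def fermion_interference(beta, theta):
    """Re sum_spins M_X(+,+) M_X*(-,-) for t tbar production, from
    Eq. (eq:fermionspin) with color and coupling factors stripped."""
    D = 1 - (beta * np.cos(theta))**2
    total = 0.0
    for s in (0.5, -0.5):
        # (s, s) final-state helicities: independent of the gluon helicity h
        m_same = -1j * 2*s * beta * np.sqrt(1 - beta**2) * np.sin(theta)**2 / D
        total += (m_same * np.conj(m_same)).real
        # (s, -s) final-state helicities: proportional to h
        def m_opp(h):
            return -h * 1j * 2*s * beta * np.sin(theta) * (1 - 2*s*h*np.cos(theta)) / D
        total += (m_opp(+1) * np.conj(m_opp(-1))).real
    return total

def scalar_interference(beta, theta):
    """Same product for stop pairs, Eq. (eq:scalarM): h-independent, hence positive."""
    D = 1 - (beta * np.cos(theta))**2
    m = 1j * beta**2 * np.sin(theta)**2 / D
    return (m * np.conj(m)).real
```

Carrying out the fermionic sum analytically gives $-2\beta^4\sin^4\Theta/(1-\beta^2\cos^2\Theta)^2$ (our evaluation): the positive same-helicity pieces carry an extra factor of $1-\beta^2$ and never overcome the negative ones, and the $\beta^4$ scaling is the $\beta$ dependence referred to above.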
Cuts placed on\nthe top decay products could be used to enhance particular ranges of $\\beta$~\\cite{kaoru},\nenhancing or suppressing the interference effect and providing useful \nside-bands. We will not further investigate this possibility in this paper. \n\\bigskip\n\nTurning to the stop pair production, the relevant matrix elements are\n\\begin{equation}\n{\\cal M}_\\text{X}^{\\mu\\nu}(h,h)\\epsilon_\\mu(h)\\epsilon_\\nu(h) = \nig_s^2 \\; \n\\left(\\{T^a,T^b\\}+\\beta\\cos\\Theta [T^a,T^b] \\right) \\; \n\\frac{\\beta^2\\sin^2\\Theta}{1-\\beta^2\\cos^2\\Theta} \\; .\n\\label{eq:scalarM}\n\\end{equation}\nClearly this does not depend on the gluon helicities $h$, and so the\ninterference terms are positive. This results in a positive $A_2$ term\nfor stop pair production, and thus, the sign of $A_2$ can be used to\ndistinguish the production of scalar stops and fermionic tops. Note\nthat these two calculations only demonstrate that the top and stop\ndistributions will have opposite signs of their $A_2$ components,\nwithout addressing the relative magnitudes. To answer that question,\nwe must turn to Monte Carlo simulation.\n\n\\section{Simulating VBF (S)Tops}\n\\label{sec:simulation}\n\n\\begin{figure}[b!]\n\\includegraphics[width=0.245\\textwidth]{.\/matching_pt_0.pdf}\n\\includegraphics[width=0.245\\textwidth]{.\/matching_pt_1.pdf}\n\\includegraphics[width=0.245\\textwidth]{.\/matching_pt_2.pdf}\n\\includegraphics[width=0.245\\textwidth]{.\/matching_pt_3.pdf}\n\\caption{Normalized $p_T$ distributions of the four leading jets in\n the merged $t\\bar{t}$ samples, with {\\tt xqcut}=20 (black), 40 (red),\n and 60~GeV (blue). We use anti-$k_T$ jets with $\\Delta R = 0.5$,\n and require $p_T> 20$~GeV and $|\\eta_j|<5$.}\n\\label{fig:qcut}\n\\end{figure} \n\nIn order to extract information on the spin of the heavy top or stop\nparticles from tagging jets we need to ensure that our simulation\nkeeps all relevant spin correlations. 
Naively, this can be guaranteed by\ngenerating events for the hard processes $\\st{}\\st{}^* jj$ and\n$t\\bar{t}jj$~\\cite{skands,matt_michael,kaoru}.\nHowever, the transverse momentum of the tagging jets will often be\nsignificantly below the energy scale of this hard\nprocess. In that region of phase space, for example, the transverse momentum \nspectrum of jet radiation is only properly\ndescribed once we include the parton shower or other implementation of Sudakov factors. In standard\nshowering algorithms the\nprobabilistic parton shower is (usually) averaged over the helicities\nof the participating partons. In such simulations, any apparent spin\ncorrelation between the hard process and the tagging jets --or between\nthe tagging jets themselves-- comes only from kinematic\nconstraints~\\cite{skands}, rather than from a combination of kinematics and underlying \ninterference effects. What we need is a merged description\nof the parton shower and the hard matrix element, where the tagging jets are\ngenerated through the matrix element.\\bigskip\n\nTo that end, we consider two benchmark parameter points for stop pair\nproduction followed by a decay into a top and a missing energy particle,\n\\begin{alignat}{5}\npp \\to \\st{} \\st{}^* \\to (t \\nz{}) \\; (\\bar{t} \\nz{}) \n\\qquad \\qquad \n(\\mst,\\mne{}) =\n\\begin{cases} (175, 1)~{\\ensuremath\\rm GeV} \\\\ (200,25)~{\\ensuremath\\rm GeV} \\; .\\end{cases}\n\\end{alignat}\nThe invisible particles coming from a\nprompt decay can be a neutralino or a gravitino.
As we are not\nclosely investigating the stop and top decay patterns we will refer to the\ngeneric missing energy particle as $\\nz{}$.\n\nFor the background and each signal benchmark we generate events for\nthe pair production of stops and tops at the 14 TeV LHC with up to\nthree extra jets in {\\tt MadGraph5}~\\cite{mg5,mlm}, matching the\njets to {\\tt Pythia6}~\\cite{pythia} and using anti-$k_T$\njets with $R=0.5$~\\cite{fastjet} down to a matching scale {\\tt\n xqcut}=20~GeV. This choice (endorsed by the {\\tt MadGraph} authors \\cite{madgraphonline})\nensures that the spin correlations in\nthe tagging jets are kept, provided the two tagging jets are chosen\nfrom the three leading jets that do not originate from top decay. We will compare these results to\nunmatched hard $t\\bar{t}jj$ and $\\st{} \\st{}^*jj$\nevents~\\cite{kaoru}. In this section we do not keep\ntrack of the top and stop decays. The two tagging jets are the two\nhardest jets which fulfill all $p_T$ and $\\Delta \\eta$\nrequirements.\\bigskip\n\nIn order to ensure that all final state jets in {\\tt MadGraph5} are\ngenerated by the matrix element and hence include all spin and angular\ncorrelations, we can move the matching scale to values below the\ntransverse momenta for all potential tagging jets, {\\tt xqcut}$<\np_{T,j}$.\\footnote{We have confirmed that for events with {\\tt\n xqcut}$> p_{T,j}$ the correlations between the tagging jets in\n {\\tt MadGraph} are indeed lost.} While this choice will hugely\ndecrease the efficiency of the event generation, because a very large\nfraction of events will be vetoed to generate the Sudakov suppression,\nit will ensure that our events include all the necessary\ninformation. 
Because the matching scale is not a physical parameter,\nit can be varied within a reasonable range, where we will see that the\ndefinition of `reasonable' is different for kinematic distributions\nand the total rate.\n\n\\begin{figure}[t]\n\\includegraphics[width=0.245\\textwidth]{.\/pythia_dphi1.pdf}\n\\includegraphics[width=0.245\\textwidth]{.\/pythia_dphi2.pdf}\n\\includegraphics[width=0.245\\textwidth]{.\/pythia_dphi3.pdf}\n\\includegraphics[width=0.245\\textwidth]{.\/pythia_dphi4.pdf}\n\\caption{Normalized $\\Delta\\phi$ distributions for the two\n highest-$p_T$ forward jets at parton level, requiring $\\Delta \\eta_{jj} >\n 1,2,3,4$. We show top pairs (blue) and stop pairs (red) matched to three jets,\n as well as the unmatched two-jet samples for tops (cyan) and stops (purple). \n We also show the best fits to the functional form $A_0+A_1\\cos\\Delta\\phi+A_2\\cos\n (2\\Delta\\phi)$. For the stop samples, the $(\\mst,\\mne{}) = (175,1)$~GeV scenario is shown\n with a solid line, while $(\\mst,\\mne{}) = (200,25)$~GeV is shown\n with a dotted line.}\n\\label{fig:partondphi}\n\\end{figure}\n\nBefore we study the spin correlation between the tagging jets we test\nif our choice of the matching scale, {\\tt xqcut}=20~GeV,\nleads to stable and consistent results. To this end we show the\n$p_T$ distributions for the first four jets for top pair production in\nFigure~\\ref{fig:qcut}. This distribution directly probes the Sudakov\nsuppression and should therefore be most sensitive to artifacts from\nthe choice of the matching scale. We vary the matching scale from\n20~GeV to 40~GeV and the default value of 60~GeV. 
We see that the\ndistributions are essentially indistinguishable between the three\nsamples over the entire range of $p_T$, so our choice of scales does \nnot present any problems for the tagging jet distributions.\n\nOn the other hand, the combined cross sections from {\\tt MadGraph}\nshow a wider variation, with $\\sigma_{t\\bar{t}} = 2.9,~ 1.3,$ $0.94$,\nand $0.71$~nb for {\\tt xqcut}=20, 40, 60, and 100~GeV. Given that\nmulti-jet merging is based on a combination of leading order matrix\nelements and a leading logarithmic parton shower, this variation\nreflects the uncertainty of a leading order cross section with four\npowers of $\\alpha_s$. For smaller values of {\\tt xqcut} we include\nmore and more real emission as described by the full matrix element,\nbut only compensated for by approximate virtual corrections in the\nSudakov factor. If we apply an external normalization of the total\nproduction rate, for example to the precision predictions shown in\nTable~\\ref{tab:xsection}, we can use the {\\tt MadGraph} event samples\nwith the matching scale of 20~GeV to accurately simulate the\nproduction of top or stop pairs plus jets.\\bigskip\n\nWe can now consider the distribution of forward jets in top or stop events. \nIn this section, we will focus on confirming\nthe existence and the sign of the $A_2$ terms, as derived from the interference pattern described in \nSection~\\ref{sec:spin}. Moreover, we need to test whether our event generation \nindeed captures all relevant physics.\nTo be independent of the details of the top decay,\nwe use Monte Carlo truth to distinguish between associated\njets and those from top decay. 
For specific top decays it should be\nstraightforward to distinguish between ISR jets and decay jets, as has been shown\nfor direct production of supersymmetric particles~\\cite{susy_isr}, for \nweak--boson--fusion pair production of supersymmetric\nparticles~\\cite{wbf_isr}, and for sgluon pair\nproduction~\\cite{sgluon_isr}, as we will demonstrate shortly. We then place selection criteria on\nour 3-jet matched or 2-jet unmatched samples in order to isolate VBF-type production from all other\ndiagrams that generate two or more jets in association with stops or\ntops. Adapting the criteria used for WBF Higgs\nselection~\\cite{delta_phi,wbf_isr}, we begin by requiring at\nleast two parton--level jets in the merged sample with\n\\begin{equation}\np_{T,j} > 20~{\\ensuremath\\rm GeV}, \\qquad \\qquad \\qquad\n|\\eta_j|<5, \\qquad \\qquad \\qquad \n\\Delta \\eta_{jj} > 1, 2, 3, 4 \\; .\n\\label{eq:vbf_cuts}\n\\end{equation}\nThe increasing rapidity separation should emphasize the VBF-induced \nangular correlations between the tagging jets~\\cite{higgs_spin}.\nMore realistic selection criteria will be put in place once we include\na fast detector simulation in Section~\\ref{sec:searches}.\\bigskip\n\n\\begin{table}[t]\n\\begin{footnotesize}\n\\begin{tabular}{ll|c|c|c|c|c|c|c|c} \\hline\n& & \\multicolumn{2}{c|}{$|\\Delta \\eta_{jj}|>1$} & \\multicolumn{2}{c|}{$|\\Delta \\eta_{jj}|>2$} \n & \\multicolumn{2}{c|}{$|\\Delta \\eta_{jj}|>3$} & \\multicolumn{2}{c}{$|\\Delta \\eta_{jj}|>4$} \\\\ \n& & $A_1\/A_0$ & $A_2\/A_0$ & $A_1\/A_0$ & $A_2\/A_0$ & $A_1\/A_0$ & $A_2\/A_0$ & $A_1\/A_0$ & $A_2\/A_0$ \\\\ \\hline \n\\multirow{2}{*}{$t\\bar{t}$} \n& 2-jet & $-0.016\\pm0.03$ & $+0.005\\pm 0.001$ & $-0.07\\pm0.01$ & $-0.021\\pm0.004$ & $-0.08\\pm0.01$ & $-0.035\\pm0.006$ & $-0.07\\pm0.01$ & $-0.05\\pm0.01$ \\\\\n& 3-jet & $-0.08\\pm0.01$ & $+0.009\\pm0.002$ & $-0.13\\pm0.02$ & $-0.018\\pm0.003$ & $-0.13\\pm0.02$ & $-0.048\\pm0.008$ & $-0.12\\pm0.02$ & $-0.07\\pm0.01$ \\\\ 
\\hline\n$\\st{}\\st{}^*$\n& 2-jet & $-0.0023\\pm0.0003$ & $+0.07\\pm0.01$ & $-0.06\\pm 0.01$ & $+0.08\\pm 0.01$ & $-0.07\\pm0.01$ & $+0.12\\pm0.02$ & $-0.06\\pm0.02$& $+0.15\\pm0.02 $ \\\\ \n(175,1)\n& 3-jet & $-0.07\\pm0.01$ & $+0.10\\pm0.02$ & $-0.12\\pm0.02$ & $+0.12\\pm0.02$ & $-0.12\\pm0.02$ & $+0.18\\pm0.03$ & $-0.11\\pm0.02$ & $+0.25\\pm0.04$ \\\\ \\hline\n$\\st{}\\st{}^*$\n& 2-jet & $+0.007\\pm0.001$ & $+0.07\\pm0.01$ & $-0.05\\pm0.01$ & $+0.07\\pm0.01$ & $-0.06\\pm 0.01$ & $+0.11\\pm 0.02$ & $-0.05\\pm0.01$ & $+0.15\\pm0.02$ \\\\ \n(200,25) \n& 3-jet & $-0.06\\pm0.01$ & $+0.10\\pm0.02$ & $-0.10\\pm0.02$ & $+0.12\\pm0.02$ & $-0.11\\pm0.02$ & $+0.17\\pm0.03$ & $-0.09\\pm0.02$& $+0.24\\pm0.04$ \\\\ \\hline\n\\end{tabular}\n\\end{footnotesize}\n\\caption{Best-fit values for the $\\cos\\Delta\\phi$ and $\\cos\n (2\\Delta\\phi)$ coefficients\n defined in Eq.~\\eqref{eq:diffsigma}. The fits are\n performed at parton level, corresponding to \n Figure~\\ref{fig:partondphi}. The 3-jet matched (2-jet unmatched) top background sample before any\n cuts consists of $1.95 \\times 10^6$ ($3.09 \\times 10^6$) events, the $\\mst =175$~GeV\n stop sample is $5.65 \\times 10^5$ ($6.18 \\times 10^6$) events, and the $\\mst =200$~GeV\n sample is $1.08\\times 10^6$ ($9.24\\times 10^5$) events.}\n\\label{tab:partonvbf}\n\\end{table}\n\nIn Figure~\\ref{fig:partondphi} we plot the normalized $\\Delta\\phi$\ndistributions between the two highest-$p_T$ parton--level tagging jets\ndefined in the laboratory frame, requiring $\\Delta \\eta_{jj} > 1$,\n2, 3, and 4 in the successive panels. As can be seen, there is a clear difference between the\ntagging jet correlations from stop and top events, corresponding to\nthe sign of the $\\cos (2\\Delta\\phi)$ term. 
It induces a clearly\nvisible minimum in the stop sample around $\\Delta \\phi = \\pi\/2$, especially\nnoticeable when compared to the slight excess here in the top sample.\nTop pairs are dominated by a slight preference\nfor back-to-back tagging jets.\n\nWithout a $\\Delta \\eta_{jj}$ cut, the non-trivial azimuthal dependence\nwould be highly suppressed. This is expected, since central jets do\nnot predominantly come from the ISR diagrams and do not reflect\ninformation about the helicity of fusing gluons through interference\npatterns in our reference frame. As we enforce increasingly large $\\Delta\\eta_{jj}$ cuts we\nsee a finite $\\cos (2\\Delta\\phi)$ component develop in both the top\nand stop samples; with the appropriate signs for fermionic and scalar\npairs.\n\\bigskip\n\nIn Table~\\ref{tab:partonvbf}, we show the relative size of the\n$\\cos\\Delta\\phi$ ($A_1$) and $\\cos (2\\Delta\\phi)$ ($A_2$) modes for\nthe top background and stop benchmark points, normalized to the\nconstant term $A_0$. The coefficients are obtained from the normalized\nten--bin histograms at parton level, using the standard {\\tt ROOT} fitting\nalgorithm. It is apparent that the non-trivial $A_2$ term is present\nin the unmatched two-jet sample, and survives after the addition of a\nthird jet in the matching scheme. The magnitude of the $A_1$ term\nsignificantly increases for the matched samples. \n\nComparing the events with three merged jets and the events with\nonly two hard jets we see that the merged sample shows an\nadditional shift towards larger azimuthal tagging jet separation. The\nreason is that with a third jet recoiling against the hard top or stop\npair system we now have a choice to pick the two tagging jets. We\nsystematically bias the selection towards an effectively larger\n$\\Delta \\eta_{jj}$ separation translating into more back-to-back\ntagging jets. 
However, this shift mostly affects the $\\cos \\Delta\n\\phi$ distribution, while the critical $\\cos (2\\Delta\\phi)$ mode is\nsymmetric around $\\Delta \\phi = \\pi\/2$ and therefore just slightly\ntilted. The fact that for top pair production the kinematic effect\nfrom additional jet radiation looks similar to the $\\cos \\Delta \\phi$\nmode from spin correlations explains the surprising finding of \nRef.~\\cite{skands} that the parton shower simulation seems to capture\nsome of the expected spin correlations even though it should not.\n\nThe size of $A_2$ is only slightly affected by the\ndifferent simulation approaches shown in Table~\\ref{tab:partonvbf}, {\\sl i.e.} \\, \nthe theory-driven unmerged 2-jet setup and the more realistic merged\n3-jet case. If anything, the effect in $\\cos (2\\Delta\\phi)$ is more \npronounced in the multi-jet case, contrary to what is observed as \nazimuthal decorrelation in 2-jet production. The two stop mass benchmarks are\nconsistent with each other. Already for $\\Delta \\eta_{jj} >2$ we\nobserve the expected sign difference between the fermionic and scalar\nprocesses. 
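The coefficients quoted in Table~\ref{tab:partonvbf} come from fits of the three-mode ansatz $A_0+A_1\cos\Delta\phi+A_2\cos(2\Delta\phi)$ to binned $\Delta\phi$ distributions. A minimal sketch of such an extraction is given below; it uses a plain least-squares fit in place of the {\tt ROOT} fit quoted in the text, and the ten-bin toy histogram with stop-like coefficients is an illustrative assumption, not one of the actual event samples:

```python
import numpy as np

def fit_dphi_modes(counts):
    """Least-squares fit of A0 + A1*cos(dphi) + A2*cos(2*dphi) to a
    binned Delta-phi distribution with bins spanning [0, pi].
    Returns the normalized mode strengths (A1/A0, A2/A0)."""
    counts = np.asarray(counts, dtype=float)
    nbins = len(counts)
    centers = (np.arange(nbins) + 0.5) * np.pi / nbins  # bin centers
    design = np.column_stack([np.ones(nbins),
                              np.cos(centers),
                              np.cos(2.0 * centers)])
    a0, a1, a2 = np.linalg.lstsq(design, counts, rcond=None)[0]
    return a1 / a0, a2 / a0

# toy ten-bin histogram with a stop-like positive cos(2*dphi) mode
centers = (np.arange(10) + 0.5) * np.pi / 10
toy = 1.0 - 0.07 * np.cos(centers) + 0.10 * np.cos(2.0 * centers)
r1, r2 = fit_dphi_modes(toy)  # recovers (-0.07, +0.10)
```

Since the three modes are linearly independent on $[0,\pi]$, the fit recovers the input ratios $A_1/A_0=-0.07$ and $A_2/A_0=+0.10$ exactly, mimicking the sign pattern of the stop entries in Table~\ref{tab:partonvbf}.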
It will become an experimental issue how wide a\nrapidity separation of the two tagging jets is needed to extract the\nmost information with a limited sample size.\n\n\\section{Stop Searches}\n\\label{sec:searches} \n\n\\begin{table}[b!]\n\\begin{tabular}{ll|c|c|c|c}\n\\hline\n & & \\multicolumn{2}{c|}{$|\\eta_j|<2.5,~|\\Delta \\eta_{jj}|>2$} & \\multicolumn{2}{c}{$|\\eta_j|<4.5,~|\\Delta \\eta_{jj}|>3$} \\\\\n & & di-leptonic & semi-leptonic & di-leptonic & semi-leptonic \\\\ \\hline \n\\multirow{5}{*}{$t\\bar{t}$} & leptons & 3.2\\% & 29\\% & 3.2\\% & 29\\%\\\\\n & +$b$-tag \\& jets & 0.17\\% & 0.98\\% & 0.23\\% & 1.5\\%\\\\\n & +$W$-mass & -- & 0.19\\% & -- & 0.25\\%\\\\\n & +$|\\Delta \\eta|$ & 0.053\\% & 0.066\\% & 0.061\\% & 0.064\\%\\\\\n & Final $\\sigma$ & 505~fb & 629~fb & 582~fb & 610~fb \\\\ \\hline\n\\multirow{5}{*}{$\\st{} \\st{}^*$ (175,1)} & leptons & 3.3\\% & 29\\% & 3.3\\% & 29\\%\\\\\n & +$b$-tag \\& jets & 0.14\\% & 0.87\\% & 0.19\\% & 1.3\\%\\\\\n & +$W$-mass & -- & 0.17\\% & -- & 0.23\\%\\\\\n & +$|\\Delta \\eta|$& 0.041\\% & 0.060\\% & 0.048\\% & 0.058\\% \\\\\n & Final $\\sigma$ & 55~fb & 81~fb & 65~fb & 78~fb \\\\ \\hline\n\\multirow{5}{*}{$\\st{} \\st{}^*$ (200,25)} & leptons & 3.3\\% & 29\\% & 3.3\\% & 29\\%\\\\\n& +$b$-tag \\& jets & 0.17\\% & 1.1\\% & 0.23\\% & 1.6\\%\\\\\n & +$W$-mass & -- & 0.22\\% & -- & 0.28\\%\\\\\n & +$|\\Delta \\eta|$ & 0.050\\% & 0.076\\% & 0.057\\% & 0.069\\% \\\\ \n & Final $\\sigma$ & 36~fb & 55~fb & 41~fb & 50~fb \\\\ \\hline\n\\end{tabular}\n\\caption{Cumulative efficiencies, including branching\n ratios, after detector-level selection criteria, in both di- and\n semi-leptonic channels. Also shown is the cross section\n after all cuts are applied. The ``leptons'' cut requires two (one) $e$ or $\\mu$\n for the di-lepton (semi-leptonic) channel. 
Two $b$-tagged and two (four)\n or more non-$b$-tagged jets are required to pass ``$b$-tag \\& jets,'' and the semi-leptonic\n $W$-mass reconstruction is defined in the text. The final $|\\Delta \\eta|$ criterion\n is applied for both jet selection criteria as defined\n in Eq.~\\eqref{eq:jet_def}.}\n\\label{tab:efficiencies}\n\\end{table}\n\n\nThe results obtained in the last section at parton level and using Monte--Carlo truth\nclearly demonstrate the analytic argument of Section~\\ref{sec:spin}.\nOnce all helicity information is taken into account and kinematic cuts\nrestrict events to the VBF phase space, the stop events have a\npositive coefficient $A_2$, while the top background has a negative\n$A_2$. However, these results do not yet demonstrate that this\ndifference between scalars and fermions can be used to enhance the\nstop sample among tops in a real experiment. One might worry that the identification of the \ntagging jets, combinatorics, or detector effects could wash out\nthese correlations and make them experimentally invisible.\\bigskip\n\nTo confirm the experimental accessibility of the azimuthal correlation\nas a way to separate top pairs from stop pairs we now hadronize the\nparton level event samples with {\\tt Pythia} and apply the fast\ndetector simulation {\\tt Delphes3}~\\cite{delphes} with\nconfiguration files provided by the Snowmass Energy Frontier\nsimulations~\\cite{snowmass}. Jets\nare clustered using the anti-$k_T$~\\cite{fastjet} algorithm\nwith $R = 0.5$. All decays are included via {\\tt Pythia}, so we do\nnot systematically account for spin correlations and interference\npatterns in the production and decay processes. From the last section\nit is clear that the details of the top and stop decays play no role\nin our analysis, beyond triggering and combinatorial challenges. In\nour analysis we include both semi-leptonic and di-leptonic top pair\ndecays. 
Fully hadronic decays of tops could be added once we resolve\nQCD and combinatorial issues, discussed for example in\nRefs.~\\cite{combinatorics}.\n\nWe generate the equivalent of 4.8~fb$^{-1}$ of 14 TeV LHC data for the\ntop background and both stop signal points. Although this is much less than the\nplanned integrated luminosity of the next stage of LHC running, generating the corresponding full data set\nwould be extremely resource intensive and not essential for the purpose of demonstrating\nthe feasibility of the $\\Delta\\phi$ technique. Indeed, as we will show below, even\nwith only $\\sim 5$ fb$^{-1}$, the interference effect can already make stops visible\nin the top sample, though additional luminosity would be required to improve\nthe statistical significance.\\bigskip\n\nDepending on the assumed decay channel we require one or two leptons\n($e$ or $\\mu$), which must satisfy \n\\begin{alignat}{5}\np_{T,\\ell} > 20~{\\ensuremath\\rm GeV} \\quad \\text{and} \\quad |\\eta_\\ell|<2.5 \\; . 
\n\\end{alignat}\nRegardless of the selection criteria of forward\njets, we require exactly two $b$-tagged jets \nwith \n\\begin{alignat}{5}\np_{T,b}> 50~{\\ensuremath\\rm GeV} \\quad \\text{and} \\quad |\\eta_b|<2.5 \\; ,\n\\end{alignat}\nusing the {\\tt Delphes3} efficiency of approximately 70\\% per $b$-tag.\nFor the upcoming 14~TeV runs of the LHC, where pile-up and jet energy\ncalibration might be an issue, we follow two potential choices for the\njet requirements,\n\\begin{alignat}{5}\n(1) \\qquad p_{T,j} &> 20~{\\ensuremath\\rm GeV} \\quad \\mbox{and} \\quad |\\eta_j| <2.5 \\notag \\\\\n(2) \\qquad p_{T,j} &> 20~{\\ensuremath\\rm GeV} \\quad \\mbox{and} \\quad |\\eta_j| <4.5 \\; .\n\\label{eq:jet_def}\n\\end{alignat} \nWhile the conservative assumption will prove to be sufficient to\nreveal the presence of degenerate stops, including tagging jets to\n$|\\eta|<4.5$ will improve the physics reach in\nthis type of search.\n\nFor the di-leptonic channel, we require two or more light-flavor\njets. In the semi-leptonic channel we require four or more jets. Due\nto limited statistics, in the di-leptonic channel we do not subdivide the events \ninto different lepton flavor\ncombinations, though this could be useful for a full experimental\nanalysis. Similarly, a full experimental analysis might find it\nuseful to include a systematic multi-jet analysis for tagging jets as well\nas decay jets~\\cite{moments_wbf},\nbut in this paper we limit ourselves to the cleanest possible\nsignature.\n\nTo differentiate the $W$-decay jets from the VBF tagging jets in the\nsemi-leptonic channel, we suggest the following reconstruction\nalgorithm: of all pairs of central ($|\\eta_j|<1$) jets passing\na staggered cut $p_{T,j} > 60,30$~GeV we take the pair with an invariant mass\nclosest to $m_W$. If an event has such a pair of jets and their\ninvariant mass is within 30~GeV of the $m_W$, it is retained for the\nVBF selection criteria. 
The two highest-$p_T$ QCD jets remaining must then have an\ninvariant mass of either less than 50~GeV or greater than 100~GeV, to\navoid possible misidentification with the $W$-boson decay\nproducts. This strict set of requirements provides a very clean sample\nof events where the two VBF jets are well separated from all other\nhadronic activity in the detector, though the efficiency is\ncorrespondingly low, and improvements to this algorithm are certainly possible.\n\nThe highest-$p_T$ non-$W$-tagged jets in the semi-leptonic sample and the\nhighest-$p_T$ jets in di-leptonic events are likely to be the two tagging jets, so we apply\nthe $\\Delta\\eta_{jj}$ cut. In the conservative jet selection\nscenario~(1) with $|\\eta_j|<2.5$ we only require $|\\Delta \\eta_{jj}| >\n2$, in order not to cut too deeply into the efficiency. For the more\noptimistic situation~(2) with $|\\eta_j|<4.5$ we can also require a\nlarger jet separation: $|\\Delta \\eta_{jj}| > 3$. From all events\npassing this final cut we construct the $\\Delta\\phi$ distribution. 
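Spelled out in code, the reconstruction chain above (staggered $p_T > 60, 30$~GeV cuts, central $|\eta_j|<1$ $W$ candidates, the 30~GeV mass window, the 50--100~GeV tagging-pair veto, and the final $\Delta\eta_{jj}$ requirement) can be sketched as follows. The massless four-vectors, the tuple event format, and the toy event are illustrative assumptions, not the actual analysis code:

```python
import itertools, math

MW = 80.4  # GeV, W mass used for the pairing

def four_vec(pt, eta, phi):
    """Massless four-vector (E, px, py, pz) from detector coordinates."""
    return (pt * math.cosh(eta), pt * math.cos(phi),
            pt * math.sin(phi), pt * math.sinh(eta))

def inv_mass(j1, j2):
    e, px, py, pz = (a + b for a, b in zip(four_vec(*j1), four_vec(*j2)))
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

def select_tagging_jets(jets, deta_cut=2.0):
    """jets: pt-ordered list of (pt, eta, phi) light-flavor jets.
    Returns the two tagging jets, or None if the event is rejected."""
    # W candidate: central jet pairs passing the staggered pT cut
    central = [j for j in jets if abs(j[1]) < 1.0]
    pairs = [(inv_mass(a, b), a, b)
             for a, b in itertools.combinations(central, 2)
             if max(a[0], b[0]) > 60.0 and min(a[0], b[0]) > 30.0]
    if not pairs:
        return None
    mass, wa, wb = min(pairs, key=lambda p: abs(p[0] - MW))
    if abs(mass - MW) > 30.0:        # 30 GeV window around mW
        return None
    rest = [j for j in jets if j is not wa and j is not wb]
    if len(rest) < 2:
        return None
    tag1, tag2 = rest[0], rest[1]    # hardest non-W jets
    if 50.0 <= inv_mass(tag1, tag2) <= 100.0:  # veto a second W-like pair
        return None
    if abs(tag1[1] - tag2[1]) < deta_cut:      # Delta eta_jj requirement
        return None
    return tag1, tag2

# toy semi-leptonic event: two forward tagging jets plus a central W pair
jets = [(120.0, 2.5, 1.0), (80.0, -1.5, 4.0), (70.0, 0.3, 0.0), (40.0, -0.5, 2.0)]
tags = select_tagging_jets(jets)
```

A full analysis would of course work with calibrated detector-level jets; the sketch only fixes the order of the selection steps.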
The\nfinal efficiencies and effective cross sections for both the di- and\nsemi-leptonic channels are shown in\nTable~\\ref{tab:efficiencies}, including the efficiencies of each cut\nleading up to the final $\\Delta \\eta$ selection.\\bigskip\n\n\\begin{table}[t]\n\n\\begin{tabular}{cc|c|c|c|c} \\hline\n & & \\multicolumn{2}{c|}{$|\\eta_j|<2.5,~|\\Delta \\eta_{jj}|>2$} & \\multicolumn{2}{c}{$|\\eta_j|<4.5,~|\\Delta \\eta_{jj}|>3$} \\\\ \n & & di-leptonic $A_2\/A_0$ & semi-leptonic $A_2\/A_0$ & di-leptonic $A_2\/A_0$ & semi-leptonic $A_2\/A_0$ \\\\ \\hline \n\\multicolumn{2}{c|}{$t\\bar{t}$} & $-0.10 \\pm 0.03$ & $-0.05\\pm 0.03$ & $-0.12\\pm 0.03$ & $-0.08 \\pm 0.03$ \\\\ \\hline\n\\multirow{2}{*}{$\\st{} \\st{}^*$ (175,1)} & $\\st{} \\st{}^*$ only & $+0.20\\pm0.09$ & $+0.10\\pm0.07$ & $+0.16\\pm0.09$ & $+0.18 \\pm 0.07$ \\\\ \n & $\\st{} \\st{}^* + t\\bar{t}$ & $-0.07\\pm0.03$ & $-0.03\\pm0.02$& $-0.09 \\pm 0.03$& $-0.05\\pm 0.02$ \\\\ \\hline\n\\multirow{2}{*}{$\\st{} \\st{}^*$ (200,25)} & $\\st{} \\st{}^*$ only & $+0.22\\pm0.11$ & $+0.03\\pm0.08$ & $+0.18 \\pm 0.11$ & $+0.16\\pm 0.10$ \\\\\n & $\\st{} \\st{}^* + t\\bar{t}$ & $-0.08\\pm0.03$ & $-0.04\\pm0.01$ & $-0.10 \\pm 0.03$ & $-0.06\\pm 0.03$ \\\\ \\hline\n\\end{tabular}\n\n\\caption{Best-fit values for the $\\cos\n (2\\Delta\\phi)$ coefficients $A_2$, normalized to the constant term $A_0$,\n defined in Eq.~\\eqref{eq:diffsigma}, for di-leptonic and semi-leptonic events\n corresponding to 4.8~fb$^{-1}$ of luminosity, after fast detector simulation. \n Fits to the two stop signal points are performed for signal only as well as\n signal plus top background.}\n\\label{tab:delphesvbf}\n\\end{table}\n\nBased on the 4.8~fb$^{-1}$ of simulated signal and background data\n given in Table~\\ref{tab:delphesvbf}, we can extrapolate\nwhat integrated luminosities would be required to observe a significant number\nof stop pair events inside the top sample. 
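A simple way to carry out this extrapolation is to scale the purely statistical fit errors as $1/\sqrt{L}$ and to combine the background-only and signal-plus-background uncertainties in quadrature; the quadrature combination is our assumption, though it reproduces the significances quoted below. The numerical inputs are the di-leptonic, $|\eta_j|<2.5$ entries of Table~\ref{tab:delphesvbf}:

```python
import math

def separation_sigma(a2_bkg, err_bkg, a2_sb, err_sb, lumi_ref, lumi):
    """Statistical separation (in sigma) between the background-only and
    signal-plus-background A2/A0 fits, scaling both uncertainties from
    the reference luminosity lumi_ref to lumi as 1/sqrt(L)."""
    scale = math.sqrt(lumi_ref / lumi)
    combined_err = math.hypot(err_bkg * scale, err_sb * scale)
    return abs(a2_sb - a2_bkg) / combined_err

# di-leptonic channel, conservative |eta_j| < 2.5 selection:
# ttbar fit -0.10 +- 0.03, stop(175,1) + ttbar fit -0.07 +- 0.03 at 4.8/fb
sig25 = separation_sigma(-0.10, 0.03, -0.07, 0.03, 4.8, 25.0)    # ~1.6 sigma
sig100 = separation_sigma(-0.10, 0.03, -0.07, 0.03, 4.8, 100.0)  # ~3.2 sigma
```

With these inputs the scaling gives roughly $1.6\sigma$ at 25~fb$^{-1}$ and $3.2\sigma$ at 100~fb$^{-1}$, consistent with the estimates discussed in the text.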
Clearly, the statistical errors from 5~fb$^{-1}$\nof integrated luminosity would be too large to make any statement,\nas the difference between the background distribution and the background\nplus signal is comparable to the fit uncertainties. \n\nHowever, by taking the central fit values of the $d\\sigma\/d\\Delta\\phi$ differential\ndistribution as the `true' parameter values, we can determine the statistical \npower for a given amount of data. \nThe luminosity from the first year of LHC14 running is expected to be around 25~fb$^{-1}$. \nThis data set would reduce the statistical errors on the $A_2\/A_0$ parameter to \napproximately $1\\%$. This would allow a $\\sim 1.5\\sigma$ statistical\ndifferentiation between background and background plus signal for 175~GeV\nstops in the di-leptonic channel ($\\sim 1\\sigma$ for 200~GeV stops) in the current detector \nconfiguration, and somewhat less in the semi-leptonic channel. With improved jet tracking in the forward region, this might be\nimproved to $1.7\\sigma$ with a year's luminosity. With a data set of 100~fb$^{-1}$, \n$3.2\\sigma$ observation would be possible in both channels for 175~GeV\nstops, and a $2\\sigma$ separation for 200~GeV stops, assuming the\nconservative $|\\eta|$ requirements. This would be improved to\n$3.7\\sigma$ for 175 GeV ($2.4\\sigma$ for 200 GeV) stops assuming the\ndetector performance allows for $|\\eta| <4.5$ in the tagging jets.\n\nSuch statements do not include systematic errors, which are clearly of concern for an observable \nso dependent on jet reconstruction and identification. However, analysis of tagging jets\nhas already been proven to work in Higgs studies with the 8~TeV run.\nMoreover, as noted in this paper, several handles are available to allow experimental \ncontrol of these issues. 
The signal will be visible in \nboth semi-leptonic and di-leptonic decays, and with sufficient luminosity the\ndi-leptonic channel could be further broken down into the different flavor\ncombinations. The turn-on of the non-trivial $A_2$ signal as the $\\Delta \\eta$\ncut is instituted provides an important cross check, and it is possible\nthat selection cuts intended to isolate the $\\beta$ dependence~\\cite{kaoru} of the\ntop and stop signals will also define useful side-bands.\n\n\n\\section{Conclusions}\n\\label{sec:conclusion}\n\nDuring the first LHC run, tagging jets have been shown to be powerful tools\nin observing Higgs decays to photons, $W$-bosons, and tau leptons. In the \ncoming LHC runs with almost twice the collider energy their role will become\neven more pronounced, also reaching beyond Higgs analyses.\nSimilar to the spin and CP studies based on weak--boson--fusion Higgs\nevents~\\cite{delta_phi,higgs_spin}, we can test top quark properties\nin top pair production with two forward\njets~\\cite{kaoru,matt_michael}. This tagging jet analysis has the general \nadvantage that it does not rely on the reconstruction of the hard process, in our\ncase the top pair. Instead, we can use the dependence on the azimuthal angle\n$\\Delta\\phi$ between the tagging jets to search for non-standard events in the top\nsample at the LHC. Specifically, the coefficient $A_2$ of the $\\cos (2 \\Delta \\phi)$ \nterm in the distribution is negative for top pair production, whereas \nlight scalar top pairs will give a significant positive\ncontribution to this observable.\\bigskip\n\nWe first showed how the different signs can be understood in terms of\nthe gluon helicity combinations contributing to the total rate. We\nthen established and tested a non-standard {\\tt MadGraph5} setup which\nallows us to simulate events with all angular correlations between the\nISR tagging jets intact. 
Using this modified generation tool we showed\nthat the precision on the extraction of $A_2$ increases with the\nrapidity separation of the tagging jets. We also saw that the $A_2$\nmode is not sensitive to the details of the ISR tagging jet simulation\nand the model parameters in the stop decays. Finally, we estimated\nthat such an analysis should give $>3\\sigma$ results in multiple channels\nwith around 100~inverse femtobarns of data at a 14~TeV LHC. Because the analysis is purely\nbased on the tagging jets it can be generalized to any hard process in\nand beyond the Standard Model.\\bigskip\n\n\\begin{center}\n{\\bf Acknowledgments}\n\\end{center}\n\nWe would like to thank Stefan Prestel for checking that our {\\tt\n MadGraph5} simulation makes sense. MB would like to\nthank Maria Spiropulu, Joe Lykken, Yuri Gershtein, and John-Paul Chou\nfor helpful discussions and resources. MB and MJRM thank the Aspen\nCenter for Physics, where this project was originally conceived,\nwhile TP foolishly skipped the workshop. Finally, TP would like to \nthank Frank Krauss for deep insights into azimuthal decorrelation. This work was supported in part by U.S. Department of Energy contract DE-SC0011095 (MJRM).\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\n\n\nThe very first stage in a high-energy heavy-ion collision is dominated by extremely strong {\\it chromo-electromagnetic} (chromo-EM) fields reflecting colliding nuclei filled with high-density gluons (color glass condensate). Such a state with strong fields is called a ``glasma,'' so named because it is a transitional state between a color {\\it glass} condensate (before the collision) and a quark-gluon {\\it plasma} (QGP)~\\cite{Lappi:2006fp}. The glasma is characterized by a field strength ${\\cal F}$ of the order of the saturation scale: $g{\\cal F}\\sim Q_{s}^2$ (with $g$ being the QCD coupling). 
Notice that the saturation scale $Q_s$ is a semihard scale representing a typical transverse momentum of gluons in a colliding nucleus and can become large enough, at high energies, compared to light quark masses $Q_s\\gg m_q$. Besides, it has long been known that heavy-ion collisions, with electrically charged nuclei, are accompanied by {\\it electromagnetic} (EM) fields, but only recently was it seriously recognized that the strong EM fields could affect time evolution of heavy-ion collision events since the strength $F$ of the EM fields could be as large as or even greater than the nonperturbative QCD scale $\\Lambda_{\\rm QCD}$, namely $eF\\simge \\Lambda_{\\rm QCD}^2$ and thus $eF\\gg m_q^2$ \\cite{Kharzeev:2007jp, Skokov:2009qp, Bzdak:2011yy, Deng:2012pc}. Since both the chromo-EM and EM fields created in heavy-ion collisions can be strong enough compared with the light quark masses, the effects of strong fields cannot be treated as perturbation (even though the coupling constants are small), but must be treated in a nonperturbative way. Then we expect nonlinear and nonperturbative phenomena associated with the strong fields to occur. Typical examples of such phenomena include particle productions (quarks, antiquarks and gluons) from these strong fields (the Schwinger mechanism), which must be a key towards understanding the formation of QGP.\n\n\nWhile the (coherent) chromo-EM fields will disappear as the QGP is formed, the EM fields could survive longer due to Faraday's law, which works in the presence of a conducting medium~\\cite{Tuchin:2013ie, Gursoy:2014aka}. If the EM fields survive at a strong enough level until the formation of QGP, and even until the end of the QGP's lifetime, we need to describe the QCD phase transition \nwith the effects of strong EM fields taken into account. 
Notice that the effects of strong {\\it magnetic} fields on thermodynamical or fundamental quantities of QGP can be investigated in lattice QCD simulations, and are indeed found to be large. For example, at zero temperature, lattice QCD simulations confirmed the ``magnetic catalysis'' as predicted in several effective models \\cite{Gusynin:1994re, Gusynin:1994xp, Kashiwa:2011js, Gatto:2010pt, Kamikado:2013pya, Cohen:2007bt, Andersen:2012dz, Andersen:2012zc} in which the value of chiral condensate increases with increasing magnetic field strength. On the other hand, at finite temperature, lattice QCD simulations almost at the physical point concluded \\cite{Bali:2012zg, Bali:2011qj} that the magnetic catalysis does not necessarily occur at all the temperature regions, but rather gets weakened and even shows opposite behavior with increasing temperature. Such behavior of the chiral condensate around the critical temperature is called ``magnetic inhibition'' \\cite{Fukushima:2012kc} or ``inverse magnetic catalysis'', which eventually gives rise to decreasing critical temperature.\nFor recent reviews on the phase diagram of chiral phase transitions in strong magnetic fields, see, e.g., Refs.~\\cite{Andersen:2014xxa, Miransky:2015ava}.\nFurthermore, it is reported~\\cite{Bruckmann:2013oba} that the (pseudo)critical temperature of the confinement-deconfinement phase transition (for the Polyakov loop) also decreases with increasing magnetic field. This is achieved by increasing Polyakov loop expectation values. Probably, these two phenomena are related to each other. 
However, so far, there is no clear explanation of the physical mechanism behind this (for recent attempts, see Refs.~\\cite{Kojo:2012js, Kojo:2013uua} and \\cite{Braun:2014fua, Mueller:2015fka}).\n\n\nWe can investigate these two aspects, namely the nonlinear and nonperturbative dynamics of strong fields (including particle production) and the phase transition under strong external fields, within a single framework of an effective action. So far, effective actions for QED and QCD in various external conditions have been extensively explored. First of all, Euler and Heisenberg derived a nonlinear effective action for constant EM fields at the electron's one-loop level, known as the Euler-Heisenberg (EH) action~\\cite{Heisenberg:1935qt}. Later, Schwinger reproduced the same action in a field-theoretical manner, which is the so-called Schwinger proper time method~\\cite{Schwinger:1951nm}. The EH action at finite temperature is computed in imaginary time formalism~\\cite{Dittrich:1979ux, Gies:1998vt} as well as in real time formalism~\\cite{Cox:1984vf, Loewe:1991mn}. Furthermore, an analog of the EH action in QCD (for chromo-EM fields) has also been evaluated with a similar method at zero and finite temperatures \\cite{Savvidy:1977as, Matinyan:1976mp, Nielsen:1978rm, Leutwyler:1980ma, Schanbacher:1980vq, Dittrich:1983ej, Cea:1987ku, Cho:2002iv, Dittrich:1980nh, Gies:2000dw}. Lastly, the most recent progress was to compute the EH action at zero temperature when both the EM and chromo-EM fields are present, which was done by one of the authors and B.~V.~Galilo and S.~N.~Nedelko independently~\\cite{Ozaki:2013sfa, Galilo:2011nh}. The author of Ref.~\\cite{Ozaki:2013sfa} used this effective action to investigate the QCD vacuum (gluon condensate) in the presence of strong magnetic fields. Though all of these are about the effective action for strong fields and chromo-EM condensates, it should be possible to include the Polyakov loop at finite temperature. 
\nIndeed, an effective action (or potential) for the Polyakov loop at the one-loop level was computed independently by D.~J.~Gross, R.~D.~Pisarski, and L.~G.~Yaffe~\\cite{Gross:1980br}, and by N.~Weiss~\\cite{Weiss:1980rj, Weiss:1981ev}, and the result is called the Weiss potential.\nIn the present paper, we are going to derive an analog of the EH effective action in QCD+QED at finite temperature with the Polyakov loops included. Thus, the result may be collectively called the ``Euler-Heisenberg-Weiss action.\" Our result is also a generalization of the one obtained by H.~Gies~\\cite{Gies:2000dw}, who computed an effective action for the Polyakov loop and the chromo-electric field.\n\n\nThe paper is organized as follows: In the next section, we will derive the effective action for QCD+QED at finite temperature by using the Schwinger proper time method. \nVariables of the effective action are the EM and chromo-EM fields as well as the Polyakov loop, and one can reproduce the previous results (the EH action with QCD+QED fields, the Weiss potential, etc.) in various limits. Then, we discuss some applications of our effective action in Sec. III. First, we investigate quark-antiquark pair production in QCD+QED fields at zero temperature. We obtain the quark production rate in the presence of QCD+QED fields, which allows us to study the quark pair production with arbitrary angle between the EM and chromo-EM fields. Next, we study an effective potential for the Polyakov loop with electromagnetic fields. We find that the magnetic field enhances the explicit center symmetry breaking, while the electric field reduces it. This indicates that the (pseudo)critical temperature of the confinement-deconfinement phase transition decreases (increases) with increasing magnetic (electric) field. Finally, we conclude our study in Sec. 
IV.\n\n\n\\section{one-loop effective action for QCD+QED at finite temperature}\n\n\nIn this section, we derive the one-loop effective action for QCD+QED at finite temperature. \nThe effective action will be a function of chromo-EM and EM fields, as well as the Polyakov loop. Notice that both the strong fields and the Polyakov loop can be treated as {\it background fields} so that the background field method is applicable. We will take quantum fluctuations around the background fields up to the second order in the action, and integrate them in the path integral. 
This corresponds to computing the action at the one-loop level.\n\n\nWe shall begin with the four-dimensional QCD action of the SU$(N_{c})$ gauge group with $N_{f}$ quark flavors interacting with EM fields:\n\begin{eqnarray}\nS_{\rm QCD+QED}\n&=& \int d^{4}x \left\{-\frac{1}{4} F_{\mu \nu}^{a} F^{a \mu \nu} - \frac{1}{4} f_{\mu \nu} f^{\mu \nu} + \bar{q} \left( i \gamma_{\mu} D^{\mu} - M_{q} \right) q\right\} \, ,\n\label{QCD+QEDaction}\n\end{eqnarray}\nwhere the covariant derivative contains gluon fields\footnote{Throughout the paper, we use $a,b,c$ (and $h$) for adjoint color indices ($a,b,c=1, \ldots,N_c^2-1$), $i$ for fundamental color indices $(i=1,\ldots,N_c)$, $\mu,\nu,\alpha,\beta$ for Lorentz indices, and $f$ for flavor indices $(f=1,\ldots,N_f)$.} $A^{a}_{\mu}$ $(a=1,\ldots,N_c^2-1)$ and U(1) gauge fields $a_{\mu}$ as\n\begin{eqnarray}\nD_{\mu} = \partial_{\mu} - igA_{\mu}^{a} T^{a} - ieQ_{q}a_{\mu}\, ,\n\label{covderivative-all}\n\end{eqnarray}\nand the gluon and EM field-strength tensors are given by\n$\nF_{\mu \nu}^{a}\n= \partial_{\mu} A_{\nu}^{a} - \partial_{\nu}A_{\mu}^{a} + gf^{abc} A_{\mu}^{b} A_{\nu}^{c} $ and $\nf_{\mu \nu}\n= \partial_{\mu} a_{\nu} - \partial_{\nu} a_{\mu}\, ,\n$\nrespectively. In this paper, we treat the EM fields just as background fields, and assume that the field strengths are constant so that $\partial f = 0$. We suppress color, flavor, and spinor indices of the quark field in Eq.~(\ref{QCD+QEDaction}).\nThe mass and charge matrices of the quarks are given by $M_{q} = {\rm{diag}}(m_{q_{1}}, m_{q_{2}}, \ldots, m_{q_{N_{f}}} )$ and $Q_{q} = {\rm{diag}}( Q_{q_{1}}, Q_{q_{2}}, \ldots, Q_{q_{N_{f}}} )$. 
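Although not part of the derivation, the non-Abelian piece $g f^{abc} A_{\mu}^{b} A_{\nu}^{c}$ of $F_{\mu \nu}^{a}$ can be cross-checked numerically: for constant potentials the derivative terms drop out and the pure-gluon part of the commutator gives $[D_{\mu}, D_{\nu}] = -i g F^{a}_{\mu \nu} T^{a}$. A minimal sketch for SU(2), where $f^{abc}=\epsilon^{abc}$ and $T^{a} = \sigma^{a}\/2$; all numerical values are illustrative assumptions, not taken from the text:

```python
# Cross-check: for constant SU(2) potentials A_mu^a, the commutator of
# covariant derivatives D_mu = partial_mu - i g A_mu^a T^a reproduces
# -i g F^a_{mu nu} T^a with F^a_{mu nu} = g eps^{abc} A_mu^b A_nu^c.
# (Derivative and U(1) terms vanish for constant fields.)

s1 = [[0, 1], [1, 0]]          # Pauli matrices
s2 = [[0, -1j], [1j, 0]]
s3 = [[1, 0], [0, -1]]
T = [[[x / 2 for x in row] for row in s] for s in (s1, s2, s3)]  # T^a = sigma^a/2

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def comb(coeffs):              # sum_a coeffs[a] * T^a
    return [[sum(c * Ta[i][j] for c, Ta in zip(coeffs, T)) for j in range(2)] for i in range(2)]

g = 1.3
A_mu = [0.7, -0.2, 0.5]        # constant potentials for two fixed Lorentz indices
A_nu = [0.1, 0.9, -0.4]

Dmu = comb([-1j * g * c for c in A_mu])      # -i g A_mu^a T^a
Dnu = comb([-1j * g * c for c in A_nu])
comm = [[mul(Dmu, Dnu)[i][j] - mul(Dnu, Dmu)[i][j] for j in range(2)] for i in range(2)]

# F^a_{mu nu} = g eps^{abc} A_mu^b A_nu^c (cyclic indices implement eps)
F = [g * (A_mu[(a + 1) % 3] * A_nu[(a + 2) % 3]
          - A_mu[(a + 2) % 3] * A_nu[(a + 1) % 3]) for a in range(3)]
rhs = comb([-1j * g * c for c in F])         # -i g F^a T^a

assert max(abs(comm[i][j] - rhs[i][j]) for i in range(2) for j in range(2)) < 1e-12
```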
As for the gluon field, we apply the background field method and decompose the gluon field into a slowly varying background field ${\\cal A}_{\\mu}^{a}$ and a quantum fluctuation $\\tilde{A}_{\\mu}^{a}$ as\n\\begin{eqnarray}\nA_{\\mu}^{a} = {\\cal A}_{\\mu}^{a} + \\tilde{A}_{\\mu}^{a}\\, .\n\\end{eqnarray}\nHere we employ the covariantly constant field as a background field, which obeys the following condition \\cite{Batalin:1976uv, Gyulassy:1986jq, Tanji:2011di}:\n\\begin{eqnarray}\n{\\cal D}_{\\rho}^{ac} {\\cal F}^{c}_{\\mu \\nu} = 0\\, ,\n\\label{CondtionCovariantConstant}\n\\end{eqnarray}\nwhere the covariant derivative ${\\cal D}_\\mu$ is defined only with respect to the gluon background field: \n\\begin{eqnarray}\n{\\cal D}_{\\mu}^{ac} = \\partial_{\\mu} \\delta^{ac} + g f^{abc} {\\cal A}^{b}_{\\mu}\n\\, , \\label{covderivative-gluon}\n\\end{eqnarray} \nand ${\\cal F}_{\\mu \\nu}^{a} = \\partial_{\\mu} {\\cal A}_{\\nu}^{a} - \\partial_{\\nu} {\\cal A}^{a}_{\\mu} + g f^{abc} {\\cal A}_{\\mu}^{b} {\\cal A}_{\\nu}^{c}$. \nFrom the condition (\\ref{CondtionCovariantConstant}), the field-strength tensor ${\\cal F}^{a}_{\\mu \\nu}$ can be factorized as\n${\\cal F}^{a}_{\\mu \\nu} = {\\cal F}_{\\mu \\nu} n^{a}$, where ${n}^a$ is a unit vector in color space, normalized as ${n}^{a}{n}^{a} = 1$,\nwhereas ${\\cal F}_{\\mu \\nu}$ expresses the magnitude of the chromo-EM field.\nWe further assume that ${\\cal F}_{\\mu \\nu}$ is very slowly varying, satisfying $\\partial_{\\sigma} {\\cal F}_{\\mu \\nu} = 0$, which allows us to obtain the analytic expression of the EH action for QCD, just as in QED. Both ${\\cal F}_{\\mu \\nu}$ and ${n}^a$ are space-time independent. 
The background field ${\cal A}^a_\mu$ is proportional to the color unit vector ${n}^a$ as\n\begin{eqnarray}\n{\cal A}_{\mu}^{a}\n&=& {\cal A}_{\mu} {n}^{a}\, ,\n\label{BackgroundField}\n\end{eqnarray}\nand the field-strength tensor ${\cal F}_{\mu \nu}$ has an Abelian form, ${\cal F}_{\mu \nu} = \partial_{\mu} {\cal A}_{\nu} - \partial_{\nu} {\cal A}_{\mu}.\n$\nThis background field (\ref{BackgroundField}) indeed satisfies the condition (\ref{CondtionCovariantConstant}).\nBy using the background field and the quantum fluctuation, the full gluon field-strength tensor can be decomposed as\n\begin{eqnarray}\nF^{a}_{\mu \nu}\n&=& {\cal F}_{\mu \nu} {n}^{a} + ( {\cal D}_{\mu}^{ac} \tilde{A}_{\nu}^{c} - {\cal D}_{\nu}^{ac} \tilde{A}_{\mu}^{c}) + gf^{abc} \tilde{A}_{\mu}^{b} \tilde{A}_{\nu}^{c}\, .\n\end{eqnarray} \nApplying the background gauge for the quantum fluctuation,\n\begin{eqnarray}\n{\cal D}^{ac}_{\mu} \tilde{A}^{c \mu} \n&=& 0\, ,\n\end{eqnarray}\nwe get the gauge fixed action in the presence of EM fields,\n\begin{eqnarray}\nS_{\rm QCD+QED}\n&=& \int d^{4}x \n\left[ - \frac{1}{4} \left\{ \n {\cal F}_{\mu \nu} {n}^{a} \n + \left( {\cal D}_{\mu}^{ac} \tilde{A}_{\nu}^{c} \n - {\cal D}_{\nu}^{ac} \tilde{A}_{\mu}^{c} \right) \n + g f^{abc} \tilde{A}_{\mu}^b \tilde{A}_{\nu}^{c} \n \right\}^{2} \n - \frac{1}{2 \xi} ( {\cal D}^{ac}_{\mu} \tilde{A}^{c \mu} )^{2} \right. \n \nonumber \\\n&& \qquad \quad \left. \n - \bar{c}^{a} \left( {\cal D}_{\mu} D^{\mu} \right)^{ac} c^{c} \n + \bar{q}\left(i \gamma_{\mu} D^{\mu} - M_{q} \right) q \n - \frac{1}{4} f_{\mu \nu} f^{\mu \nu}\n\right]\, ,\n\end{eqnarray}\nwhere $c$ is the ghost field and $\xi$ is the gauge parameter. 
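The statement that the background (\ref{BackgroundField}) satisfies the condition (\ref{CondtionCovariantConstant}) reduces, for constant ${\cal F}_{\mu \nu}$ and ${n}^a$, to the color identity $f^{abc} n^{b} n^{c} = 0$. A small numerical illustration for SU(2), where $f^{abc}=\epsilon^{abc}$ (the chosen $n^{a}$ and field magnitudes are arbitrary assumptions):

```python
# The background A_mu^a = A_mu n^a with a constant color direction n^a
# satisfies D_rho^{ac} F^c_{mu nu} = 0: the only surviving piece,
# g f^{abc} (A_rho n^b)(F_{mu nu} n^c), vanishes since f^{abc} n^b n^c = 0.
# SU(2) illustration with f^{abc} = eps^{abc}; n and the constants are arbitrary.

def eps(a, b, c):
    # Levi-Civita symbol for indices 0, 1, 2
    return (a - b) * (b - c) * (c - a) / 2

n = [0.6, 0.0, 0.8]                       # color unit vector, n.n = 1
assert abs(sum(x * x for x in n) - 1) < 1e-12

g, A_rho, F_munu = 2.0, 0.9, 1.7          # arbitrary constant magnitudes
D_F = [g * sum(eps(a, b, c) * (A_rho * n[b]) * (F_munu * n[c])
               for b in range(3) for c in range(3))
       for a in range(3)]

assert all(abs(x) < 1e-12 for x in D_F)   # covariantly constant: D F = 0
```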
\nNotice that one of the covariant derivatives in the ghost kinetic term, $D_{\mu}^{ac}$, and the one in the quark kinetic term, $D_{\mu}$ defined in Eq.~(\ref{covderivative-all}), contain all the gauge fields.\nThe effective action for the background fields ${\cal A}_\mu$ \nand $a_\mu$\ncan be obtained through the functional integral as \n\begin{eqnarray}\n{\rm{exp}} \Big( i S_{\rm eff}[{\cal A}_\mu, a_\mu] \Big)\n&\equiv& \int {\mathscr D} \tilde{A} {\mathscr D}c {\mathscr D} \bar{c} {\mathscr D}q {\mathscr D} \bar{q} \ \ {\rm{exp}} \left( i S_{\rm QCD+QED} \right)\, .\n\end{eqnarray}\nWe perform the functional integral with fluctuations taken up to the second order. This corresponds to evaluating the one-loop diagrams as shown in Fig.~1. \n\begin{figure}\n\begin{minipage}{0.8\hsize}\n\begin{center}\n\includegraphics[width=1.0 \textwidth]{diagrams.pdf}\n\vskip -0.1in\n\end{center}\n\end{minipage}\n\caption{\nTypical loop diagrams contributing to the effective action. \nThe field $\mathcal{A}$ contains both the chromo-EM fields and the Polyakov loop.\n}\n\end{figure}\nThe gluon, ghost, and quark loop integrations can be separately done, and one finds, respectively,\n\begin{eqnarray}\n&&\!\!\int\!\! {\mathscr D} \tilde{A}\, {\rm{exp}}\n\left\{ \int \! d^{4}x \frac{-i}{2} \tilde{A}^{a \mu} \left[\n- ( {\cal D}^{2})^{ac} g_{\mu \nu} - 2 g f^{abc} {\cal F}^{b}_{\mu \nu} \n\right] \tilde{A}^{c \nu} \right\}\n\! ={\rm{det}} \! \left[ - ( {\cal D}^{2})^{ac} g_{\mu \nu} \n - 2 g f^{abc} {\cal F}_{\mu \nu}^{b} \n \right]^{-\frac12}, \nonumber \\\n&&\!\!\int\! {\mathscr D} c {\mathscr D} \bar{c} \ {\rm{exp}}\n\left\{ i \int d^{4}x \ \bar{c}^{a} \left[ - ( {\cal D}^{2} )^{ac} \right] c^{c} \right\}\n= {\rm{det}} \left[ - ( {\cal D}^{2} )^{ac} \right]^{+1}, \label{full_actions} \n\\\n&&\!\!\int\! 
{\mathscr D} q {\mathscr D} \bar{q} \ {\rm{exp}} \n\left\{ i \int d^{4}x \ \bar{q} \left(\ni \gamma_{\mu} \hat{\cal D}^{\mu} - M_{q} \right) q \right\}\n= {\rm{det}} \left[ i \gamma_{\mu} \hat{\cal D}^{\mu} - M_{q} \right]^{+1}.\n\nonumber\n\end{eqnarray}\nHere we have taken the Feynman gauge, $\xi = 1$.\nIn the quark one-loop contribution, the covariant derivative $\hat{\cal D}_{\mu}$ contains both background fields ${\cal A}_\mu$ and $a_\mu$:\n\begin{eqnarray}\n\hat{\cal D}_{\mu}\n&=& {\cal D}_{\mu} -ieQ_{q} a_{\mu}\nonumber\\\n&=& \partial_{\mu} - ig {\cal A}_{\mu}^{a} T^{a} -ieQ_{q} a_{\mu}\, .\n\label{CovariantDerivativeQuark}\n\end{eqnarray}\nOn the other hand, the gluon and ghost one-loop contributions contain \n${\cal D}_\mu^{ac}$ and ${\cal F}_{\mu\nu}^a$, which only depend on the gluon background field $\mathcal{A}_\mu$. This is, of course, because the gluon and ghost fields carry no electric charge and thus cannot interact with EM fields. Since these contributions are the same as in the pure Yang-Mills (YM) theory, we may call them the YM part.\n\n\nSo far, we have not specified the background field ${\cal A}_\mu$, but it can contain both the chromo-EM fields and the Polyakov loop. Let us briefly explain how the Polyakov loop is described within our framework. In the pure Yang-Mills theory at finite temperature, there is a confinement-deconfinement transition whose order parameter is given by the Polyakov loop. It is defined by the (closed) Wilson line along the imaginary time ($\tau$) direction:\n\begin{eqnarray}\n\Phi (\vec{x})\n&=& \frac{1}{N_{c}} {\rm{Tr}} \ \mathcal{P} \ {\rm{exp}} \left\{ ig \int^{\beta}_{0} d\tau {A}_{4}^{a} (\tau, \vec{x}) T^{a} \right\}\, ,\n\label{defPolyakovLoop}\n\end{eqnarray}\nwhere $\beta = 1\/T$ is the inverse temperature and ${\cal P}$ stands for a path-ordered product along the imaginary time direction. 
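For a constant, spatially homogeneous $A_{4}$ along a single color direction, the path ordering in Eq.~(\ref{defPolyakovLoop}) becomes trivial and $\Phi$ reduces to an average of color phases. A quick numerical sketch of this reduction (the only inputs are the fundamental-representation eigenvalues of $T^{3}$ for SU(2) and SU(3); the closed forms checked here are those quoted later in Eq.~(\ref{simple_Polyakov_loop})):

```python
import cmath, math

def polyakov_loop(C, t3_eigs):
    # Phi = (1/Nc) * sum_i exp(i * g * beta * A4 * lambda_i), with the
    # dimensionless combination g * beta * A4 = 2*pi*C, i.e. C = g*A4/(2*pi*T)
    return sum(cmath.exp(1j * 2 * math.pi * C * lam) for lam in t3_eigs) / len(t3_eigs)

su2_eigs = [0.5, -0.5]        # eigenvalues of T^3 = sigma^3/2 (SU(2))
su3_eigs = [0.5, -0.5, 0.0]   # eigenvalues of T^3 in the SU(3) fundamental

for C in (0.0, 0.3, 0.5, 1.0):
    # closed forms: Phi = cos(pi C) for SU(2), (1 + 2 cos(pi C))/3 for SU(3)
    assert abs(polyakov_loop(C, su2_eigs) - math.cos(math.pi * C)) < 1e-12
    assert abs(polyakov_loop(C, su3_eigs) - (1 + 2 * math.cos(math.pi * C)) / 3) < 1e-12
```

At $C=0$ one finds $\Phi=1$, while the SU(3) value $(1 + 2\,{\rm{cos}}(\pi C))\/3$ vanishes at $C=2\/3$, the center-symmetric point.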
Indeed, $\langle \Phi \rangle \to 0$ ($\langle \Phi \rangle \neq 0$) corresponds to a confining (deconfined) phase, since the negative logarithm of the expectation value of the Polyakov loop can be identified with the free energy of a static quark (a vanishing value of the Polyakov loop implies that the energy of a single quark state is infinite). These two phases are distinguished by the center symmetry. The gauge fields at finite temperature are not necessarily periodic in the direction of imaginary time and can have an ambiguity related to the center subgroup $Z_{N_c}$ of the gauge symmetry SU$(N_c)$. This residual symmetry is called the center symmetry: the theory is invariant under gauge transformations which differ at $\tau = 0$ and $\tau = \beta$ by a center element of the gauge group. Under such a transformation, the Polyakov loop $\Phi$ transforms as $\Phi \to {\rm e}^{2\pi i n\/N_{c}}\Phi$ $(n=0,1,2, \ldots, N_{c}-1)$. Thus, the values of $\Phi$ distinguish the center symmetric (confining) phase and the center broken (deconfined) phase. Dynamical quarks, however, explicitly break the center symmetry. Therefore, in QCD, the Polyakov loop should be understood as an approximate order parameter. Still, we can compute an effective action for the Polyakov loop and discuss how a phase transition occurs when external parameters such as temperature are varied. \n\nAn effective action for the Polyakov loop in the pure Yang-Mills theory was obtained in Refs.~\cite{Gross:1980br, Weiss:1980rj} in the following way:\nWorking in what we now call the ``Polyakov gauge\" for a time-independent field $A_4^a(\vec x)=\phi(\vec x)\delta^{a3}$ in the SU(2) case, \nthe authors of Refs.~\cite{Gross:1980br, Weiss:1980rj} performed a functional integral with respect to fluctuations around the field $\phi(\vec x)$. \nThis procedure is nothing but the one we explained above where we treated the gluon field $A_\mu^a$ as a background ${\cal A}_\mu^a$ with a fluctuation around it. 
Moreover, as long as we consider a spatially homogeneous and time-independent order parameter $\bar{\mathcal{A}}_4^a$, we can have both the Polyakov loop and the chromo-EM fields at the same time. We divide the background field into the constant part and the coordinate-dependent part as \n$\mathcal{A}_{\mu}^{a}(x) = (\bar{\mathcal{A}}_{\mu} + \hat{\mathcal{A}}_{\mu}(x)) n^{a}$.\nThe second term gives the real (physical) chromo-EM fields so that $\mathcal{F}_{\mu \nu}^{a} = \partial_{\mu} \mathcal{A}_{\nu}^{a}(x) - \partial_{\nu} \mathcal{A}_{\mu}^{a}(x) = (\partial_{\mu} \hat{\mathcal{A}}_{\nu}(x) - \partial_{\nu} \hat{\mathcal{A}}_{\mu}(x)) n^{a}$, while the first constant term $\bar{\mathcal{A}}_{\mu}$ does not. \nWe want to treat both the chromo-EM fields and the Polyakov loop, the latter being defined only at finite temperature. In order to have both, we specify the transformation of the temporal component of the background field $\mathcal{A}_0^a(x)$ under the Wick rotation of the coordinate, $x_{0} \to -ix_{4} = -i\tau$ and $x_{i} \to x_{i} \ (i=1,2,3)$, as follows:\n$\mathcal{A}_{0}^{a}(x) = (\bar{\mathcal{A}}_{0} + \hat{\mathcal{A}}_{0}(x)) n^{a} \to (i\bar{\mathcal{A}}_{4} + \hat{\mathcal{A}}_{0}(x)) n^{a}$.\nIn this way, the first term gives the Polyakov loop defined in Eq.~(\ref{defPolyakovLoop}), while the second term remains unchanged to give the real chromo-EM fields.\nWe work in the Polyakov gauge for $\bar{\mathcal{A}}_{4}^{a}$ \cite{Weiss:1980rj}\footnote{In the literature, the fourth component of the gauge field $\bar{\mathcal{A}}^{a}_{4}$ in the Polyakov gauge is often expressed in terms of $N_{c}-1$ real scalar fields. In our formalism, these fields are properly encoded in the color eigenvalues $\omega_{i} \ (i=1, \ldots, N_{c})$ and $v_{h} \ (h=1, \ldots, N_{c}^{2} -1)$, which will be defined later. 
Here, choosing the third \ndirection of the color unit vector---$n^{a} = \delta^{a 3}$ at finite temperature---we pick up the one particular field $\bar{\mathcal{A}}_{4}$ which provides a simple expression for the Polyakov loop as shown in Eq.~(\ref{simple_Polyakov_loop}). However, in the final expression of our effective action, it is quite straightforward to keep all the $N_{c}-1$ scalar fields in the color eigenvalues $\omega_{i}$ and $v_{h}$.}:\n\begin{eqnarray}\n\bar{\mathcal{A}}_{4}^{a} = \bar{\mathcal{A}}_{4}\, \delta^{3 a}, \ \ \ \partial_{4} \bar{\mathcal{A}}_{4} = 0\, , \n\end{eqnarray}\nwhich does not conflict with the covariantly constant condition in Eq.~(\ref{CondtionCovariantConstant}). Notice that we use this gauge with $\delta^{a3}$ even for the SU($N_c$) case, and the color unit vector $n^a$ introduced in Eq.~(\ref{BackgroundField}) should be understood as $n^a=\delta^{3a}$ at finite temperature.\footnote{Still, we keep the expression $n^a$ because we will discuss the case at zero temperature.} Following Ref.~\cite{Weiss:1980rj}, we also introduce a dimensionless field $C$ as \n\begin{eqnarray}\nC = \frac{g {\bar{\mathcal{A}}_{4} } }{ 2 \pi T }, \n\end{eqnarray}\nso that the Polyakov loop is simply given as \n\begin{eqnarray}\n\Phi\n&=& {\rm{cos}} (\pi C) \qquad \qquad \quad \ \ {\rm{for \ SU(2) }}\, , \nonumber \\\n\Phi\n&=& \frac{1}{3} \Big\{ 1 + 2{\rm{cos}}( \pi C) \Big\} \quad \ {\rm{for\ SU(3)}}\, . \label{simple_Polyakov_loop}\n\end{eqnarray} \n\n\subsection{Yang-Mills part of effective action}\n\nNow, we consider the Yang-Mills part (gluon and ghost contributions) of the one-loop effective action. At the one-loop level, the effect of EM fields is not included in gluon and ghost loops, since these do not directly interact with EM fields. From Eq.~(\ref{full_actions}), the effective actions of the gluon and ghost parts are given, respectively, as\n\begin{eqnarray}\niS_{\rm gluon}\n&\equiv & {\rm{ln}} \ {\rm{det}} \left[ - ({\cal D}^{2})^{ac} g_{\mu \nu} - 2 g f^{abc} {\cal F}^{b}_{\mu \nu} \right]^{-\frac12}, \label{gluon-part}\\\niS_{\rm ghost}\n&\equiv & {\rm{ln}} \ {\rm{det}} \left[ - ({\cal D}^{2})^{ac} \right]^{+1}.\n\label{ghost-part}\n\end{eqnarray}\nLet us first explore the gluon part (\ref{gluon-part}). By using the proper time integral,\footnote{We use the following identity:\n$$\n\ln (\hat M -i\delta)=\frac{1}{\epsilon}- \frac{i^\epsilon}{\epsilon \Gamma(\epsilon)}\int_0^\infty \frac{ds}{s^{1-\epsilon}}\, {\rm e}^{-is (\hat M -i\delta)} $$ \nin the limit $\epsilon\to 0$ and $\delta\to 0$. 
We ignore the first divergent term, since it does not depend on the fields.} the gluon part of the effective action can be rewritten in the following form (the limit $\epsilon,\delta\to 0$ is always implicit and should be taken after the calculation): \n\begin{eqnarray}\niS_{\rm gluon}\n&=& -\frac{1}{2} {\rm{Tr}} \ {\rm{ln}} \n\left[ - ({\cal D}^{2})^{ac} g_{\mu \nu} \n - 2 g f^{abc} {\cal F}^{b}_{\mu \nu} \n\right] \nonumber \\\n&=& \int d^{4}x \frac{i^{\epsilon}}{2} \n \sum_{h=1}^{N_{c}^{2}-1} \int^{\infty}_{0} \frac{ds}{s^{1-\epsilon}} \n {\rm{tr}} \langle x | \n {\rm e}^{- i \left( - {\cal D}_{v_{h}}^{2} g_{\mu \nu} \n + 2i g v_{h} {\cal F}_{\mu \nu} -i \delta \right) s}\n | x \rangle \nonumber \\\n&=& \int d^{4}x \frac{i^{\epsilon}}{2} \sum_{h=1}^{N_{c}^{2}-1} \n \int^{\infty}_{0} \frac{ds}{s^{1-\epsilon}} \n {\rm e}^{-\delta s} \n \left\{ {\rm e}^{-i(2gv_{h}\mathfrak{a} )s} + {\rm e}^{ -i ( - 2gv_{h}\mathfrak{a} )s} \n + {\rm e}^{-i ( 2igv_{h}\mathfrak{b} )s } + {\rm e}^{ -i ( -2igv_{h}\mathfrak{b} )s } \n \right\} \nonumber \\\n&& \times \langle x | {\rm e}^{ - i ( - {\cal D}_{v_{h}}^{2} ) s} |x \rangle \, .\n\end{eqnarray}\nWhile the capital trace ``Tr\" in the first line is taken with respect to colors, Lorentz indices, and coordinates, ``tr\" in the second line is only for Lorentz indices. 
Also, in the second line, we have introduced real quantities $v_{h}$ $(h=1,\ldots, N_c^2-1)$ that are eigenvalues of a Hermitian matrix $V^{ac}\equiv if^{abc} {n}^{b}$ (i.e., $V^{ac}\varphi^c=v_h \varphi^a$), and Lorentz-invariant quantities\n$\mathfrak{a}$, $\mathfrak{b}$ defined by\n\begin{eqnarray}\n\mathfrak{a}\n\equiv \frac{1}{2} \sqrt{ \sqrt{ \mathcal{F}^{4} + (\mathcal{F}\cdot \tilde{\mathcal{F}})^{2} } + \mathcal{F}^{2} }\, , \ \ \ \ \n\mathfrak{b}\n\equiv \frac{1}{2} \sqrt{ \sqrt{ \mathcal{F}^{4} + (\mathcal{F}\cdot \tilde{\mathcal{F}})^{2} } - \mathcal{F}^{2} }\, ,\n\end{eqnarray}\nwith the dual field-strength tensor $\tilde{\mathcal{F}}^{\mu \nu} = \frac{1}{2} \epsilon^{\mu \nu \alpha \beta} \mathcal{F}_{\alpha \beta}$ (or equivalently, by $\mathfrak{a}^2-\mathfrak{b}^2=\frac12 \mathcal{F}^2$ and $\mathfrak{a} \mathfrak{b} = \frac14 \mathcal{F}\cdot \tilde \mathcal{F}$).\nThe covariant derivative is defined as ${\cal D}_{v_{h} \mu} = \partial_{\mu} - ig v_{h} \mathcal{A}_{\mu}$. \nThe calculation up to this point is the same as in the zero-temperature case worked out in Ref.~\cite{Ozaki:2013sfa}. \nAt finite temperature, however, one needs to be careful in evaluating the matrix element $\langle x | {\rm e}^{ - i ( - {\cal D}_{v_{h}}^{2} ) s} |x \rangle$.\nNamely, it can now be written as a Matsubara summation:\n\begin{eqnarray}\n\langle x | {\rm e}^{ - i ( - {\cal D}_{v_{h}}^{2} ) s} |x \rangle\n&=& \left. 
i T \\sum_{n=-\\infty}^{\\infty} \\int \\frac{ d^{3}p }{ (2\\pi)^{3} } \n\\, {\\rm e}^{- p_\\alpha X_{h}^{\\alpha\\beta}(is) p_\\beta }\\, {\\rm e}^{-Y_{h}(is) } \\right|_{p_{0} = igv_{h}\\bar{\\mathcal{A}}_{4} - i 2\\pi n T}\\, ,\n\\label{Matrix-element-T} \n\\end{eqnarray}\nwhere the functions $X_{h}^{\\alpha \\beta}(\\bar{s})$ and $Y_{h}(\\bar{s})$ have been defined as \\cite{Dittrich:2000zu}\n\\begin{eqnarray}\nX_{h}^{\\alpha \\beta}(\\bar{s})\n&=& \\left[(gv_{h}\\mathcal{F} )^{-1} {\\rm{tan}}(gv_{h}\\mathcal{F} \\bar{s})\\right]^{\\alpha\\beta}, \\nonumber \\\\\nY_{h}(\\bar{s})\n&=& \\frac{1}{2} {\\rm{tr}} \\ {\\rm{ln}} \\ {\\rm{cos}}(gv_{h}\\mathcal{F} \\bar{s}).\n\\end{eqnarray}\nIn the presence of the Polyakov loop $\\bar{\\mathcal{A}}_{4}$, the periodic boundary condition of the gluon in the imaginary time direction is modified.\nThen, the Matsubara frequency is shifted by the Polyakov loop as in Eq.~(\\ref{Matrix-element-T}).\nPerforming the three-dimensional momentum integral and applying the Poisson resummation~\\cite{Dittrich:2000zu}, one can obtain the matrix element in terms of \n$\\mathfrak{a}$ and $\\mathfrak{b}$ as\n\\begin{eqnarray}\n\\!\\!\\langle x | {\\rm e}^{ - i ( - {\\cal D}_{v_{h}}^{2} ) s} |x \\rangle\n&=& -\\frac{i}{16 \\pi^{2}} \\frac{ gv_{h}\\mathfrak{a} s}{{\\rm{sin}}(gv_{h}\\mathfrak{a} s)} \\frac{gv_{h}\\mathfrak{b} s}{ {\\rm{sinh}}(gv_{h}\\mathfrak{b} s) } \\left[\n1 + 2\\sum_{n=1}^{\\infty} {\\rm e}^{ i \\frac{\\mathfrak{h}(s)}{4T^{2}}n^{2}} {\\rm{cos}}\\left( \\frac{ gv_{h} \\bar{\\mathcal{A}}_{4}}{T} n \\right) \\right],\n\\label{KarnelG}\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n\\mathfrak{h}(s)\n&=& \\frac{\\mathfrak{b}^{2} - {\\mathfrak{e}}^{2} }{\\mathfrak{a}^{2}+\\mathfrak{b}^{2}}\\, g v_{h} \\mathfrak{a}\\, {\\rm{cot}}(gv_{h} \\mathfrak{a} s) + \\frac{ \\mathfrak{a}^{2} + {\\mathfrak{e}}^{2} }{ \\mathfrak{a}^{2} + \\mathfrak{b}^{2} }\\, g v_{h} \\mathfrak{b}\\, {\\rm{coth}} (gv_{h}\\mathfrak{b} s)\\, 
,\n\end{eqnarray}\nwith\n\begin{eqnarray} \n{\mathfrak{e}}^{2} = (u_{\alpha} \mathcal{F}^{\alpha \mu})( u_{\beta} \mathcal{F}^{\beta}_{\mu} ).\n\end{eqnarray}\nThe vector $u^{\mu}$ is the heat-bath four-vector, which is $(1,0,0,0)$ in the rest frame of the heat bath. The first (second) term in Eq.~(\ref{KarnelG}) corresponds to the zero-(finite-)temperature contribution. \nThe gluon part of the effective action is then given as\n\begin{eqnarray}\niS_{\rm gluon} \n&=& - \frac{i^{1+\epsilon}}{ 32 \pi^{2} } \int d^{4}x \sum_{h=1}^{N_{c}^{2}-1} \n\int^{\infty}_{0} \frac{ds}{s^{3-\epsilon}} {\rm e}^{-\delta s} \n\left\{ {\rm e}^{-i(2gv_{h}\mathfrak{a} )s} + {\rm e}^{ -i ( - 2gv_{h}\mathfrak{a} )s} \n + {\rm e}^{-i ( 2igv_{h}\mathfrak{b} )s } + {\rm e}^{ -i ( -2igv_{h}\mathfrak{b} )s } \n\right\} \nonumber \\\n&&\qquad\qquad\ \ \times \frac{ gv_{h}\mathfrak{a} s}{{\rm{sin}}(gv_{h}\mathfrak{a} s)} \n\frac{gv_{h}\mathfrak{b} s}{ {\rm{sinh}}(gv_{h}\mathfrak{b} s) } \left[\n1 + 2\sum_{n=1}^{\infty} {\rm e}^{ i \frac{\mathfrak{h}(s)}{4T^{2}}n^{2}} \n{\rm{cos}}\left( \frac{ gv_{h} \bar{\mathcal{A}}_{4}}{T} n \right) \right].\label{action_gluon}\n\end{eqnarray}\nSimilarly, we obtain the ghost part as\n\begin{eqnarray}\niS_{\rm ghost}\n&=& \frac{i^{1+\epsilon}}{ 32 \pi^{2} } \int d^{4}x \sum_{h=1}^{N_{c}^{2}-1} \n\int^{\infty}_{0} \frac{ds}{s^{3-\epsilon}} {\rm e}^{-\delta s} \n\left\{ 2 \right\} \nonumber \\\n&& \times \frac{ gv_{h}\mathfrak{a} s}{{\rm{sin}}(gv_{h}\mathfrak{a} s)} \n\frac{gv_{h}\mathfrak{b} s}{ {\rm{sinh}}(gv_{h}\mathfrak{b} s) } \left[\n1 + 2\sum_{n=1}^{\infty} {\rm e}^{ i \frac{\mathfrak{h}(s)}{4T^{2}}n^{2}} \n{\rm{cos}}\left( \frac{ gv_{h} \bar{\mathcal{A}}_{4}}{T} n \right) \right].\n\label{action_ghost}\n\end{eqnarray}\nIn both parts, the first terms in the square brackets are the results at zero temperature and agree with the known results 
\cite{Ozaki:2013sfa}. As discussed in detail in Ref.~\cite{Ozaki:2013sfa}, each term has an ultraviolet (UV) divergence, which, however, can be absorbed by renormalizing the coupling $g$ and fields $\mathcal{A}_\mu$ \cite{Savvidy:1977as, Matinyan:1976mp}. On the other hand, the finite-temperature contributions are free of UV divergences, and thus require no additional renormalization procedure. \nWe regard the coupling and fields as renormalized ones and focus on UV-finite pieces in Eqs.~(\ref{action_gluon}) and (\ref{action_ghost}).\n\n\nOur results (\ref{action_gluon}) and (\ref{action_ghost}) are effective actions for chromo-EM fields as well as the Polyakov loop at finite temperature. They generalize previous results in two limiting cases. Indeed, if we consider the pure chromo-{\it electric} background with a Polyakov loop (${\cal B}=0,\ {\cal E}\neq 0, \ \bar{\mathcal{A}}_{4}\neq 0$), we find $\mathfrak{a} \to i{\cal E}$, $\mathfrak{b} \to 0$ and reproduce Gies's effective action at finite temperature \cite{Gies:2000dw}. Moreover, in the case of the pure chromo-{\it magnetic} background (${\cal E}=0,\ {\cal B}\neq 0$, $\bar{\mathcal{A}}_{4}=0$), we find $\mathfrak{a} \to {\cal B}$, $\mathfrak{b} \to 0$ and reproduce the results obtained in Refs.~\cite{Dittrich:1980nh, Kapusta:1981nf}. \n\n\n\subsection{Quark part of effective action}\n\n\nFor the quark part of the effective action, we basically follow the same procedure as in the Yang-Mills part.\nFrom the functional integral (\ref{full_actions}), the quark part of the one-loop effective action reads\n\begin{eqnarray}\niS_{\rm quark}\n&=& {\rm{ln}} \ {\rm{det}} \left[ i \gamma_{\mu} \hat{\cal D}^{\mu} - M_{q} \right].\n\end{eqnarray}\nUtilizing the proper time integral, we evaluate the effective action as\n\begin{eqnarray}\niS_{\rm quark}\n&=& {\rm{Tr}} \ {\rm{ln}} \left[ i \gamma_{\mu} \hat{\cal D}^{\mu} - M_{q} \right] \nonumber \\\n&=& -\int d^{4}x \frac{i^{\epsilon}}{2} \sum_{i=1}^{N_{c}} \sum_{f=1}^{N_{f}} \int^{\infty}_{0} \frac{ds}{s^{1-\epsilon}} \n{\rm e}^{-i (m_{q_{f}}^{2} - i \delta ) s} {\rm{tr}} \langle x | \n{\rm e}^{-is \left( -\mathbb{D}_{i,f}^{2} - \frac{1}{2} \sigma \cdot \mathbb{F}_{i,f} \right) } \n|x \rangle,\n\end{eqnarray}\nwhere $\mathbb{D}_{i,f}^{\mu} = \partial^{\mu} - i \mathbb{A}_{i,f}^{\mu}$ with the field $\mathbb{A}_{i,f}^{\mu}$ being a linear combination of the gluon field $\mathcal{A}_{\mu}$ and the photon field $a^{\mu}$ as \n\begin{eqnarray}\n\mathbb{A}_{i,f}^{\mu}\n&=& g\omega_{i} \mathcal{A}^{\mu} + eQ_{q_{f}} a^{\mu}. \label{linear_combination}\n\end{eqnarray}\nThis covariant derivative $\mathbb{D}_{i,f}^{\mu}$ can be obtained from $\hat{\cal D}^{\mu}$ defined in Eq. 
(\\ref{CovariantDerivativeQuark}) with the covariantly constant field employed as the background field.\nHere $\\omega_{i}\\ (i=1,\\ldots,N_c)$ are eigenvalues of an $N_c\\times N_c$ matrix ${n}^{a} T^{a}$ and satisfy\\footnote{Let $\\Omega$ be a diagonal matrix with eigenvalues $\\omega_i$, i.e., $\\Omega={\\rm diag}(\\omega_1,\\ldots,\\omega_{N_c})=Un^aT^aU^\\dagger$. Then, $\\sum_{i=1}^{N_c}\\omega_i=\\, {\\rm tr}\\, \\Omega= n^a\\, {\\rm tr}\\, T^a=0$ and $\\sum_{i=1}^{N_c}\\omega_i^2=\\, {\\rm tr}\\, \\Omega^2=\\, {\\rm tr}\\, (T^aT^b)n^a n^b=1\/2.$ } $\\sum_{i=1}^{N_c}\\omega_i=0$ and $\\sum_{i=1}^{N_c}\\omega_i^2=1\/2$.\nThe field-strength tensor $\\mathbb{F}_{i,f}^{\\mu \\nu}$ can be expressed in terms of constant chromo-EM fields $\\vec{\\mathcal{E}}$, $\\vec{\\mathcal B}$, and EM fields $\\vec{E}$, $\\vec{B}$ as [with the notation $\\vec{V}=(V_x,V_y,V_z)$]\n\\begin{eqnarray}\n\\mathbb{F}_{i,f}^{\\mu \\nu}\n&=& g \\omega_{i} {\\cal F}^{\\mu \\nu} + eQ_{q_{f}} f^{\\mu \\nu} \\nonumber \\\\\n&=&\ng \\omega_{i} \\left(\n\\begin{array}{cccc}\n0 & \\mathcal{E}_{x} & \\mathcal{E}_{y} & \\mathcal{E}_{ z } \\\\\n-\\mathcal{E}_{x} & 0 & \\mathcal{B}_{z} & - \\mathcal{B}_{y} \\\\\n-\\mathcal{E}_{y} & - \\mathcal{B}_{z} & 0 & \\mathcal{B}_{x} \\\\\n-\\mathcal{E}_{z} & \\mathcal{B}_{y} & - \\mathcal{B}_{x} & 0 \\\\\n\\end{array}\n\\right)\n+\ne Q_{q_{f}} \\left(\n\\begin{array}{cccc}\n0 & E_{x} & E_{y} & E_{z } \\\\\n-E_{x} & 0 & B_{z} & - B_{y} \\\\\n-E_{y} & - B_{z} & 0 & B_{x} \\\\\n-E_{z} & B_{y} & - B_{x} & 0 \\\\\n\\end{array}\n\\right).\n\\label{matrix_field}\n\\end{eqnarray}\nThe eigenvalues of the field-strength tensor $\\mathbb{F}^{\\mu \\nu}_{i,f}$ are given by $\\pm i\\mathfrak{a}_{i,f}$ and $\\pm \\mathfrak{b}_{i,f}$\nwith\n\\begin{eqnarray}\n\\mathfrak{a}_{i,f}\n = \\frac{1}{2} \\sqrt{ \\sqrt{ \\mathbb{F}_{i,f}^{4} + ( \\mathbb{F}_{i,f} \\cdot \\tilde{\\mathbb{F}}_{i,f} )^{2} } + \\mathbb{F}_{i,f}^{2} } \\ , \\ \\ \\ \\ \\ \n\\mathfrak{b}_{i,f}\n = 
\\frac{1}{2} \\sqrt{ \\sqrt{ \\mathbb{F}_{i,f}^{4} + ( \\mathbb{F}_{i,f} \\cdot \\tilde{\\mathbb{F}}_{i,f} )^{2} } - \\mathbb{F}_{i,f}^{2} } \\ .\n\\end{eqnarray}\nThe dual field-strength tensor $\\tilde{\\mathbb{F}}^{\\mu \\nu}_{i,f}$ is defined as $\\tilde{\\mathbb{F}}^{\\mu \\nu}_{i,f} = \\frac{1}{2} \\epsilon^{\\mu \\nu \\alpha \\beta} \\mathbb{F}_{i,f \\alpha \\beta}$. By using Eq.~(\\ref{matrix_field}), $\\mathbb{F}_{i,f}^{2}=2(\\mathfrak{a}_{i,f}^2-\\mathfrak{b}_{i,f}^2)$ and $\\mathbb{F}_{i,f} \\cdot \\tilde{\\mathbb{F}}_{i,f}=4\\mathfrak{a}_{i,f}\\mathfrak{b}_{i,f}$ can be expressed in terms of chromo-EM fields and EM fields as\n\\begin{eqnarray}\n\\mathbb{F}_{i,f}^{2}\n&=& 2( \\vec{ \\mathcal{B} }_{i,f}^{2} - \\vec{ \\mathcal{E} }_{i,f}^{2} ), \\nonumber \\\\\n\\mathbb{F}_{i,f} \\cdot \\tilde{\\mathbb{F}}_{i,f}\n&=& -4 \\vec{ \\mathcal{E} }_{i,f} \\cdot \\vec{ \\mathcal{B} }_{i,f},\n\\label{FFtilde}\n\\end{eqnarray}\nwhere we have defined the combined electromagnetic fields as $\\vec{ \\mathcal{E} }_{i,f} = g \\omega_{i} \\vec{ \\mathcal{E} } + eQ_{q_{f}} \\vec{E}$ and $\\vec{ \\mathcal{B} }_{i,f} = g \\omega_{i} \\vec{ \\mathcal{B} } + eQ_{q_{f}} \\vec{ B}$. 
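As a quick numerical cross-check of these relations, the following Python sketch (ours, for illustration only; the function names are not from the paper) builds the combined fields $\\vec{\\mathcal{E}}_{i,f}$ and $\\vec{\\mathcal{B}}_{i,f}$ and evaluates $\\mathfrak{a}_{i,f}$, $\\mathfrak{b}_{i,f}$ from the two invariants:

```python
import math

def ab_invariants(E_vec, B_vec):
    # Secular invariants of the combined field-strength tensor:
    # F^2 = 2(B^2 - E^2),  F.Ftilde = -4 E.B
    E2 = sum(c * c for c in E_vec)
    B2 = sum(c * c for c in B_vec)
    EdotB = sum(e * b for e, b in zip(E_vec, B_vec))
    F2, FFt = 2.0 * (B2 - E2), -4.0 * EdotB
    root = math.sqrt(F2 * F2 + FFt * FFt)      # sqrt(F^4 + (F.Ftilde)^2)
    a = 0.5 * math.sqrt(max(root + F2, 0.0))   # eigenvalues +/- i a
    b = 0.5 * math.sqrt(max(root - F2, 0.0))   # eigenvalues +/- b
    return a, b

def combined_fields(g_omega, eQ, chromoE, chromoB, E, B):
    # E_{i,f} = g w_i chromoE + e Q_f E, and likewise for B
    Ec = [g_omega * x + eQ * y for x, y in zip(chromoE, E)]
    Bc = [g_omega * x + eQ * y for x, y in zip(chromoB, B)]
    return Ec, Bc
```

For a purely electric configuration this returns $\\mathfrak{a}_{i,f}=0$ and $\\mathfrak{b}_{i,f}=|\\vec{\\mathcal{E}}_{i,f}|$, and in general $\\mathfrak{a}_{i,f}\\mathfrak{b}_{i,f}=|\\vec{\\mathcal{E}}_{i,f}\\cdot\\vec{\\mathcal{B}}_{i,f}|$, consistent with the expressions above.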
\nTaking the trace of the matrix $\\langle x | {\\rm e}^{-is \\left( -\\mathbb{D}_{i,f}^{2} - \\frac{1}{2} \\sigma \\cdot \\mathbb{F}_{i,f} \\right) } |x \\rangle$ at finite temperature, we get\n\\begin{eqnarray}\n&&{\\rm{tr}} \\langle x | {\\rm e}^{-is \\left( -\\mathbb{D}_{i,f}^{2} - \\frac{1}{2} \\sigma \\cdot \\mathbb{F}_{i,f} \\right) } |x \\rangle \\nonumber \\\\\n&&\\qquad = \\left. i T \\sum_{n=-\\infty}^{\\infty} \\int \\frac{ d^{3} p }{ (2\\pi)^{3} }\\, {\\rm e}^{ - p_\\alpha \\mathbb{X}^{\\alpha\\beta}_{i,f}(is) p_\\beta } {\\rm e}^{- \\mathbb{Y}_{i,f}(is) }\\, {\\rm{tr}} \\, {\\rm e}^{ \\frac{i}{2} \\sigma \\cdot \\mathbb{F}_{i,f}s } \\right|_{p_{0} = ig \\omega_{i} \\bar{\\mathcal{A}}_{4} - i \\pi (2n+1) T}\\, .\n\\label{matrix_quark}\n\\end{eqnarray}\nHere, the functions $\\mathbb{X}_{i,f}^{\\alpha \\beta} (\\bar{s})$ and $\\mathbb{Y}_{i,f} (\\bar{s})$ have been defined as~\\cite{Dittrich:2000zu}\n\\begin{eqnarray}\n\\mathbb{X}_{i,f}^{\\alpha \\beta} (\\bar{s})\n&=& \\left[ \\mathbb{F}_{i,f}^{-1}\\, {\\rm{tan}} ( \\mathbb{F}_{i,f} \\bar{s} ) \\right]^{\\alpha \\beta}, \\nonumber \\\\\n\\mathbb{Y}_{i,f} (\\bar{s})\n&=& \\frac{1}{2} {\\rm{tr}} \\ {\\rm{ln}} \\ {\\rm{cos}} ( \\mathbb{F}_{i,f} \\bar{s} )\\, .\n\\end{eqnarray} \nIn the presence of the Polyakov loop $\\bar{\\mathcal{A}}_{4}$, the antiperiodic boundary condition for the quark is also modified. Accordingly, the temporal component of the four-momentum has been replaced by the Polyakov loop and the Matsubara frequency for a fermion in Eq.~(\\ref{matrix_quark}). \nThe third part, ${\\rm{tr}}\\, {\\rm e}^{ \\frac{i}{2} \\sigma \\cdot \\mathbb{F}_{i,f} s}$, is common to the zero-temperature case and was computed in Ref.~\\cite{Ozaki:2013sfa}. 
The result is\n\\begin{eqnarray}\n{\\rm{tr}} \\, {\\rm{exp}} \\left( \\frac{i}{2} \\sigma \\cdot \\mathbb{F}_{i,f} s \\right)\n&=& 4 {\\rm{cos}}( \\mathfrak{a}_{i,f}s ) {\\rm{cosh}} (\\mathfrak{b}_{i,f} s).\n\\end{eqnarray}\nNow, performing the three-dimensional momentum integral and using the Poisson resummation, we find from Eq.~(\\ref{matrix_quark})\n\\begin{eqnarray}\n{\\rm{tr}} \\langle x | {\\rm e}^{-is \\left( -\\mathbb{D}_{i,f}^{2} - \\frac{1}{2} \\sigma \\cdot \\mathbb{F}_{i,f} \\right) } |x \\rangle \n&=& - \\frac{ i }{ 4 \\pi^{2} s^{2} } \\frac{ ( \\mathfrak{a}_{i,f}s )( \\mathfrak{b}_{i,f}s ) }{ {\\rm{sin}}( \\mathfrak{a}_{i,f} s ) {\\rm{sinh}} ( \\mathfrak{b}_{i,f} s ) } {\\rm{cos}}( \\mathfrak{a}_{i,f}s ) {\\rm{cosh}}( \\mathfrak{b}_{i,f}s ) \\nonumber \\\\\n&& \\times \\left\\{ 1 + 2 \\sum_{n=1}^{\\infty} (-1)^{n} {\\rm e}^{ \\frac{i}{4T^{2}} \\mathfrak{h}_{i,f}(s) n^{2} } {\\rm{cos}} \\left( \\frac{ g \\omega_{i}\\bar{\\mathcal{A}}_{4} n }{ T} \\right) \\right\\} \\, ,\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n\\mathfrak{h}_{i,f}(s)\n&=& \\frac{ \\mathfrak{b}_{i,f}^{2} - {\\mathfrak{e}}_{i,f}^{2} }{\\mathfrak{a}_{i,f}^{2} + \\mathfrak{b}_{i,f}^{2}}\\mathfrak{a}_{i,f} {\\rm{cot}}(\\mathfrak{a}_{i,f}s) + \\frac{ \\mathfrak{a}_{i,f}^{2} + {\\mathfrak{e}}_{i,f}^{2} }{ \\mathfrak{a}_{i,f}^{2} + \\mathfrak{b}_{i,f}^{2} } \\mathfrak{b}_{i,f} {\\rm{coth}}(\\mathfrak{b}_{i,f}s)\\, ,\n\\end{eqnarray}\nwith\n\\begin{eqnarray}\n{\\mathfrak{e}}_{i,f}^{2}\n&=& (u_{\\alpha} \\mathbb{F}_{i,f}^{\\alpha \\mu}) ( u_{\\beta} \\mathbb{F}^{\\ \\beta}_{i,f \\mu})\\, .\n\\end{eqnarray}\nIn the heat-bath rest frame, we have $u^{\\mu} = (1,0,0,0)$ and then\n$\n{\\mathfrak{e}}_{i,f}^{2} = \\vec{ \\mathcal{E} }_{i,f}^{2} = ( g \\omega_{i} \\vec{\\mathcal{E}} + eQ_{q_{f}} \\vec{E} )^{2}\n$.\nTherefore, the quark part of the one-loop effective action reads\n\\begin{eqnarray} \niS_{\\rm quark}\n&=& \\frac{i^{1+\\epsilon}}{8\\pi^{2}} \\int d^{4}x \\sum_{i=1}^{N_{c}} 
\\sum_{f=1}^{N_{f}} \\int_{0}^{\\infty} \\frac{ds}{s^{3-\\epsilon}} \n{\\rm e}^{-i(m_{q_f}^{2}-i\\delta)s} (\\mathfrak{a}_{i,f}s)(\\mathfrak{b}_{i,f}s) {\\rm{cot}}(\\mathfrak{a}_{i,f}s) {\\rm{coth}}(\\mathfrak{b}_{i,f}s) \\nonumber \\\\\n&&\\qquad \\times \\left[ 1 + 2 \\sum_{n=1}^{\\infty} (-1)^{n} {\\rm e}^{\\frac{i}{4T^{2}} \\mathfrak{h}_{i,f}(s) n^{2} } {\\rm{cos}}\\left( \\frac{ g\\omega_{i} \\bar{\\mathcal{A}}_{4}n}{T} \\right) \\right]\\, .\n\\label{action_quark}\n\\end{eqnarray}\nAs in the YM part, the first (second) term corresponds to the zero-(finite-)temperature contribution. The zero-temperature contribution agrees with the previous result obtained in Ref.~\\cite{Ozaki:2013sfa}. \n\nAgain, the first term contains UV divergences. \nThese divergences have two origins: QCD and QED \\cite{Ozaki:2013sfa}. \nThis is because the resummed quark one-loop diagrams contain contributions from the diagrams with only two EM field insertions (QED) and only two chromo-EM field insertions (QCD).\nThe UV divergence coming from purely QCD dynamics is additive to the one we encounter in the YM part. Thus, we can absorb all the UV divergences by renormalizing the couplings $g$, $e$ and the fields $\\mathcal{A}_{\\mu}$, $a_{\\mu}$. \nFrom the renormalization procedure at zero temperature, we have obtained the correct beta functions of both QCD and QED in Ref.~\\cite{Ozaki:2013sfa}. The sum of the three parts (\\ref{action_gluon}), (\\ref{action_ghost}), and (\\ref{action_quark}) may be called the Euler-Heisenberg-Weiss action in QCD+QED at finite temperature. This result can be applied to several systems where strong EM fields and chromo-EM fields coexist at zero and finite temperatures. In the next section, we will show some applications of our effective actions.\\\\\n\n\n\n\\section{Applications of Euler-Heisenberg-Weiss action in QCD+QED}\n\nIn this section we will discuss two applications of our results. 
\nThe first one is the quark pair production in the presence of both EM and chromo-EM fields. We treat the effective action at zero temperature. The second application is to investigate the effects of EM fields on the effective potential for the Polyakov loop at finite temperature. We will discuss possible implications for the inverse magnetic catalysis.\n\n\n\n\n\n\\subsection{Quark pair production in QCD+QED fields}\n\nLet us first discuss quark-antiquark pair production in constant QCD+QED fields as an application of our effective action. For this problem, only the quark part (\\ref{action_quark}) is relevant.\n\nIn the early stage of relativistic heavy-ion collisions, extremely strong chromo-EM fields and EM fields could coexist. \nNotice that the strong {\\it{electric}} field in addition to the strong magnetic field could be created on an event-by-event basis~\\cite{Deng:2012pc}.\nThe strength of the chromo-EM fields is approximately of the order of the saturation scale: $|g\\vec{\\mathcal{B}}|, |g\\vec{\\mathcal{E}}| \\sim Q_{s}^2 $, whereas the strengths of the EM fields would reach the QCD nonperturbative scale $|e\\vec E|, |e\\vec B|\\sim \\Lambda_{QCD}^2$, or even exceed it. Under such strong QCD+QED fields, a number of quark-antiquark pairs must be created through the Schwinger mechanism. The pair-production rate per unit space-time volume can be obtained from the imaginary part of the quark effective Lagrangian at zero temperature. \nTaking the zero-temperature contribution in Eq.~(\\ref{action_quark}), one finds\n\\begin{eqnarray}\n\\mathcal{L}_{\\rm quark}\n= \\frac{ S_{\\rm quark} }{ \\int d^{4}x } \n= \\frac{1}{8\\pi^{2}} \\sum_{i=1}^{N_{c}} \\sum_{f=1}^{N_{f}} \\int^{\\infty}_{0} \\frac{ds}{s^{3}} {\\rm e}^{-is(m_{q_{f}}^{2} - i \\delta) } (\\mathfrak{a}_{i,f}s)(\\mathfrak{b}_{i,f}s) {\\rm{cot}}(\\mathfrak{a}_{i,f}s) {\\rm{coth}}(\\mathfrak{b}_{i,f}s)\\, .\n\\end{eqnarray}\nThis is the same as the result obtained in Ref.~\\cite{Ozaki:2013sfa}. 
\nThe imaginary part of the effective Lagrangian thus reads\n\\begin{eqnarray}\n{\\Im}m\\, \\mathcal{L}_{\\rm quark}\n&=& - \\frac{1}{8\\pi^{2}} \\sum_{i=1}^{N_{c}} \\sum_{f=1}^{N_{f}} \\int^{\\infty}_{0} \\frac{ds}{s^{3}} {\\rm e}^{- \\delta s } {\\rm{sin}}(m_{q_{f}}^{2} s) \\times (\\mathfrak{a}_{i,f}s)(\\mathfrak{b}_{i,f}s) {\\rm{cot}}(\\mathfrak{a}_{i,f}s) {\\rm{coth}}(\\mathfrak{b}_{i,f}s) \\nonumber \\\\\n&=& \\frac{1}{2 i } \\frac{1}{8 \\pi^{2} } \\sum_{i=1}^{N_{c}} \\sum_{f=1}^{N_{f}} \\left\\{\n\\int^{0}_{-\\infty} \\frac{ds}{s^{3}}\\, {\\rm e}^{-is(m_{q_{f}}^{2} + i\\delta ) } + \\int^{\\infty}_{0} \\frac{ds}{s^{3}}\\, {\\rm e}^{-is(m_{q_{f}}^{2} - i \\delta ) } \\right\\} \\nonumber \\\\\n&& \\qquad \\times (\\mathfrak{a}_{i,f}s)(\\mathfrak{b}_{i,f}s) {\\rm{cot}}(\\mathfrak{a}_{i,f}s) {\\rm{coth}}(\\mathfrak{b}_{i,f}s)\\, .\n\\end{eqnarray}\nThe integrand has infinitely many poles along the real axis [from ${\\rm{cot}}( \\mathfrak{a}_{i,f}s)$] and along the imaginary axis [from ${\\rm{coth}}(\\mathfrak{b}_{i,f}s)$]. With a small positive number $\\delta>0$, the integral contour along the real axis is inclined. Closing the contour in the lower half of the $s$ plane as depicted in Fig. 2 and picking up the poles lying on the imaginary axis $s_{\\rm poles} = - i n \\pi \/ \\mathfrak{b}_{i,f}$, we find\n\\begin{figure}[t]\n\\begin{minipage}{0.8\\hsize}\n\\begin{center}\n\\includegraphics[width=0.8 \\textwidth]{lower_contour.pdf}\n\\vskip -0.1in\n\\end{center}\n\\end{minipage}\n\\caption{\nContour on the complex $s$ plane. 
The contour along the real axis is inclined by an infinitesimal number $\\delta>0$.}\n\\end{figure}\n\\begin{eqnarray}\n{\\Im}m\\, \\mathcal{L}_{\\rm quark}\n&=& \\frac{1}{8 \\pi^{2}} \\sum_{i=1}^{N_{c}} \\sum_{f=1}^{N_{f}} \\mathfrak{a}_{i,f} \\mathfrak{b}_{i,f} \\sum_{n=1}^{\\infty} \\frac{1}{n}\\, {\\rm e}^{ - \\frac{ m_{q_{f}}^{2} }{\\mathfrak{b}_{i,f}} n\\pi }\n {\\rm{coth}} \\left( \\frac{\\mathfrak{a}_{i,f}}{\\mathfrak{b}_{i,f}} n \\pi \\right).\n\\label{ImLq_full}\n\\end{eqnarray}\nBy using this expression, we can investigate quark-antiquark pair production under arbitrary configurations of constant chromo-EM and EM fields. \nThe production rate per unit space-time volume is given by \n$\nw_{q\\bar{q}} = 2{\\Im}m\\, \\mathcal{L}_{\\rm quark}.\n$\nWhen we take $N_{c} = N_{f}=1$, $Q=1$, $g\\to 0$, $B\\to 0$ and replace $m_{q} \\to m_{e}$ in Eq.~(\\ref{ImLq_full}), we reproduce the well-known Schwinger formula for the production rate of $e^+e^-$ pairs in an electric field \\cite{Schwinger:1951nm}:\n\\begin{eqnarray}\nw_{e^{+}e^{-}} = 2{\\Im m}\\, \\mathcal{L}_{\\rm EH} \n= \\frac{(eE)^{2}}{4 \\pi^{3}} \\sum_{n=1}^{\\infty} \\frac{1}{n^{2}} {\\rm e}^{ - \\frac{ m_{e}^{2} }{eE} n\\pi },\n\\label{rate_e}\n\\end{eqnarray}\nas expected.\nOn the other hand, in the pure chromo-electric field case, we obtain the same formula for quark production as derived by G.C.~Nayak \\cite{Nayak:2005pf}.\n\n\n\\subsubsection{Quark pair production in purely electric background}\n\nFirst, we shall consider quark pair production in a purely electric background with vanishing magnetic fields: $\\vec{B}, \\vec{\\mathcal{B}} \\to 0$.\nIn this case, the production rate for $q\\bar q$ pairs of flavor $f$ becomes\n\\begin{eqnarray}\nw_{q_f \\bar q_f}\n&=& \\frac{1}{4 \\pi^{3}} \\sum_{i=1}^{N_{c}} \\mathfrak{b}_{i,f}^{2} \\sum_{n=1}^{\\infty} \\frac{1}{n^{2}}\\, {\\rm e}^{- \\frac{ m_{q_{f}}^{2}}{\\mathfrak{b}_{i,f}} n \\pi}, \n\\label{EcEformula}\n\\end{eqnarray}\nwhere \n$\\mathfrak{b}_{i,f}= \\sqrt{ \\vec{ \\mathcal{E} }_{i,f}^{2} } = \\sqrt{ (g \\omega_{i})^{2} \\mathcal{E}^{2} \n + (eQ_{q_{f}})^{2} E^{2} \n + 2g \\omega_{i}eQ_{q_{f}} \\mathcal{E} E \n {\\rm{cos}}\\theta_{\\mathcal{E}E} \n }$,\nwith \n$E = \\sqrt{ \\vec{E}^{2} }$, $\\mathcal{E} = \\sqrt{ \\vec{ \\mathcal{E} }^{2} }$,\nand $\\theta_{\\mathcal{E}E}$ being the angle between $\\vec{E}$ and $\\vec{\\mathcal{E}}$. \nFor $N_{c} = 3$, the eigenvalues $\\omega_{i}$ are given by $\\omega_{1} = 1\/2$, $\\omega_{2} = -1\/2$, and $\\omega_{3}=0$. \nRecall that a factor $g\\omega_i$ plays the role of an effective coupling between the chromo-EM field and quarks [see Eq.~(\\ref{linear_combination})]. Thus, a quark (or an antiquark) with $\\omega_3=0$ does not interact with the chromo-EM field in this representation. Still, since there is always a coupling with the EM fields, $q\\bar q$ production with $\\omega_3=0$ is possible due to electric fields, i.e., $\\mathfrak{b}_{i=3,f}=|eQ_{q_f} E|\\neq 0$. \n\nLet us now examine the dependence of the production rate on the quark mass $m_q$ and the angle $\\theta_{\\mathcal{E}E}$. We first consider the case with light quark masses $m_{q_f}^2\\ll \\mathfrak{b}_{i,f}$. \nThe left panel of Fig. 3 shows the light (up) quark production rate with $m_{q}=5$ MeV and $Q_{q}=+2\/3$. 
\nThe chromo-electric field is fixed to $g\\mathcal{E}=1$~GeV$^{2}$, which is a typical value realized in heavy-ion collisions at RHIC and LHC, while we take several values of the $E$-field strength. The production rate increases with increasing $E$ field, which is an expected behavior of the usual Schwinger mechanism, but it shows no dependence on the angle $\\theta_{{\\cal E}E}$, while $\\mathfrak{b}_{i,f}$ certainly depends on $\\theta_{{\\cal E}E}$.\nThis unexpected behavior can be understood as follows:\nWhen the quark mass is small enough, $m_{q}^2 \\ll \\mathfrak{b}_{i,f}$, we can approximate the production rate as\n\\begin{eqnarray}\nw_{q_f\\bar q_f}\n\\sim \\frac{1}{4 \\pi^{3}} \\sum_{i=1}^{N_{c}} \\mathfrak{b}_{i}^{2} \\sum_{n=1}^{\\infty} \\frac{1}{n^{2}} = \\frac{1}{4\\pi^{3}} \\left\\{ \\frac{ (g\\mathcal{E})^{2} }{2} + N_{c}(eQ_{q}E)^{2} \\right\\} \\zeta(2)\\, ,\n\\end{eqnarray}\nwhere $\\zeta(2) = \\pi^{2}\/6$ and $\\mathfrak{b}_{i} = \\sqrt{ (g \\omega_{i})^{2} \\mathcal{E}^{2} + (eQ_{q})^{2} E^{2} + 2g \\omega_{i}eQ_{q} \\mathcal{E} E {\\rm{cos}}\\theta_{\\mathcal{E}E} }$.\nNotice that the angle dependence in $\\mathfrak{b}_i$ drops out thanks to the relations $\\sum_{i=1}^{N_{c}} \\omega_{i}^{2} = 1\/2$ and $\\sum_{i=1}^{N_{c}} \\omega_{i} = 0$.\nTherefore, the production rate is independent of the angle $\\theta_{\\mathcal{E}E}$. \n\n\n\nWe next discuss the production of heavy quark-antiquark pairs. Since pair creation is completely suppressed in the heavy-quark limit, we consider the case where quark masses are comparable to the background field $m_q^2 \\sim \\mathfrak{b}_{i,f}$. This is realized for charm quarks if we again take the typical value of the chromo-electric field $g{\\cal E}=1~$GeV$^2$. For $m_{c}=1.25$~GeV and $Q_{q}= Q_{\\rm charm} = +2\/3$, the production rate of a charm quark pair is shown in the right panel of Fig.~3. 
This time, while the production rate becomes small, one can see a clear dependence on the angle $\\theta_{{\\cal E}E}$. Both effects (small production rate and angle dependence) come from the exponential factor in Eq.~(\\ref{EcEformula}).\nIn particular, when the electric field is parallel (or antiparallel) to the chromo-electric field, the production rate has a maximum.\nSince the exponential factor is very sensitive to the change of $\\mathfrak{b}_{i,f}$, the rate is strongly enhanced at $\\theta_{{\\cal E}E}= 0, \\pi$. \nThe symmetric shape of the angle dependence with respect to $\\theta_{{\\cal E}E}=\\pi\/2$ is less trivial. Notice that the effective field strengths of the combined field at $\\theta_{{\\cal E}E}= 0$ and $\\pi$ are not equivalent for a fixed value of $i$; namely, it is the strongest for the parallel configuration (for $\\omega_i>0$) $\\mathfrak{b}_{i,{\\rm charm}}(\\theta_{{\\cal E}E}= 0)=\\sqrt{ (g \\omega_{i})^{2} \\mathcal{E}^{2} \n + (eQ_{\\rm charm})^{2} E^{2} \n + 2g \\omega_{i}eQ_{\\rm charm} \\mathcal{E} E \n }$ and the weakest for the antiparallel configuration\n$\\mathfrak{b}_{i,{\\rm charm}}(\\theta_{{\\cal E}E}= \\pi)=\\sqrt{ (g \\omega_{i})^{2} \\mathcal{E}^{2} \n + (eQ_{\\rm charm})^{2} E^{2} \n - 2g \\omega_{i}eQ_{\\rm charm} \\mathcal{E} E \n }$, implying that pair production is most enhanced for the parallel configuration. This is true for any index $i$ with a positive eigenvalue $\\omega_{i} > 0$. However, this eigenvalue appears with a partner $\\omega_{j}$ having an opposite sign $\\omega_j=-\\omega_i$ [for SU(3) we have $\\omega_1=-\\omega_2=1\/2$], and the antiparallel configuration gives the strongest effective field for the index $j$, $\\mathfrak{b}_{j,{\\rm charm}}(\\theta_{{\\cal E}E}=\\pi)=\\mathfrak{b}_{i,{\\rm charm}}(\\theta_{{\\cal E}E}=0)$. 
Therefore, after summing over all the pairwise modes $i$, we obtain an angle dependence symmetric with respect to $\\theta_{{\\cal E}E}=\\pi\/2$.\n\n\n\\begin{figure*}[t]\n\\begin{tabular}{cc}\n\\begin{minipage}{0.55\\hsize}\n\\includegraphics[width=0.8 \\textwidth, bb = 160 50 750 600]{eE-and-angle-dep_2ImLq_only_EcE_ver2.pdf}\n\\end{minipage}\n\\begin{minipage}{0.55\\hsize}\n\\includegraphics[width=0.8 \\textwidth, bb = 160 50 750 600]{eE-and-angle-dep_charm-prod_only_EcE_ver2.pdf}\n\\end{minipage}\n\\end{tabular}\n\\caption{ Quark production rate as a function of the angle $\\theta_{E_{\\rm{chro}}E}$, which stands for $\\theta_{\\mathcal{E}E}$.\nThe left panel is the light (up) quark production rate, while the right panel is the heavy (charm) quark production rate.\nThe chromo-electric field is fixed as $g\\mathcal{E} = 1$ GeV$^{2}$. \n}\n\\end{figure*}\n\n\n\n\n\\subsubsection{Quark pair production in purely chromo-EM background}\n\nNext, we investigate quark pair production under chromo-EM fields in the absence of EM fields. \nThe Lorentz-invariant quantities $\\mathbb{F}_{i,f}^{2}$ and $\\mathbb{F}_{i,f}\\cdot \\tilde{\\mathbb{F}}_{i,f}$ are now explicitly given as [see Eq.~(\\ref{FFtilde})]\n\\begin{eqnarray}\n\\mathbb{F}_{i,f}^{2} = 2 (g \\omega_{i})^{2} (\\mathcal{B}^{2} - \\mathcal{E}^{2} )\\, , \\ \\ \\ \\ \\ \\ \n\\mathbb{F}_{i,f}\\cdot \\tilde{\\mathbb{F}}_{i,f} = -4 (g \\omega_{i})^{2} \\mathcal{E} \\mathcal{B} {\\rm{cos}}\\, \\theta_{\\mathcal{E} \\mathcal{B} }\\, ,\n\\end{eqnarray}\nwhere $\\mathcal{B} = \\sqrt{ \\vec{ \\mathcal{B} }^{2} }$, and $\\theta_{\\mathcal{E} \\mathcal{B}}$ stands for the angle between $\\vec{\\mathcal{E}}$ and $\\vec{\\mathcal{B}}$. 
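These two invariants fix $\\mathfrak{a}_{i}$ and $\\mathfrak{b}_{i}$ at any angle. A short numerical check (our own sketch, with illustrative names) confirms the limiting cases: at $\\theta_{\\mathcal{E}\\mathcal{B}}=\\pi\/2$ with $\\mathcal{E}>\\mathcal{B}$ one finds $\\mathfrak{a}_{i}=0$ and $\\mathfrak{b}_{i}=|g\\omega_{i}|\\sqrt{\\mathcal{E}^{2}-\\mathcal{B}^{2}}$, while at $\\theta_{\\mathcal{E}\\mathcal{B}}=0$ one finds $\\mathfrak{a}_{i}=|g\\omega_{i}\\mathcal{B}|$ and $\\mathfrak{b}_{i}=|g\\omega_{i}\\mathcal{E}|$.

```python
import math

def chromo_ab(g_omega, calE, calB, theta):
    # Invariants for a pure chromo-EM background at relative angle theta
    F2 = 2.0 * g_omega ** 2 * (calB ** 2 - calE ** 2)
    FFt = -4.0 * g_omega ** 2 * calE * calB * math.cos(theta)
    root = math.sqrt(F2 ** 2 + FFt ** 2)       # sqrt(F^4 + (F.Ftilde)^2)
    a = 0.5 * math.sqrt(max(root + F2, 0.0))
    b = 0.5 * math.sqrt(max(root - F2, 0.0))
    return a, b
```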
When $\\theta_{\\mathcal{E} \\mathcal{B} } = \\pm \\pi\/2$ and $\\mathcal{E} > \\mathcal{B}$, by a Lorentz transformation we can move into a frame with a pure chromo-electric field, where\n$\n\\mathfrak{a}_{i,f} = \\mathfrak{a}_{i} = 0 $ and\n$\\mathfrak{b}_{i,f} = \\mathfrak{b}_{i} = |g \\omega_{i}| \\sqrt{\\mathcal{E}^{2} - \\mathcal{B}^{2} }$. Then, the production rate for a certain flavor of quark becomes\n\\begin{eqnarray}\n2 {\\Im m}\\, \\mathcal{L}_{\\rm quark}\n&=& \\frac{1}{4\\pi^{3}} \\sum_{i=1}^{N_{c}} \\mathfrak{b}_{i}^{2} \\sum_{n=1}^{\\infty} \\frac{1}{n^{2}} \\, {\\rm e}^{- \\frac{ m_{q}^{2} }{\\mathfrak{b}_{i}} n \\pi },\n\\end{eqnarray}\nwhich decreases as $\\mathcal{B}$ increases. \nFurthermore, for $\\mathcal{B} \\ge \\mathcal{E}$ the production rate vanishes since in this case the system is equivalent to the pure chromo-magnetic field system.\nWhen $\\theta_{\\mathcal{E} \\mathcal{B}} = 0, \\pi$, which would be relevant configurations for relativistic heavy-ion collisions, $\\mathfrak{a}_{i}$ and $\\mathfrak{b}_{i}$ become\n$\n\\mathfrak{a}_{i} = | g \\omega_{i} \\mathcal{B}|$, $\\mathfrak{b}_{i} = |g \\omega_{i} \\mathcal{E}|$.\nThen, the production rate reads\n\\begin{eqnarray}\n2 {\\Im m}\\, \\mathcal{L}_{\\rm quark}\n&=& \\frac{1}{4\\pi^{2}} \\sum_{i=1}^{N_{c}} | g \\omega_{i} \\mathcal{B} | | g \\omega_{i} \\mathcal{E}| \\sum_{n=1}^{\\infty} \\frac{1}{n}\\, {\\rm e}^{ - \\frac{ m_{q}^{2} }{ | g \\omega_{i} \\mathcal{E} | } n \\pi }\n{\\rm{coth}} \\left( \\frac{ \\mathcal{B} }{ \\mathcal{E} } n \\pi \\right).\n\\label{chromoEB}\n\\end{eqnarray}\nThis production rate coincides with the results obtained in Refs.~\\cite{Suganuma:1991ha, Tanji:2008ku}.\nIt increases as either the chromo-electric field or the chromo-magnetic field increases.\nFigure~4 shows the $\\theta_{\\mathcal{E} \\mathcal{B}}$ dependence of the light quark production rate with a fixed value of the chromo-electric field, $g\\mathcal{E}=1$ GeV$^{2}$.\nThe maxima appear when the 
chromo-magnetic field is parallel (or antiparallel) to the chromo-electric field.\n\n\n\\begin{figure}\n\\begin{minipage}{0.8\\hsize}\n\\begin{center}\n\\includegraphics[width=0.8 \\textwidth]{angle-dep_2ImLq_EcHc_ver2.pdf}\n\\vskip -0.1in\n\\end{center}\n\\end{minipage}\n\\caption{\nLight (up) quark production rate as a function of $\\theta_{E_{\\rm{chro}}B_{\\rm{chro}}}$, which stands for $\\theta_{\\mathcal{E}\\mathcal{B}}$ with vanishing electromagnetic fields.\nWe take the strength of the chromo-electric field as $g\\mathcal{E} = 1$ GeV$^{2}$. \n}\n\\end{figure}\n\n\n\\subsubsection{Quark pair production in a glasma with EM fields}\n\nNow we shall consider a specific configuration of chromo-EM fields that are relevant for relativistic heavy-ion collisions accompanied by EM fields. Suppose that the chromo-electric field and the chromo-magnetic field are parallel to each other, $\\vec{\\mathcal{B}} \\parallel \\vec{\\mathcal{E}}$, and that their strengths are approximately equal to the saturation scale: $|g\\vec{\\mathcal{B}}| = |g\\vec{\\mathcal{E}}| = 1$~GeV$^{2} \\sim Q_{s}^2$. \nThis configuration of chromo-EM fields is indeed realized at the very early stage of the glasma evolution.\nUnder this condition, we investigate light (up) quark production with $m_{q}=0.5$ MeV and $Q_{q} = +2\/3$.\nLet us turn on the EM fields. 
\nIn heavy-ion collisions, the dominant EM field is the magnetic field perpendicular to the beam direction (equivalent to the direction of the glasma fields). But here we consider the case $|e\\vec{B}| \\neq 0$ and $|e\\vec{E}|=0$, with arbitrary orientation. Then, the quantities $\\mathbb{F}_{i,f}^{2}$ and $\\mathbb{F}_{i,f} \\cdot \\tilde{\\mathbb{F}}_{i,f}$ read [see Eq.~(\\ref{FFtilde})]\n\\begin{eqnarray}\n\\mathbb{F}_{i,f}^{2}\n&=& 2 \\left[ (eQ_{q})^{2} B^{2} + 2 g \\omega_{i} eQ_{q} \\mathcal{B}B {\\rm{cos}} \\theta_{\\mathcal{B}B} \\right], \\nonumber \\\\\n\\mathbb{F}_{i,f} \\cdot \\tilde{\\mathbb{F}}_{i,f}\n&=& -4 \\left[ (g \\omega_{i})^{2} \\mathcal{E} \\mathcal{B} + g \\omega_{i} eQ_{q} \\mathcal{E} B {\\rm{cos}} \\theta_{\\mathcal{B}B} \\right],\n\\end{eqnarray} \nwith $B = \\sqrt{ \\vec{B}^{2} }$. Here we have used the fact that ${\\rm{cos}} \\theta_{\\mathcal{E} B} = {\\rm{cos}} \\theta_{\\mathcal{B}B}$. \nNote that in the case of an antiparallel configuration of $\\vec{\\mathcal{B}}$ and $\\vec{\\mathcal{E}}$, the results are the same as those of the parallel case, since this only changes $\\mathbb{F}_{i,f}\\cdot \\tilde{\\mathbb{F}}_{i,f} \\to - \\mathbb{F}_{i,f}\\cdot \\tilde{\\mathbb{F}}_{i,f}$, which enters $\\mathfrak{a}_{i,f}$ and $\\mathfrak{b}_{i,f}$ only through its square. \n\nFigure~5 shows the quark production rate as a function of the angle $\\theta_{\\mathcal{B}B}$ with several strengths of the magnetic field.\nAt the angle relevant for relativistic heavy-ion collisions, $\\theta_{\\mathcal{B} B} = \\pi\/2$, the production rate slightly decreases with increasing $B$ field. 
This can be understood from Eq.~(\\ref{ImLq_full}) as follows:\nIn this case, the quantity $\\mathfrak{a}_{i,f} = \\frac{1}{2} \\sqrt{ \\sqrt{ 4(eQ_{q})^{4} B^{4} + 16 (g \\omega_{i})^{4} \\mathcal{E}^{2} \\mathcal{B}^{2} } + 2(eQ_{q})^{2} B^{2} }$ (or $\\mathfrak{b}_{i,f} = \\frac{1}{2} \\sqrt{ \\sqrt{ 4(eQ_{q})^{4} B^{4} + 16 (g \\omega_{i})^{4} \\mathcal{E}^{2} \\mathcal{B}^{2} } - 2(eQ_{q})^{2} B^{2} }$ ) increases (decreases) with increasing $B$ field, while the product $\\mathfrak{a}_{i,f} \\mathfrak{b}_{i,f} = | \\vec{ \\mathcal{E} }_{i,f} \\cdot \\vec{ \\mathcal{B} }_{i,f} |= (g \\omega_{i})^{2} \\mathcal{E} \\mathcal{B}$ is independent of the $B$ field.\nTherefore, at $\\theta_{\\mathcal{B} B} = \\pi\/2$, the quark production rate monotonically decreases due to the exponential factor ${\\rm exp}\\{- (m_{q}^{2}\/\\mathfrak{b}_{i,f}) n \\pi\\} $. \nThis result is independent of the sign of $\\omega_{i}$.\n\nOn the other hand, Fig.~5 shows that the quark production rate increases with increasing $B$ field at $\\theta_{\\mathcal{B}B} = 0$ and $\\pi$. This can be understood as follows:\nAt $\\theta_{\\mathcal{B}B} = 0, \\pi$, the quark production rate reads from Eq.~(\\ref{ImLq_full})\n\\begin{eqnarray}\n2 {\\Im m}\\, \\mathcal{L}_{\\rm{quark}}\n&=& \\frac{1}{4\\pi^{2}} \\sum_{i=1}^{N_{c}} |g\\omega_{i}| \\mathcal{E} \\mathcal{B}_{i,f} \\sum_{n=1}^{\\infty}\n\\frac{1}{n} {\\rm e}^{- \\frac{m_{q}^{2}}{|g\\omega_{i}|\\mathcal{E}} n \\pi } \\coth \\left( \\frac{ \\mathcal{B}_{i,f} }{ |g\\omega_{i}| \\mathcal{E} } n \\pi \\right),\n\\label{choromoEBandB}\n\\end{eqnarray}\nwhere the strength of the combined magnetic field has been defined as $\\mathcal{B}_{i,f} = |g\\omega_{i} \\mathcal{B} + eQ_{q}B | $ for $\\theta_{\\mathcal{B}B} = 0$, whereas $\\mathcal{B}_{i,f} = |g\\omega_{i} \\mathcal{B} - eQ_{q}B | $ for $\\theta_{\\mathcal{B}B} = \\pi$. \nThis production rate has a similar form to Eq.~(\\ref{chromoEB}).\nFirst, we consider the case $|g\\omega_{i} \\mathcal{B}| > |eQ_{q}B|$. 
When the chromo-magnetic field and the magnetic field are (anti)parallel to each other, $\\theta_{\\mathcal{B}B} = 0$ ($\\theta_{\\mathcal{B}B} = \\pi$), with $\\omega_{i} > 0$ ($\\omega_{i} < 0$), the strength of the combined magnetic field $\\mathcal{B}_{i,f}$ linearly increases with increasing $B$ field, and thus $\\coth \\left( \\frac{ \\mathcal{B}_{i,f} }{ |g\\omega_{i}| \\mathcal{E} } n \\pi \\right)$ slightly decreases and approaches unity.\nWhen $\\theta_{\\mathcal{B}B} = 0$ ($\\theta_{\\mathcal{B}B} = \\pi$) with $\\omega_{i} < 0$ ($\\omega_{i} > 0$), the field strength $\\mathcal{B}_{i,f}$ linearly decreases with increasing $B$ field, but $\\coth \\left( \\frac{ \\mathcal{B}_{i,f} }{ |g\\omega_{i}| \\mathcal{E} } n \\pi \\right)$ increases. Then, after summing over all the modes $i$, the production rate (\\ref{choromoEBandB}) at $\\theta_{\\mathcal{B}B} = 0$ ($\\theta_{\\mathcal{B}B} = \\pi$) monotonically increases with increasing $B$ field.\nIn the case of $|g\\omega_{i} \\mathcal{B}| \\le |eQ_{q}B|$, the production rate of both modes $i=1,2$ increases with increasing $B$ field regardless of the sign of $\\omega_{i}$, and thus the total production rate also monotonically increases.\nFurthermore, we again obtain the angle dependence symmetric with respect to $\\theta_{\\mathcal{B}B} = \\pi\/2$ in the production rate.\n\n\n\\begin{figure}\n\\begin{minipage}{0.8\\hsize}\n\\begin{center}\n\\includegraphics[width=0.8 \\textwidth]{light_quark_prod_eB_angle-dep_ver2.pdf}\n\\vskip -0.1in\n\\end{center}\n\\end{minipage}\n\\caption{\nLight (up) quark production rate in a $B$ field as a function of $\\theta_{B_{\\rm{chro}}B}$, which stands for $\\theta_{\\mathcal{B} B}$ with a parallel configuration of $\\vec{\\mathcal{E}}$ and $\\vec{\\mathcal{B}}$. \nWe take the strengths of the chromo-electromagnetic fields as $g\\mathcal{B} = g \\mathcal{E} = 1$~GeV$^{2}$.\n}\n\\end{figure}\n\n\nNext we consider the case with $|e\\vec{E}| \\neq 0$ and $|e\\vec{B}|=0$.\nIn this case, $\\mathbb{F}_{i,f}^{2}$ and $\\mathbb{F}_{i,f} \\cdot \\tilde{\\mathbb{F}}_{i,f}$ become [see Eq.~(\\ref{FFtilde})]\n\\begin{eqnarray}\n\\mathbb{F}_{i,f}^{2}\n&=& 2 \\left[- (eQ_{q_{f}})^{2} E^{2} - 2g \\omega_{i}eQ_{q_{f}} \\mathcal{E} E {\\rm{cos}} \\theta_{\\mathcal{E} E} \\right], \\nonumber \\\\\n\\mathbb{F}_{i,f} \\cdot \\tilde{\\mathbb{F}}_{i,f}\n&=& -4 \\left[ (g \\omega_{i})^{2} \\mathcal{E} \\mathcal{B} + g \\omega_{i}eQ_{q_{f}} \\mathcal{B} E {\\rm{cos}} \\theta_{\\mathcal{E}E} \\right]\\, .\n\\end{eqnarray}\nIn this expression, we have used ${\\rm{cos}} \\theta_{\\mathcal{B}E} = {\\rm{cos}} \\theta_{\\mathcal{E}E}$.\nAgain, the results are the same as those of the case where $\\vec{\\mathcal{B}}$ is antiparallel to $\\vec{\\mathcal{E}}$.\nFigure~6 shows the quark production rate as a function of the angle $\\theta_{\\mathcal{E}E}$ for several values of the electric-field strength.\nAs the electric field increases, the production rate increases over the whole angle region. 
\nThis can be understood in a similar way to the previous case as follows:\nAt $\\theta_{\\mathcal{E} E} = \\pi\/2$, the factor $\\mathfrak{a}_{i,f} \\mathfrak{b}_{i,f} = | \\vec{ \\mathcal{E} }_{i,f} \\cdot \\vec{ \\mathcal{B} }_{i,f} |= (g \\omega_{i})^{2} \\mathcal{E} \\mathcal{B}$ is independent of the electric field.\nAs for the individual factors, $\\mathfrak{a}_{i,f} = \\frac{1}{2} \\sqrt{ \\sqrt{ 4(eQ_{q})^{4} E^{4} + 16 (g \\omega_{i})^{4} \\mathcal{E}^{2} \\mathcal{B}^{2} } - 2(eQ_{q})^{2} E^{2} }$ decreases with increasing electric field, while $\\mathfrak{b}_{i,f} = \\frac{1}{2} \\sqrt{ \\sqrt{ 4(eQ_{q})^{4} E^{4} + 16 (g \\omega_{i})^{4} \\mathcal{E}^{2} \\mathcal{B}^{2} } + 2(eQ_{q})^{2} E^{2} }$ increases. \nThese behaviors are opposite to those of the previous case with $|e\\vec{E}| = 0$ and $|e\\vec{B}| \\neq 0$, and thus the production rate at $\\theta_{\\mathcal{E} E} = \\pi\/2$ monotonically increases. At $\\theta_{\\mathcal{E} E} = 0, \\pi$, the quark production rate (\\ref{ImLq_full}) can be rewritten as\n\\begin{eqnarray}\n2 {\\Im m}\\, \\mathcal{L}_{\\rm{quark}}\n&=& \\frac{1}{4\\pi^{2}} \\sum_{i=1}^{N_{c}} \\mathcal{E}_{i,f} |g\\omega_{i}| \\mathcal{B} \\sum_{n=1}^{\\infty}\n\\frac{1}{n} {\\rm e}^{- \\frac{m_{q}^{2}}{\\mathcal{E}_{i,f}} n \\pi } \\coth \\left( \\frac{ |g\\omega_{i} |\\mathcal{B} }{ \\mathcal{E}_{i,f} } n \\pi \\right),\n\\label{chromoEBandE}\n\\end{eqnarray}\nwhere the strength of the combined electric field has been defined as $\\mathcal{E}_{i,f} = | g\\omega_{i} \\mathcal{E} + e Q_{q}E| $ for $\\theta_{\\mathcal{E} E} = 0$ and $\\mathcal{E}_{i,f} = | g\\omega_{i} \\mathcal{E} - e Q_{q}E| $ for $\\theta_{\\mathcal{E} E} = \\pi$. 
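The double sum in Eq.~(\\ref{chromoEBandE}) converges quickly, so the rate can be checked numerically. A minimal Python sketch (an illustration only, assuming $N_{c}=2$ so that $\\omega_{i}=\\pm 1\/2$; field strengths and $m_{q}^{2}$ in GeV$^{2}$):

```python
import math

def production_rate(gE, gB, eQE, m2, theta_zero=True, nmax=200):
    """Quark production rate 2 Im L_quark of Eq. (chromoEBandE),
    sketched for N_c = 2 (omega_i = +-1/2).
    gE = g*E_chromo, gB = g*B_chromo, eQE = e*Q_q*E, m2 = m_q^2;
    theta_zero selects theta_EE = 0 (True) or theta_EE = pi (False)."""
    sign = 1.0 if theta_zero else -1.0
    total = 0.0
    for omega in (0.5, -0.5):
        Eif = abs(omega * gE + sign * eQE)  # combined electric field E_{i,f}
        Bif = abs(omega) * gB               # |g omega_i| * B_chromo
        if Eif == 0.0:
            continue                        # no pair creation without an electric field
        for n in range(1, nmax + 1):
            x = math.pi * n / Eif
            # (E_{i,f} |g omega_i| B / n) e^{-m_q^2 n pi / E_{i,f}} coth(|g omega_i| B n pi / E_{i,f})
            total += (Eif * Bif / n) * math.exp(-m2 * x) / math.tanh(Bif * x)
    return total / (4.0 * math.pi ** 2)
```

With $g\\mathcal{E}=g\\mathcal{B}=1$~GeV$^{2}$ the rate at $\\theta_{\\mathcal{E}E}=0$ grows monotonically as $eQ_{q}E$ increases, consistent with the discussion in the text.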
\nIn the case of $|g\\omega_{i} \\mathcal{E}| > |e Q_{q}E|$, when the chromo-electric field and the electric field are (anti)parallel to each other, $\\theta_{\\mathcal{E} E} = 0$ ($\\theta_{\\mathcal{E} E} = \\pi$), with $\\omega_{i} > 0$ ($\\omega_{i} < 0$), the strength of the combined electric field $\\mathcal{E}_{i,f}$ linearly increases with increasing $E$ field, and thus $\\coth \\left( \\frac{ |g\\omega_{i} |\\mathcal{B} }{ \\mathcal{E}_{i,f} } n \\pi \\right)$ monotonically increases.\nWhen $\\theta_{\\mathcal{E} E} = 0$ ($\\theta_{\\mathcal{E} E} = \\pi$) with $\\omega_{i} < 0$ ($\\omega_{i} > 0$), the field strength $\\mathcal{E}_{i,f}$ linearly decreases with increasing $E$ field, and $\\coth \\left( \\frac{ |g\\omega_{i} |\\mathcal{B} }{ \\mathcal{E}_{i,f} } n \\pi \\right)$ slightly decreases and approaches unity. \nThen, after summing over all the modes $i$, the production rate (\\ref{chromoEBandE}) at $\\theta_{\\mathcal{E} E} = 0$ ($\\theta_{\\mathcal{E} E} = \\pi$) monotonically increases with increasing $E$ field.\nOn the other hand, in the case of $|g\\omega_{i} \\mathcal{E}| \\le |e Q_{q}E|$, the production rate of both modes $i=1,2$ increases with increasing $E$ field regardless of the sign of $\\omega_{i}$, and thus the total production rate also monotonically increases.\nFrom these results, we expect that strong EM fields created in the early stage of relativistic heavy-ion collisions would largely affect quark production from a glasma (chromo-EM fields) depending on the field configurations, and would thus possibly influence the formation of QGP.\n\n\n\\begin{figure}\n\\begin{minipage}{0.8\\hsize}\n\\begin{center}\n\\includegraphics[width=0.8 \\textwidth]{light_quark_prod_eE_ver2.pdf}\n\\vskip -0.1in\n\\end{center}\n\\end{minipage}\n\\caption{\nLight (up) quark production rate in an $E$ field as a function of $\\theta_{E_{\\rm{chro}}E}$, which stands for $\\theta_{\\mathcal{E} E}$ with a parallel configuration of $\\vec{\\mathcal{E}}$ and $\\vec{\\mathcal{B}}$. \nWe take the strengths of the chromo-electromagnetic fields as $g\\mathcal{B} = g \\mathcal{E} = 1$~GeV$^{2}$.\n}\n\\end{figure}\n\n\n\\subsection{Weiss potential with electromagnetic fields}\n\nIn this subsection, we will investigate the effects of EM fields on the confinement-deconfinement phase transition by using the effective potential of the Polyakov loop in the presence of EM fields. \n\nPrior to going into the details, let us briefly explain the effective potential without external fields imposed. 
The one-loop calculation at finite temperature in SU(2) gauge theory and in the massless fermion limit yields the effective potential for the temporal component of the gauge field $(C = \\frac{ g \\bar{\\mathcal{A}}_{4} }{ 2 \\pi T })$ as~\\cite{Weiss:1980rj,Weiss:1981ev, Gross:1980br}\n\\begin{eqnarray}\nV^{\\rm Weiss}[C]=V_{\\rm YM}^{\\rm Weiss}[C]+V^{\\rm Weiss}_{\\rm quark}[C]\\, ,\n\\end{eqnarray}\nwhere the YM and quark parts are given, respectively, by \n\\begin{eqnarray}\nV_{\\rm YM}^{\\rm Weiss}[C]&=&- \\frac{ 3 }{ 45 } \\pi^{2} T^{4} + \\frac{3}{4} \\pi^{2} T^{4} C^{2} (1-C)^{2}\\, ,\\label{Weiss_YM}\\\\\nV_{\\rm quark}^{\\rm Weiss}[C]&=&- \\frac{7}{90} \\pi^{2} T^{4} + \\frac{1}{6} \\pi^{2} T^{4} C^{2} ( 2 - C^{2} )\\, .\\label{Weiss_quark}\n\\end{eqnarray}\nThis result is called the Weiss potential. In Fig.~7, we show the Weiss potential $V^{\\rm Weiss}[C]$ and its breakdown. \nWe see that in the YM part, the minima appear at $C=0$ and $C = 1$, reflecting the center symmetry $C\\to C+1$ in SU(2). Thus, selecting one of the two minima spontaneously breaks the center symmetry.\nSince the system should be in the deconfined phase in the high-temperature region where a perturbative approach becomes valid,\nthis result is natural. \nThe quark part of the effective potential explicitly breaks the center symmetry, and $C=0$ and $C=1$ are no longer degenerate. \nIn the presence of the quark part, $C=0$ is favored, which corresponds to the deconfined phase. We are now going to investigate how this picture is modified by the presence of external EM fields.\n\\begin{figure}\n\\begin{minipage}{0.8\\hsize}\n\\begin{center}\n\\includegraphics[width=0.8 \\textwidth]{weiss_potentials.pdf}\n\\vskip -0.1in\n\\end{center}\n\\end{minipage}\n\\caption{Weiss potential as a function of $C$. 
\nConstant terms which are independent of $C$ are subtracted.\n}\n\\end{figure}\n\n\nNow we come back to our most general results (\\ref{action_gluon}), (\\ref{action_ghost}), and (\\ref{action_quark}). \nTaking the vanishing limit of the chromo-EM fields, $\\vec{\\mathcal{E}}, \\vec{\\mathcal{B}} \\to 0$, but keeping the Polyakov loop $\\bar{\\mathcal{A}}_{4}$ and EM fields nonzero in the results, we obtain the effective potential\n\\begin{eqnarray}\nV_{\\rm eff} [\\bar{\\mathcal{A}}_{4}, E, B]\n&=& - \\frac{S_{\\rm eff}}{\\int d^{4}x } \\nonumber \\\\\n&=& \\frac{1}{32 \\pi^{2}} \\sum_{h=1}^{N_{c}^{2}-1} \\int^{\\infty}_{0} \\frac{ds}{s^{3}} \\left\\{ 4-2 \\right\\} 2 \\sum_{n=1}^{\\infty} {\\rm e}^{ i\\frac{n^{2}}{4T^{2}s} }\n{\\rm{cos}} \\left( \\frac{ g v_{h} \\bar{\\mathcal{A}}_{4} }{T} n \\right) \\nonumber \\\\\n&& - \\frac{1}{8\\pi^{2}} \\sum_{i=1}^{N_{c}} \\sum_{f=1}^{N_{f}} \\int^{\\infty}_{0} \\frac{ds}{s^{3}} {\\rm e}^{-im_{q_{f}}^{2}s} (\\mathfrak{a}_{f}s)( \\mathfrak{b}_{f}s ){\\rm{cot}}(\\mathfrak{a}_{f}s) {\\rm{coth}}(\\mathfrak{b}_{f}s) \\nonumber \\\\\n&& \\times 2 \\sum_{n=1}^{\\infty} (-1)^{n} {\\rm e}^{i \\frac{1}{4T^{2}} \\mathfrak{h}_{f}(s) n^{2}} {\\rm{cos}} \\left( \\frac{ g \\omega_{i} \\bar{\\mathcal{A}}_{4} }{T} n \\right),\n\\end{eqnarray}\nwhere $\\mathfrak{a}_f$ and $\\mathfrak{b}_f$ are just given by the EM fields as\n\\begin{eqnarray}\n\\mathfrak{a}_{f}\n= \\frac{1}{2} \\sqrt{ \\sqrt{ F_{f}^{4} + (F_{f}\\cdot \\tilde{F}_{f})^{2} } + F_{f}^{2} }\\, , \\qquad \n\\mathfrak{b}_{f}\n= \\frac{1}{2} \\sqrt{ \\sqrt{ F_{f}^{4} + (F_{f} \\cdot \\tilde{F}_{f} )^{2} } - F_{f}^{2} }\\, ,\n\\end{eqnarray}\nwith $F_{f}^{2} = 2 (eQ_{q_{f}})^{2} ( \\vec{B}^{2} - \\vec{E}^{2} ) $ and $F_{f} \\cdot \\tilde{F}_{f} = -4 (eQ_{q_{f}})^{2} \\vec{E} \\cdot \\vec{B}$. 
\nThe factor $\\mathfrak{h}_{f}(s)$ is given by\n\\begin{eqnarray}\n\\mathfrak{h}_{f}(s)\n&=& \\frac{ \\mathfrak{b}_{f}^{2} - {\\mathfrak{e}}_{f}^{2} }{ \\mathfrak{a}_{f}^{2} + \\mathfrak{b}_{f}^{2} } \\mathfrak{a}_{f} {\\rm{cot}}( \\mathfrak{a}_{f}s ) + \\frac{ \\mathfrak{a}_{f}^{2} + {\\mathfrak{e}}_{f}^{2} }{ \\mathfrak{a}_{f}^{2} + \\mathfrak{b}_{f}^{2} } \\mathfrak{b}_{f} {\\rm{coth}}(\\mathfrak{b}_{f} s)\\, ,\n\\end{eqnarray}\nwhere ${\\mathfrak{e}}_{f}^{2} = (u_{\\alpha} F_{f}^{\\alpha \\mu})( u_{\\beta} F_{f \\mu}^{\\beta}) = (eQ_{q_{f}})^{2} E^{2} $ with $u_{\\mu} = (1,0,0,0)$.\nHere we have subtracted divergences appearing in the zero-temperature contribution, which are independent of\n$\\bar{\\mathcal{A}}_{4}$.\n\n\\subsubsection{Weiss potential in magnetic fields}\n\nConsider a pure magnetic field case, $\\vec{E} \\to 0$, $\\vec{B} \\neq 0$.\nThen, the effective potential reads,\n\\begin{eqnarray}\nV_{\\rm eff} [\\bar{\\mathcal{A}}_{4}, B]\n&=&\\frac{1}{32 \\pi^{2}} \\sum_{h=1}^{N_{c}^{2}-1} \\int^{\\infty}_{0} \\frac{ds}{s^{3}} \\left\\{ 4-2 \\right\\} 2 \\sum_{n=1}^{\\infty} {\\rm e}^{ i\\frac{n^{2}}{4T^{2}s} }\n\\, {\\rm{cos}} \\left( \\frac{ g v_{h} \\bar{\\mathcal{A}}_{4} }{T} n \\right) \\nonumber \\\\\n&& - \\frac{1}{8\\pi^{2}} \\sum_{i=1}^{N_{c}} \\sum_{f=1}^{N_{f}} \\int^{\\infty}_{0} \\frac{ds}{s^{3}} {\\rm e}^{-im_{q_f}^{2}s} ( e|Q_{q_{f}}|B s) {\\rm{cot}}( e|Q_{q_{f}}|Bs) \\nonumber \\\\\n&&\\qquad \\times 2 \\sum_{n=1}^{\\infty} (-1)^{n} {\\rm e}^{i \\frac{ n^{2} }{ 4T^{2}s } }\\, {\\rm{cos}} \\left( \\frac{ g \\omega_{i}\\bar{\\mathcal{A}}_{4} }{T} n \\right).\n\\label{VB}\n\\end{eqnarray} \nWe rewrite the proper time integrals in two steps. Recall that the integral should be defined with an infinitesimally small number $\\delta$ which makes the contour slightly inclined to avoid the poles along the real axis (in the second term). 
Then we can easily change the contour from $[0,\\infty]$ along the real axis to $[-i\\infty,0]$ along the imaginary axis (the Wick rotation), since there is no pole along the imaginary axis. Finally, by renaming the variable $s$ as $-i\\sigma$, we obtain the following representation with integrals defined by real functions\\footnote{The second line of Eq.~(\\ref{PolyakovLoopwithB}) coincides with Eq.~(B.6) in the appendix of Ref.~\\cite{Bruckmann:2013oba}.}:\n\\begin{eqnarray}\nV_{\\rm eff} [\\bar{\\mathcal{A}}_{4}, B]\n&=& - \\frac{ 1 }{ 8\\pi^{2}} \\sum_{h=1}^{N_{c}^{2}-1} \\int^{\\infty}_{0} \\frac{d\\sigma }{\\sigma^{3}} \\sum_{n=1}^{\\infty} {\\rm e}^{- \\frac{n^{2}}{4T^{2}\\sigma} }\\, {\\rm{cos}}\\left( \\frac{ g v_{h} \\bar{\\mathcal{A}}_{4} }{ T } n \\right) \\nonumber \\\\\n&& +\\frac{1}{4\\pi^{2}} \\sum_{i=1}^{N_{c}} \\sum_{f=1}^{N_{f}} \\int^{\\infty}_{0} \\frac{d\\sigma}{\\sigma^{2}} {\\rm e}^{-m_{q_{f}}^{2}\\sigma} (e|Q_{q_{f}}|B) {\\rm{coth}}(e|Q_{q_{f}}|B\\sigma ) \\nonumber \\\\\n&&\\qquad \\times \\sum_{n=1}^{\\infty} (-1)^{n} {\\rm e}^{ - \\frac{n^{2}}{4T^{2}\\sigma} }\\, {\\rm{cos}} \\left( \\frac{ g \\omega_{i}\\bar{\\mathcal{A}}_{4} }{ T} n \\right).\n\\label{PolyakovLoopwithB}\n\\end{eqnarray}\nFor simplicity, we shall restrict ourselves to $N_{c}=2$, which provides us with all the essential features of the perturbative effective potential in the presence of EM fields. 
In this case, the eigenvalues $\\omega_{i}$ and $v_{h}$ are simply given by $\\omega_{i} = \\pm 1\/2$ and $v_{h} = 0, \\pm 1$.\nThe effective potential reads,\n\\begin{eqnarray}\nV_{\\rm eff}[ C, B ]\n&=& - \\frac{ 3 }{ 45 } \\pi^{2} T^{4} + \\frac{3}{4} \\pi^{2} T^{4} C^{2} (1-C)^{2} \\label{effectiveVTB} \\\\\n&& + \\frac{1}{2\\pi^{2}} \\sum_{f=1}^{N_{f}} \\int^{\\infty}_{0} \\frac{d\\sigma }{\\sigma^{2}} {\\rm e}^{-m_{q_{f}}^{2}\\sigma} (e|Q_{q_{f}}|B) {\\rm{coth}}(e|Q_{q_{f}}|B\\sigma ) \\sum_{n=1}^{\\infty} (-1)^{n} {\\rm e}^{ - \\frac{n^{2}}{4T^{2}\\sigma} } {\\rm{cos}} \\left( C\\pi n \\right)\\, . \\nonumber \n\\end{eqnarray}\nThe first line does not depend on the magnetic field and corresponds to the YM part $V_{\\rm YM}$. This is nothing but the Weiss potential~(\\ref{Weiss_YM}) \\cite{Weiss:1980rj}. The second line corresponds to the quark part $V_{\\rm quark}$, and the integral and summation over $n$ can be easily performed numerically. \nFrom now on, we further restrict ourselves to one flavor ($f=1$) with electric charge $Q_{q_{f}}=1$ for simplicity.\nNow, analytic expressions are available in two limiting cases: one is the $B\\to 0$ and $m_q\\to 0$ limit, where the quark part of the effective potential is reduced to that of the Weiss potential (\\ref{Weiss_quark}):\n\\begin{eqnarray}\nV_{\\rm quark} [C]\n&= & - \\frac{7}{90} \\pi^{2} T^{4} + \\frac{1}{6} \\pi^{2} T^{4} C^{2} ( 2 - C^{2} )=V^{\\rm Weiss}_{\\rm quark}[C]\\, .\n\\end{eqnarray}\nThe other is the strong magnetic field limit: $eB \\gg m_{q}^{2}$, where the quark part can be written as\n\\begin{eqnarray}\nV_{\\rm quark} [C, B]\n&=& - 2 \\frac{ (eB) }{ \\pi^{2} } T^{2} \\left\\{ \\frac{ \\pi^{2} }{ 12 } - \\frac{ (C\\pi)^{2} }{ 4 } \\right\\}. \\label{quarkVstrongB}\n\\end{eqnarray}\nFigure~8 shows the magnetic field dependence of the quark part of the effective potential which is given by the second line of Eq.~(\\ref{effectiveVTB}). 
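In the dimensionless variables $x=m_{q}^{2}\/T^{2}$, $y=eB\/T^{2}$, and $\\tau=T^{2}\\sigma$, the quark part (the second line of Eq.~(\\ref{effectiveVTB})) equals $T^{4}$ times a dimensionless integral; a minimal Python sketch of this numerical evaluation (one flavor with $|Q_{q}|=1$; the cutoffs and grid are illustrative):

```python
import math

def v_quark_B(C, x, y, nmax=40, tmin=1e-3, tmax=60.0, steps=3000):
    """Quark part of V_eff[C,B] (second line of Eq. (effectiveVTB)) in units
    of T^4, for one flavor with |Q_q| = 1; x = m_q^2/T^2, y = eB/T^2."""
    def integrand(t):
        s = sum((-1) ** n * math.exp(-n * n / (4.0 * t)) * math.cos(math.pi * C * n)
                for n in range(1, nmax + 1))
        ycoth = y / math.tanh(y * t) if y > 0.0 else 1.0 / t  # y coth(y tau) -> 1/tau as y -> 0
        return math.exp(-x * t) / (t * t) * ycoth * s
    # trapezoidal rule on a logarithmic grid; the integrand decays at both ends
    lo, hi = math.log(tmin), math.log(tmax)
    ts = [math.exp(lo + (hi - lo) * k / (steps - 1)) for k in range(steps)]
    return sum(0.5 * (integrand(a) + integrand(b)) * (b - a)
               for a, b in zip(ts, ts[1:])) / (2.0 * math.pi ** 2)
```

One finds $V_{\\rm quark}[0,B]<0$, becoming more negative as $y$ grows, while $V_{\\rm quark}[1,B]>0$, i.e., the magnetic field deepens the $C=0$ minimum.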
\nHere, we show only the one-flavor contribution with $x=m_q^2\/T^2=0.5$. An important observation is that as the magnetic field increases, the explicit breaking of the center symmetry is enhanced, and $C=0$ (deconfined phase) becomes more stable. This is qualitatively consistent with the analytic representation at strong magnetic fields [see Eq.~(\\ref{quarkVstrongB})] in that the potential value at $C=0$ becomes more negative and the rising behavior becomes steeper with increasing magnetic field. The enhancement of the center-symmetry-breaking effects due to an increasing magnetic field indicates that the quark loop interacting with magnetic fields can be one of the important sources for reducing the (pseudo)critical temperature $T_{c}$ of the confinement-deconfinement phase transition, as observed in recent lattice QCD simulations \\cite{Bruckmann:2013oba}. In the last part of this subsection, we will see within a phenomenological model that this is indeed the case.\n\n\n\n\\begin{figure}[t]\n\\begin{minipage}{0.8\\hsize}\n\\begin{center}\n\\includegraphics[width=0.8 \\textwidth]{potential_eB_full_paper.pdf}\n\\end{center}\n\\end{minipage}\n\\caption{Quark part of the effective potential as a function of $C$ for several values of the magnetic field. $x$ and $y$ are given as $x = m_{q}^{2}\/T^{2}$ and $y = eB\/T^{2}$, respectively. \n}\n\\end{figure}\n\n\n\n\n\\subsubsection{Weiss potential in electric fields}\n\n\nIn the case of a pure electric field, $\\vec{B} \\to 0$ and $\\vec{E} \\neq 0$, the situation is a bit subtle. 
\nThe effective potential of the quark part can be written as\n\\begin{eqnarray}\nV_{\\rm quark}[\\bar{\\mathcal{A}}_{4}, E]\n&=& - \\frac{ 1}{2\\pi^{2}} \\sum_{f=1}^{N_{f}} \\int^{\\infty}_{0} \\frac{ds}{s^{3}} {\\rm e}^{-im_{q_{f}}^{2}s} \\left( e|Q_{q_{f}}|Es \\right) {\\rm{coth}} \\left( e| Q_{q_{f}}| Es \\right) \\nonumber \\\\\n&& \\quad \\times \\sum_{n=1}^{\\infty} (-1)^{n} {\\rm e}^{i \\frac{n^2}{4T^{2}s} \\left( e |Q_{q_{f}}| Es \\right) {\\rm{coth}} \\left( e |Q_{q_{f}}|Es \\right) } {\\rm{cos}} \\left( \\frac{ g\\bar{\\mathcal{A}}_{4} }{2T } n \\right).\n\\end{eqnarray}\nNote that we cannot reach this result from Eq.~(\\ref{VB}) by replacing $B$ with $iE$, unlike the zero-temperature contribution. This is due to the form of the factor $\\mathfrak{h}_{f}(s)=(e|Q_{q_f}|E){\\rm coth}(e|Q_{q_f}|Es)$ in the exponential. Because of this factor, the full calculation (even numerical evaluation) is rather difficult. Furthermore, since there are singularities (poles) on the imaginary axis, we cannot perform the Wick rotation of the proper time $s$, unlike the Weiss potential in magnetic fields. \nTo avoid these difficulties, we expand the effective potential with respect to the electric field. 
Using $x\\,{\\rm{coth}}\\,x \\sim 1 + x^{2}\/3 + \\cdots$, we get\n\\begin{eqnarray}\nV_{\\rm quark}[\\bar{\\mathcal{A}}_{4}, E]\n&=& - \\frac{1}{2\\pi^{2}} \\sum_{f=1}^{N_{f}} \\int^{\\infty}_{0} \\frac{ds}{s^{3}} {\\rm e}^{-im_{q_{f}}^{2}s } \\sum_{n=1}^{\\infty} (-1)^{n} {\\rm e}^{i \\frac{ n^{2} }{4T^{2}s} } {\\rm{cos}} \\left( \\frac{ g \\bar{\\mathcal{A}}_{4} }{2T } n \\right) \\nonumber \\\\\n&& - \\frac{ 1}{6 \\pi^{2}} \\sum_{f=1}^{N_{f}} (e|Q_{q_{f}}| E)^{2} \\int^{\\infty}_{0} \\frac{ds}{s} {\\rm e}^{-im_{q_{f}}^{2}s} \\sum_{n=1}^{\\infty} (-1)^{n} {\\rm e}^{i \\frac{ n^{2}}{4T^{2}s} } \\left( 1 + i \\frac{ n^{2} }{ 4 T^{2} s } \\right) {\\rm{cos}} \\left( \\frac{ g \\bar{\\mathcal{A}}_{4} }{ 2T } n \\right) \\nonumber \\\\\n&& + {\\cal O}(E^{4})\\, .\n\\end{eqnarray}\nAt this stage, we can perform the Wick rotation for the proper time $s$. Then, the effective potential reads\n\\begin{eqnarray}\nV_{\\rm quark}[C, E]\n&=& \n\\frac{1}{2\\pi^{2}} \\sum_{f=1}^{N_{f}} \\int^{\\infty}_{0} \\frac{d\\sigma}{\\sigma^{3}} {\\rm e}^{-m_{q_{f}}^{2}\\sigma} \\sum_{n=1}^{\\infty} (-1)^{n} {\\rm e}^{- \\frac{n^{2}}{4T^{2}\\sigma} } {\\rm{cos}} \\left( \n C \\pi n \\right) \\nonumber \\\\\n && - \\frac{1}{6\\pi^{2}} \\sum_{f=1}^{N_{f}} (e|Q_{q_{f}}|E )^{2} \\int^{\\infty}_{0} \\frac{d\\sigma}{\\sigma} {\\rm e}^{-m_{q_{f}}^{2}\\sigma} \n \\sum_{n=1}^{\\infty} (-1)^{n} {\\rm e}^{- \\frac{ n^{2} }{ 4T^{2} \\sigma } } \\left( 1 - \\frac{ n^{2} }{ 4 T^{2} \\sigma } \\right) {\\rm{cos}} \\left( C \\pi n \\right) \\nonumber \\\\\n && + {\\cal O}(E^{4})\\, .\n\\end{eqnarray}\nThe systematic expansion with respect to the $E$ field is possible, and the integral and sum can be performed numerically at each order.\n\n\nIn Fig. 9 we show the electric field dependence of the quark part of the effective potential. From this figure, we see that the electric field weakens the explicit breaking of the center symmetry. This is completely opposite to the $B$ dependence of the effective potential. 
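The ${\\cal O}(E^{2})$ term can likewise be evaluated numerically; with $x=m_{q}^{2}\/T^{2}$ and $y_{E}=eE\/T^{2}$, a minimal Python sketch (one flavor with $|Q_{q}|=1$; the cutoffs are illustrative) is:

```python
import math

def dv_quark_E2(C, x, yE, nmax=40, tmin=1e-3, tmax=60.0, steps=3000):
    """O(E^2) term of the Wick-rotated V_quark[C,E] in units of T^4,
    for one flavor with |Q_q| = 1; x = m_q^2/T^2, yE = eE/T^2."""
    def integrand(t):
        s = sum((-1) ** n * math.exp(-n * n / (4.0 * t))
                * (1.0 - n * n / (4.0 * t)) * math.cos(math.pi * C * n)
                for n in range(1, nmax + 1))
        return math.exp(-x * t) / t * s
    # trapezoidal rule on a logarithmic grid
    lo, hi = math.log(tmin), math.log(tmax)
    ts = [math.exp(lo + (hi - lo) * k / (steps - 1)) for k in range(steps)]
    acc = sum(0.5 * (integrand(a) + integrand(b)) * (b - a)
              for a, b in zip(ts, ts[1:]))
    return -yE * yE / (6.0 * math.pi ** 2) * acc
```

The correction comes out positive at $C=0$ and negative at $C=1$, so it reduces the difference between the two points, in accordance with the weakening of the explicit center-symmetry breaking.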
Thus, we expect that $T_{c}$ increases with increasing $E$ field and approaches the $T_{c}$ of the pure YM theory.\n\n\n\\begin{figure}[t] \n\\begin{minipage}{0.8\\hsize}\n\\begin{center}\n\\includegraphics[width=0.8 \\textwidth]{potential_eE_OrderE2_paper.pdf}\n\\end{center}\n\\end{minipage}\n\\caption{ Quark part of the effective potential as a function of $C$ for several values of electric fields. $x$ and $y$ are given as $x = m_{q}^{2}\/T^{2}$ and $y = eE\/T^{2}$, respectively. \n}\n\\end{figure}\n\n\n\\subsubsection{Phenomenological analysis on $T_c(B)$}\n\nWe have seen that imposing magnetic fields enhances the explicit breaking of the center symmetry. What we have evaluated is a perturbative contribution (in the sense that we assume that the coupling is small enough), and thus we discussed how the Weiss potential (that is also evaluated in a perturbative framework) is modified in the presence of the EM fields. Within this perturbative calculation, we are not able to approach the region where phase transition will take place. Indeed, even if the quark part of the effective potential depends on the magnetic fields $V_{\\rm quark}[C,B]$, the total effective potential $V_{\\rm eff}[C,B]=V_{\\rm YM}[C]+V_{\\rm quark}[C,B]$ selects the center broken state $C=0$, and thus confinement-deconfinement phase transition never occurs within this perturbative framework. However, recall that the magnetic field can affect the effective potential of the Polyakov loop only through the quark loop at leading order. Therefore, we expect that even the perturbative evaluation of the quark part $V_{\\rm quark}[C,B]$ can make sense if combined with some nonperturbative effective potential $V_{\\rm YM}^{\\rm nonpert}[C]$ for study of the effects of magnetic fields on the phase transition. Here we discuss whether this is indeed the case. 
\n\n\n\nLet us introduce a simple model of a gluonic potential reproducing the confinement-deconfinement phase transition,\n\\begin{eqnarray}\n\\mathcal{U}[C]\n&=& -\\frac{1}{2}a(T) \\Phi^{2} + b(T)\\, {\\rm{ln}} \\left[ 1 - 6 \\Phi^{2} + 8 \\Phi^{3} - 3 \\Phi^{4} \\right]\n\\label{phenomenological_potential}\n\\end{eqnarray}\nwith\n\\begin{eqnarray}\na(T) = a_{0} + a_{1}(T_{0}\/T) + a_{2}(T_{0} \/ T )^{2}, \\ \\ \\ \\ b(T) = b_{3}(T_{0}\/T)^{3}.\n\\end{eqnarray}\nNow, we consider the $N_{c}=3$ case.\nHere the parameters are\n$a_{0} = 3.51,\\, a_{1} = -2.47,\\, a_{2} = 15.2,\\, b_{3} = -1.75$, and $T_{0} = 270$ MeV, which are fixed to reproduce the quenched lattice QCD results \\cite{Roessner:2006xn}.\nInstead of $V_{\\rm YM}$, we employ this phenomenological potential (\\ref{phenomenological_potential}) and combine it with $V_{\\rm quark} [C,B]$. In this way, we can study how the temperature dependence of the Polyakov loop changes with magnetic fields. \nNotice that the quark part of the perturbative effective potential $V_{\\rm quark}[C,B]$ with $N_{c}=3$ is the same as that with $N_{c}=2$, since the quark mode with $\\omega_{3}=0$ does not contribute to the $C$-dependent part of the potential.\nTherefore, we can use the same potential evaluated in the second line of Eq.~(\\ref{effectiveVTB}).\nThe result is shown in Fig. 10. In this analysis, we have used $\\omega_{i} = \\pm 1\/2, 0$ and a constituent quark mass $m_{q} = 350$ MeV. Thanks to the explicit center symmetry breaking, the Polyakov loop increases with increasing $B$ field, in particular below the phase transition temperature, \nwhich eventually brings about a decreasing pseudocritical temperature $T_c(B)$.\n\n\\begin{thm}\n Let $M^n$ be a CR submanifold of maximal CR dimension of $\\mathbf{P}^{\\frac{n+p}{2}}(\\mathbf{C})$, $n>2$. Let $\\eta$ be an umbilical normal vector of $M^n$. 
If the normal connection is flat, then $\\eta$ is in the direction of the distinguished normal vector $\\xi$.\n\\end{thm}\n\nIt is proved in \\cite{YTST} that there exist neither totally geodesic real hypersurfaces nor totally umbilical real hypersurfaces of a complex projective space. From Theorem 1.1, we can generalize this result to the following\n\n\\begin{cor}\n In $\\mathbf{P}^{\\frac{n+p}{2}}(\\mathbf{C})$ ($n>2$) there exist neither totally geodesic CR submanifolds of maximal CR dimension nor totally umbilical CR submanifolds of maximal CR dimension whose normal connections are flat.\n\\end{cor}\n\nNext we consider the converse of Theorem 1.1. For 3-dimensional submanifolds, we prove the following theorem.\n\n\\begin{thm}\n Let $M^3$ be a 3-dimensional CR submanifold of maximal CR dimension of $\\mathbf{P}^{\\frac{3+p}{2}}(\\mathbf{C})$, $p>1$. If the normal connection is flat, then $p=3$ and the distinguished normal vector $\\xi$ is umbilical.\n\\end{thm}\n\nAs an application of Theorems 1.1 and 1.3, we prove the non-existence of a class of CR submanifolds of maximal CR dimension of a complex projective space.\n\n\\begin{thm}\n In $\\mathbf{P}^{\\frac{3+p}{2}}(\\mathbf{C})$ $(p>1)$ there exist no 3-dimensional pseudo-umbilical CR submanifolds of maximal CR dimension with flat normal connection.\n\\end{thm}\n\n\\begin{cor}\n In $\\mathbf{P}^{\\frac{3+p}{2}}(\\mathbf{C})$ $(p>1)$ there exist no 3-dimensional minimal CR submanifolds of maximal CR dimension with flat normal connection.\n\\end{cor}\n\n\\begin{remark}\nWe should note that for some other ambient spaces there may exist pseudo-umbilical submanifolds with flat normal connection. 
For instance, from results of \\cite{BYC1} we know that minimal surfaces of a hypersphere of a Euclidean space $\\mathbf{E}^m$ and the product of two plane circles in $\\mathbf{E}^4$ are both pseudo-umbilical with flat normal connection.\n\\end{remark}\n\n\\section{\\bf Preliminaries}\n\\vskip 0.4 true cm\n\nLet $M^n$ be a CR submanifold of maximal CR dimension of $\\mathbf{P}^{\\frac{n+p}{2}}(\\mathbf{C})$. For each point $x\\in M^n$, the real dimension of the holomorphic tangent space $H_x(M^n)$ is $n-1$. Therefore $M^n$ is necessarily odd-dimensional and there exists a unit normal vector $\\xi_x$ such that\n$$JT_x(M^n)\\subset T_x(M^n)\\oplus span\\{\\xi_x\\}.$$\nWrite\n\\begin{equation}\\label{defU}\n U_x=-J\\xi_x.\n\\end{equation}\nIt is easy to see that $U_x$ is a unit tangent vector of $M^n$ which spans the totally real tangent space $R_x(M^n)$. So a tangent vector $Z_x$ of $M^n$ is a holomorphic tangent vector, i.e., $Z_x\\in H_x(M^n)$ if and only if $Z_x$ is orthogonal to $U_x$. For any $X\\in TM^n$, we may write\n\\begin{equation}\\label{defFu}\n JX=FX+u(X)\\xi,\n\\end{equation}\nwhere $F$ is a skew-symmetric endomorphism acting on $TM^n$, and $u$ is the one-form dual to $U$. It is proved in \\cite{MDMO1} that\n\\begin{equation}\\label{F2}\n F^2X=-X+u(X)U,\n\\end{equation}\n\\begin{equation}\\label{FU}\n u(FX)=0,\\ FU=0,\n\\end{equation}\nwhich imply that $M^n$ has an almost contact structure.\n\nLet $T^{\\bot}_1(M^n)$ be the subbundle of the normal bundle $T^{\\bot}(M^n)$ defined by\n$$\n T^{\\bot}_1(M^n)=\\{\\eta\\in T^{\\bot}(M^n)|\\langle \\eta,\\xi\\rangle=0\\},\n$$\nwhere $\\langle,\\rangle$ is the inner product of the tangent space of $\\mathbf{P}^{\\frac{n+p}{2}}(\\mathbf{C})$. 
Since $T^{\\bot}_1(M^n)$ is $J$-invariant, we can choose a local orthonormal basis of $T^{\\bot}(M^n)$ in the following way:\n\\begin{equation}\\label{nframe}\n \\xi,\\ \\xi_1,\\cdots,\\xi_q,\\ \\xi_{1^*},\\cdots,\\xi_{q^*},\n\\end{equation}\nwhere $\\xi_{a^*}=J\\xi_a,\\ a=1,\\cdots,q$ and $q=\\frac{p-1}{2}$.\n\nLet $A, A_a, A_{a^*}$ denote the shape operators for the normals $\\xi, \\xi_a, \\xi_{a^*}$, respectively. Write\n$$\n D\\xi=\\sum_a(s_a\\xi_a+s_{a^*}\\xi_{a^*}),\n$$\n$$\n D\\xi_a=-s_a\\xi+\\sum_b(s_{ab}\\xi_b+s_{ab^*}\\xi_{b^*}),\n$$\n$$\n D\\xi_{a^*}=-s_{a^*}\\xi+\\sum_b(s_{a^*b}\\xi_b+s_{a^*b^*}\\xi_{b^*}),\n$$\nwhere $s$'s are the coefficients of the normal connection $D$. Let $\\overline\\nabla$ be the connection of $\\mathbf{P}^{\\frac{n+p}{2}}(\\mathbf{C})$. By using the classical Weingarten formula and noting that $\\overline\\nabla J=0$, one can obtain the following relations (\\cite{MDMO1}):\n\\begin{equation}\\label{Aa*}\n A_{a^*}X=FA_aX-s_a(X)U,\n\\end{equation}\n\\begin{equation}\\label{Aa}\n A_{a}X=-FA_{a^*}X+s_{a^*}(X)U,\n\\end{equation}\n\\begin{equation}\\label{trAa*}\n {\\rm trace} A_{a^*}=-s_a(U),\\ {\\rm trace} A_a=s_{a^*}(U),\n\\end{equation}\n\\begin{equation}\\label{sa*}\n s_{a^*}(X)=\\langle A_aU,X\\rangle,\n\\end{equation}\n\\begin{equation}\\label{sa}\n s_{a}(X)=-\\langle A_{a^*}U,X\\rangle,\n\\end{equation}\n\\begin{equation}\\label{sab}\n s_{a^*b^*}=s_{ab},\\ s_{a^*b}=-s_{ab^*},\n\\end{equation}\n\\begin{equation}\\label{gbU}\n \\nabla_XU=FAX,\n\\end{equation}\nwhere $X,Y$ are tangent to $M^n$, $\\nabla$ is the connection induced from $\\overline\\nabla$, and $a,b=1,\\cdots,q$.\n\nTo prove our theorems, we need to write the classical equations of Codazzi and Ricci for submanifolds. For the sake of convenience, set $\\xi_0=\\xi$ and $\\alpha,\\beta=0,1,\\cdots,q,1^*,\\cdots,q^*$. 
Recall the equation of Codazzi for the normal vector $\\xi$ is given by \\cite{MDMO1}\n\\begin{align}\\label{CodazziA}\n (\\nabla_XA)Y-(\\nabla_YA)X =&-(\\overline{R}(X,Y)\\xi)^{\\top}+\\sum_b\\{s_b(X)A_bY-s_b(Y)A_bX\\}\\notag\\\\\n & +\\sum_b\\{s_{b^*}(X)A_{b^*}Y-s_{b^*}(Y)A_{b^*}X\\},\n\\end{align}\nwhere $\\overline{R}$ is the Riemannian curvature tensor of $\\mathbf{P}^{\\frac{n+p}{2}}(\\mathbf{C})$, $X,Y$ are tangent to $M^n$, $(\\overline{R}(X,Y)\\xi)^{\\top}$ is the tangent part of $\\overline{R}(X,Y)\\xi$, and $(\\nabla_XA)Y$ is defined as\n\\begin{equation}\\label{deflA}\n (\\nabla_XA)Y=\\nabla_XAY-A(\\nabla_XY).\n\\end{equation}\nRecall that the equation of Ricci is given by \\cite{MDMO1}\n\\begin{equation}\\label{eqRicci}\n \\langle R^{\\bot}(X,Y)\\xi_{\\alpha},\\xi_{\\beta}\\rangle=\\langle \\overline{R}(X,Y)\\xi_{\\alpha},\\xi_{\\beta}\\rangle+\\langle [A_{\\alpha},A_{\\beta}]X,Y\\rangle,\n\\end{equation}\nwhere $R^{\\bot}$ is the curvature tensor of the normal connection, and\n\\begin{equation*}\n [A_{\\alpha},A_{\\beta}]=A_{\\alpha}\\circ A_{\\beta}-A_{\\beta}\\circ A_{\\alpha}.\n\\end{equation*}\n\n\nNote that the Riemannian curvature tensor $\\overline{R}$ of $\\mathbf{P}^{\\frac{n+p}{2}}(\\mathbf{C})$ is given by\n\\begin{equation}\\label{curvatureP}\n\\overline{R}(\\overline{X},\\overline{Y})\\overline{Z}= \\langle\\overline{Y},\\overline{Z}\\rangle\\overline{X}\n-\\langle\\overline{X},\\overline{Z}\\rangle\\overline{Y}+\\langle J\\overline{Y},\\overline{Z}\\rangle J\\overline{X}\n-\\langle J\\overline{X},\\overline{Z}\\rangle J\\overline{Y}+2\\langle \\overline{X},J\\overline{Y}\\rangle J\\overline{Z},\n\\end{equation}\nwhere $\\overline{X},\\overline{Y},\\overline{Z}$ are tangent to $\\mathbf{P}^{\\frac{n+p}{2}}(\\mathbf{C})$. 
From (\\ref{curvatureP}),(\\ref{defU}),(\\ref{defFu}), we calculate\n\\begin{equation*}\n \\overline{R}(X,Y)\\xi=u(Y)FX-u(X)FY+2\\langle FX,Y\\rangle U,\n\\end{equation*}\n\\begin{equation*}\n \\overline{R}(X,Y)\\xi_a=-2\\langle FX,Y\\rangle\\xi_{a^*},\n\\end{equation*}\n\\begin{equation*}\n \\overline{R}(X,Y)\\xi_{a^*}=2\\langle FX,Y\\rangle\\xi_{a}.\n\\end{equation*}\nTherefore the equation of Codazzi (\\ref{CodazziA}) becomes \\cite{MDMO1}\n\\begin{align}\\label{equationCodazziA}\n (\\nabla_XA)Y-(\\nabla_YA)X = & u(X)FY-u(Y)FX-2\\langle FX,Y\\rangle U \\notag\\\\\n & +\\sum_b\\{s_b(X)A_bY-s_b(Y)A_bX\\}\\notag\\\\\n &+\\sum_b\\{s_{b^*}(X)A_{b^*}Y-s_{b^*}(Y)A_{b^*}X\\}.\n\\end{align}\nThe equation of Ricci (\\ref{eqRicci}) becomes\n\\begin{equation}\\label{eqRicciAAa}\n \\langle R^{\\bot}(X,Y)\\xi,\\xi_{a}\\rangle=\\langle [A,A_{a}]X,Y\\rangle,\n\\end{equation}\n\\begin{equation}\\label{eqRicciAAa*}\n \\langle R^{\\bot}(X,Y)\\xi,\\xi_{a^*}\\rangle=\\langle [A,A_{a^*}]X,Y\\rangle,\n\\end{equation}\n\\begin{equation}\\label{eqRicciAaAb}\n \\langle R^{\\bot}(X,Y)\\xi_a,\\xi_b\\rangle=\\langle [A_a,A_b]X,Y\\rangle,\n\\end{equation}\n\\begin{equation}\\label{eqRicciAaAb*}\n \\langle R^{\\bot}(X,Y)\\xi_a,\\xi_{b^*}\\rangle=-2\\langle FX,Y\\rangle\\delta_{ab}+\\langle [A_a,A_{b^*}]X,Y\\rangle,\n\\end{equation}\n\\begin{equation}\\label{eqRicciAa*Ab*}\n \\langle R^{\\bot}(X,Y)\\xi_{a^*},\\xi_{b^*}\\rangle=\\langle [A_{a^*},A_{b^*}]X,Y\\rangle.\n\\end{equation}\n\n\n\\section{\\bf The Position of the Umbilical Normal Vector in the Normal Bundle}\n\\vskip 0.4 true cm\n\nLet $M^n$ be a CR submanifold of maximal CR dimension of $\\mathbf{P}^{\\frac{n+p}{2}}(\\mathbf{C})$. The normal connection $D$ is said to be {\\it flat}, if the curvature tensor $R^{\\bot}$ of $D$ vanishes. In this section we discuss the position of the umbilical normal vector in the normal bundle for this kind of submanifolds. 
Recall that a normal vector $\\eta$ is said to be {\\it umbilical} if the shape operator with respect to $\\eta$ is given by\n\\begin{equation}\\label{3AelxMn}\n A_{\\eta}=\\lambda id: T_x(M^n)\\to T_x(M^n),\n\\end{equation}\nwhere $\\lambda=\\langle\\eta,\\zeta\\rangle$, $\\zeta$ is the mean curvature vector, and $id:T_x(M^n)\\to T_x(M^n)$ is the identity map. In particular, if $\\zeta$ is umbilical, then the submanifold $M^n$ is called {\\it pseudo-umbilical}. It is obvious that minimal submanifolds must be pseudo-umbilical (see \\cite{BYC2}).\n\nFrom the equations of Ricci (\\ref{eqRicciAAa})-(\\ref{eqRicciAa*Ab*}), we see that a flat normal connection implies that\n\\begin{equation}\\label{3AAaAa0}\n [A,A_a]=0,\\ [A,A_{a^*}]=0,\n\\end{equation}\n\\begin{equation}\\label{3AaAAb0}\n [A_a,A_b]=0,\\ [A_{a^*},A_{b^*}]=0,\n\\end{equation}\n\\begin{equation}\\label{3AaAdab}\n [A_{a},A_{b^*}]=2\\delta_{ab}F,\n\\end{equation}\nwhere $a,b=1,\\cdots,q$.\n\n\\begin{proof}[Proof of Theorem 1.1]\n The result trivially holds when $p=1$. In the following, we assume $p>1$. For the umbilical normal vector $\\eta$, we decompose it as $\\eta=\\eta_1+\\eta_2$, where $\\eta_1\\in span\\{\\xi\\}, \\eta_2\\bot\\xi$. Choose the unit normal vector $\\xi_1$ such that $\\eta_2=|\\eta_2|\\xi_1$; then\n \\begin{equation*}\n \\eta=|\\eta_1|\\xi+|\\eta_2|\\xi_1.\n \\end{equation*}\n Since the umbilicity of $\\eta$ (see (\\ref{3AelxMn})) means that $A_{\\eta}=\\lambda id$ commutes with every shape operator, we deduce that\n \\begin{align*}\n 0 & =[A_{\\eta},A_{1^*}]=[|\\eta_1|A+|\\eta_2|A_1,A_{1^*}]\\notag\\\\\n & =|\\eta_1|[A,A_{1^*}]+|\\eta_2|[A_1,A_{1^*}].\n \\end{align*}\n Substituting (\\ref{3AAaAa0}) and (\\ref{3AaAdab}) into the above formula, we get\n \\begin{equation*}\n 2|\\eta_2|F=0.\n \\end{equation*}\n Since $n>2$ and ${\\rm rank}\\, F=n-1$, we conclude that $|\\eta_2|=0$. Therefore $\\eta=|\\eta|\\xi$.\n\\end{proof}\n\nTo prove Theorem 1.3, we need the following lemmas. 
The first one is an easy linear algebra result which can be verified by direct calculation.\n\n\\begin{lem}\n Let $(V,\\langle,\\rangle)$ be an $n$-dimensional inner product space and $f:V\\to V$ be a linear transformation. Suppose there exist $\\lambda\\in \\mathbf{R}$ and $X\\in V$ such that $f(X)=\\lambda X$. If the linear transformations $f_1,f_2:V\\to V$ both commute with $f$, then we have\n \\begin{equation*}\n f(f_1X)=\\lambda f_1X,\\ f(f_2X)=\\lambda f_2X,\\ f([f_1,f_2]X)=\\lambda [f_1,f_2]X.\n \\end{equation*}\n\\end{lem}\n\n\n\\begin{lem}\n Let $M^3$ be a 3-dimensional CR submanifold of maximal CR dimension of $\\mathbf{P}^{\\frac{3+p}{2}}(\\mathbf{C})$, $p>1$. If the normal connection is flat, then $p=3$.\n\\end{lem}\n\n\\begin{proof}\n Suppose, on the contrary, that $p>3$. Then we may choose an orthonormal frame\n \\begin{equation*}\n \\xi,\\ \\xi_1,\\ \\xi_2,\\cdots,\\xi_q,\\ \\xi_{1^*},\\ \\xi_{2^*},\\cdots,\\xi_{q^*}\n \\end{equation*}\n of $T^{\\bot}(M^3)$. In the following we consider the eigenvalues and eigenvectors of the shape operator $A_1$. We prove first that if there exists an eigenvalue of $A_1$, say $\\alpha$, such that $U$ is not an eigenvector corresponding to $\\alpha$, then the multiplicity of $\\alpha$ is 2. In fact, since the normal connection is flat, from (\\ref{3AaAAb0}) and (\\ref{3AaAdab}), we have\n \\begin{equation*}\n [A_1,A_2]=0,\\ [A_1,A_{2^*}]=0.\n \\end{equation*}\n According to Lemma 3.1, if $X$ is an eigenvector corresponding to $\\alpha$, then\n \\begin{equation*}\n A_1([A_2,A_{2^*}]X)=\\alpha [A_2,A_{2^*}]X.\n \\end{equation*}\n Noting that $[A_2,A_{2^*}]=2F$, the above formula becomes\n \\begin{equation*}\n A_1(FX)=\\alpha FX.\n \\end{equation*}\n It is easy to see that if $X\\not\\in span\\{U\\}$, then $X$ and $FX$ are linearly independent. Hence the above formula implies that the multiplicity of $\\alpha$ is at least 2. 
This combined with Theorem 1.1 shows that the multiplicity of $\\alpha$ is 2.\n\n Next we prove that $A_1$ has two distinct eigenvalues, and $U$ is the eigenvector corresponding to the simple one, while all the holomorphic tangent vectors are eigenvectors corresponding to the other one, whose multiplicity is 2. In fact, Theorem 1.1 guarantees that $A_1$ has at least two distinct eigenvalues, say $\\alpha$ and $\\beta$. From the claim above, we know that $U$ is an eigenvector corresponding to $\\alpha$ or $\\beta$, say $\\beta$ (otherwise ${\\rm dim}\\,M^3\\geqq 4$). Then the eigenvectors of $\\alpha$ are orthogonal to $U$. Also from the claim above, we see that $\\alpha$ has multiplicity 2.\n\n In exactly the same way we can prove that $A_{1^*}$ also has two distinct eigenvalues, and $U$ is an eigenvector corresponding to the simple one, while all the holomorphic tangent vectors are eigenvectors corresponding to the other one, whose multiplicity is 2.\n\n Take a holomorphic tangent vector $X\\not=0$. Assume that\n \\begin{equation*}\n A_1X=\\alpha X,\\ A_{1^*}X=\\alpha^*X.\n \\end{equation*}\n By a direct calculation, we have\n \\begin{equation*}\n [A_1,A_{1^*}]X=0.\n \\end{equation*}\n On the other hand, (\\ref{3AaAdab}) implies that $[A_1,A_{1^*}]X=2FX\\not=0$. This contradiction shows that $p=3$.\n\\end{proof}\n\n\\begin{lem}\n Let $M^3$ be a 3-dimensional CR submanifold of maximal CR dimension of $\\mathbf{P}^{\\frac{3+p}{2}}(\\mathbf{C})$, $p>1$. If the normal connection is flat, then either the distinguished normal vector $\\xi$ is umbilical, or the shape operator $A$ has two distinct eigenvalues. In the latter case, $U$ is an eigenvector corresponding to the simple eigenvalue, while all the holomorphic tangent vectors are eigenvectors corresponding to the eigenvalue with multiplicity 2. In this case, $U$ is also an eigenvector of $A_1$ and $A_{1^*}$.\n\\end{lem}\n\n\\begin{proof}\n From Lemma 3.2, we know that $p=3$. 
Choose an orthonormal frame $\\xi,\\ \\xi_1,\\ \\xi_{1^*}$ of $T^{\\bot}(M^3)$. Since the normal connection is flat, from (\\ref{3AAaAa0}) and (\\ref{3AaAdab}), we have\n \\begin{equation}\\label{flatcom}\n [A,A_1]=0,\\ [A,A_{1^*}]=0,\\ [A_1,A_{1^*}]=2F.\n \\end{equation}\n By the same discussion as in the proof of Lemma 3.2, we know that if $A$ has at least two distinct eigenvalues, then $A$ has two distinct eigenvalues and $U$ is an eigenvector corresponding to the simple eigenvalue, while all the holomorphic tangent vectors are eigenvectors corresponding to the eigenvalue with multiplicity 2. Assume that $AU=\\mu U$. From Lemma 3.1 and (\\ref{flatcom}), we have\n \\begin{equation*}\n A(A_1U)=\\mu A_1U,\\ A(A_{1^*}U)=\\mu A_{1^*}U.\n \\end{equation*}\n Noting that $\\mu$ is the simple eigenvalue of $A$ and $U$ is the corresponding eigenvector, it follows that there exist $\\mu_1,\\mu_{1^*}\\in\\mathbf{R}$ such that $A_1U=\\mu_1U,\\ A_{1^*}U=\\mu_{1^*}U$, which implies that $U$ is also an eigenvector of $A_1$ and $A_{1^*}$.\n\\end{proof}\n\nWith the above three lemmas, we can prove Theorem 1.3.\n\n\\begin{proof}[Proof of Theorem 1.3]\n Lemma 3.2 shows that $p=3$. Now we prove that $\\xi$ is umbilical. Otherwise, from Lemma 3.3, we know that $A$ has two distinct eigenvalues, say $\\lambda,\\mu$. Assume $\\mu$ is the simple one; then\n \\begin{equation}\\label{3AUmZlZ}\n AU=\\mu U,\\ AZ=\\lambda Z,\n \\end{equation}\n where $Z$ is any holomorphic tangent vector of $M^3$.\n\n Let $\\zeta$ be the mean curvature vector; we decompose it as $\\zeta=\\zeta_1+\\zeta_2$, where $\\zeta_1\\in span\\{\\xi\\}, \\zeta_2\\bot\\xi$. 
Choose the unit normal vector $\\xi_1$ such that $\\zeta_2=|\\zeta_2|\\xi_1$; then\n \\begin{align*}\n \\zeta = & |\\zeta_1|\\xi+|\\zeta_2|\\xi_1\\\\\n = & \\frac{1}{3}({\\rm trace} A)\\xi+\\frac{1}{3}({\\rm trace} A_1)\\xi_1+\\frac{1}{3}({\\rm trace} A_{1^*})\\xi_{1^*}.\n \\end{align*}\n This implies that\n \\begin{equation}\\label{3traA10}\n {\\rm trace} A=3|\\zeta_1|,\\ {\\rm trace} A_1=3|\\zeta_2|,\\ {\\rm trace} A_{1^*}=0.\n \\end{equation}\n Combining (\\ref{trAa*}) and (\\ref{3traA10}), we see that\n \\begin{equation}\\label{3s1U3z2}\n s_1(U)=0,\\ s_{1^*}(U)=3|\\zeta_2|.\n \\end{equation}\n Further, it follows from Lemma 3.3, (\\ref{sa*}) and (\\ref{sa}) that\n \\begin{equation}\\label{3A1UUU0}\n A_{1^*}U=\\langle A_{1^*}U,U\\rangle U=-s_1(U)U=0,\n \\end{equation}\n \\begin{equation}\\label{3A1UZ2U}\n A_{1}U=\\langle A_{1}U,U\\rangle U=s_{1^*}(U)U=3|\\zeta_2|U.\n \\end{equation}\n Then for any $X\\in T(M^3)$, $X\\bot U$, we have\n \\begin{equation}\\label{3s1XUX0}\n s_1(X)=-\\langle A_{1^*}U,X\\rangle=0,\\ s_{1^*}(X)=\\langle A_1U,X\\rangle=0.\n \\end{equation}\n\n Note that (\\ref{3A1UUU0}) implies that $0$ is an eigenvalue of $A_{1^*}$. According to Theorem 1.1, there must exist a non-zero eigenvalue of $A_{1^*}$, say $\\alpha$. Then (\\ref{3traA10}) shows that $-\\alpha$ is also an eigenvalue of $A_{1^*}$. 
Assume that $X\\in T(M^3), X\\bot U, |X|=1$, and\n \\begin{equation}\\label{A1*X}\n A_{1^*}X=\\alpha X.\n \\end{equation}\n Write $Y=FX$; then\n \\begin{equation}\\label{A1*Y}\n A_{1^*}Y=-\\alpha Y.\n \\end{equation}\n From (\\ref{Aa}), (\\ref{3s1XUX0}), (\\ref{A1*X}), and (\\ref{A1*Y}), we have\n \\begin{equation}\\label{3A1XYAX}\n A_1X=-\\alpha Y,\\ A_1Y=-\\alpha X.\n \\end{equation}\n By a direct calculation, one can easily get\n \\begin{equation}\\label{A1A1*com}\n [A_1,A_{1^*}]X=-2\\alpha^2 Y.\n \\end{equation}\n On the other hand, it follows from (\\ref{3AaAdab}) that\n \\begin{equation}\\label{A1A1*comX}\n [A_1,A_{1^*}]X=2FX=2Y.\n \\end{equation}\n Comparing (\\ref{A1A1*com}) and (\\ref{A1A1*comX}), we get $\\alpha^2=-1$. This is impossible, since the shape operator $A_{1^*}$ is symmetric and its eigenvalues are all real numbers. This contradiction shows that $\\xi$ is umbilical.\n\n\n\\end{proof}\n\n\n\n\\section{\\bf Non-Existence of a Class of CR Submanifolds of Maximal CR Dimension of $\\mathbf{P}^{\\frac{3+p}{2}}(\\mathbf{C})$ with Flat Normal Connection}\n\\vskip 0.4 true cm\n\nIn this section we prove the non-existence of 3-dimensional pseudo-umbilical CR submanifolds of maximal CR dimension of $\\mathbf{P}^{\\frac{3+p}{2}}(\\mathbf{C})$ with flat normal connection. Suppose, on the contrary, that $M^3$ is such a submanifold. We first study the position of the mean curvature vector $\\zeta$ in the normal bundle.\n\n\\begin{lem}\n Let $M^3$ be a 3-dimensional pseudo-umbilical CR submanifold of maximal CR dimension of $\\mathbf{P}^{\\frac{3+p}{2}}(\\mathbf{C})$, $p>1$. If the normal connection is flat, then the mean curvature vector $\\zeta$ is in the direction of $\\xi$.\n\\end{lem}\n\n\\begin{proof}\n We decompose $\\zeta$ as $\\zeta=\\zeta_1+\\zeta_2$, where $\\zeta_1\\in span\\{\\xi\\},\\ \\zeta_2\\bot\\xi$. We need to prove that $\\zeta_2=0$. Suppose, on the contrary, that $\\zeta_2\\not=0$. From Theorem 1.1 we see that $\\zeta_2$ is not umbilical. 
From Theorem 1.3 we know that $\\zeta_1$ is umbilical. Hence $\\zeta$ is not umbilical. This contradicts our assumption that $M^3$ is pseudo-umbilical. Therefore, $\\zeta_2=0$, i.e., $\\zeta\\in span\\{\\xi\\}$.\n\\end{proof}\n\n\\begin{remark}\n The method we used in the proof of Lemma 4.1 is due to B. Y. Chen, who, in \\cite{BYCGL}, studied the umbilical normal vectors of submanifolds.\n\\end{remark}\n\nFrom Theorem 1.3, $p=3$. So\n\\begin{equation*}\n \\zeta = \\frac{1}{3}({\\rm trace} A)\\xi+\\frac{1}{3}({\\rm trace} A_1)\\xi_1+\\frac{1}{3}({\\rm trace} A_{1^*})\\xi_{1^*}.\n\\end{equation*}\nCombining this with Lemma 4.1, we have\n\\begin{equation}\\label{4traA10}\n {\\rm trace} A=3|\\zeta|,\\ {\\rm trace} A_1=0,\\ {\\rm trace} A_{1^*}=0.\n\\end{equation}\nFurther, it follows from (\\ref{trAa*}) that\n\\begin{equation}\\label{4s1U1U0}\n s_1(U)=0,\\ s_{1^*}(U)=0.\n\\end{equation}\n\n\\begin{lem}\n Let $M^3$ be a 3-dimensional pseudo-umbilical CR submanifold of maximal CR dimension of $\\mathbf{P}^{\\frac{3+p}{2}}(\\mathbf{C})$, $p>1$. If the normal connection is flat, then $A_1U$ is a non-zero holomorphic tangent vector of $M^3$.\n\\end{lem}\n\n\\begin{proof}\n From (\\ref{Aa}) and (\\ref{4s1U1U0}),\n \\begin{equation*}\n A_1U=-FA_{1^*}U.\n \\end{equation*}\n Then the first formula of (\\ref{FU}) implies that $A_1U$ is orthogonal to $U$, which shows that $A_1U$ is a holomorphic tangent vector. In the following, we prove that $A_1U\\not=0$. Suppose, on the contrary, that $A_1U=0$. Combining (\\ref{Aa*}) and (\\ref{4s1U1U0}), we also have\n \\begin{equation*}\n A_{1^*}U=0.\n \\end{equation*}\n Then (\\ref{sa*}) and (\\ref{sa}) give that\n \\begin{equation}\\label{4s10s10}\n s_1=0,\\ s_{1^*}=0.\n \\end{equation}\n From (\\ref{4traA10}) and Theorem 1.1, we see that $A_{1^*}$ has non-zero eigenvalues $\\alpha$ and $-\\alpha$. 
By the same discussion as in the latter part of the proof of Theorem 1.3, one can deduce that $\\alpha^2=-1$, which contradicts the fact that $\\alpha$ is a real number. So $A_1U\\not=0$. This completes the proof.\n\\end{proof}\n\nNow write\n\\begin{equation}\\label{4XA1YFX}\n X=A_1U,\\ Y=FX.\n\\end{equation}\nFrom (\\ref{4s1U1U0}) and (\\ref{Aa*}), it is easy to see that\n\\begin{equation}\\label{Y}\n Y=FX=FA_1U=A_{1^*}U.\n\\end{equation}\nNote that $\\{X,Y,U\\}$ are mutually orthogonal.\n\n\\begin{lem}\n With respect to the frame $\\{X,Y,U\\}$ chosen above, we have\n \\begin{equation*}\n |X|^2=|Y|^2=1,\n \\end{equation*}\n \\begin{equation*}\n s_1(X)=0,\\ s_1(Y)=-1,\\ s_{1^*}(X)=1,\\ s_{1^*}(Y)=0,\n \\end{equation*}\n and the mean curvature $|\\zeta|$ is constant.\n\\end{lem}\n\n\\begin{proof}\n From (\\ref{sa*}), (\\ref{sa}), (\\ref{4XA1YFX}) and (\\ref{Y}), we have\n \\begin{equation}\\label{4s1XYX0}\n s_1(X)=-\\langle A_{1^*}U,X\\rangle=-\\langle Y,X\\rangle=0,\n \\end{equation}\n \\begin{equation}\\label{4s1YXY0}\n s_{1^*}(Y)=\\langle A_{1}U,Y\\rangle=\\langle X,Y\\rangle=0,\n \\end{equation}\n \\begin{equation}\\label{4s1YYY2}\n s_1(Y)=-\\langle A_{1^*}U,Y\\rangle=-|Y|^2,\n \\end{equation}\n \\begin{equation}\\label{4s1XXX2}\n s_{1^*}(X)=\\langle A_{1}U,X\\rangle=|X|^2.\n \\end{equation}\nApplying (\\ref{4s1U1U0}), (\\ref{Y}) and (\\ref{4s1XYX0}) to the equation of Codazzi (\\ref{equationCodazziA}), we get\n\\begin{equation}\\label{4XAU1XY}\n (\\nabla_XA)U-(\\nabla_UA)X=-Y+s_{1^*}(X)Y.\n\\end{equation}\nOn the other hand, we calculate\n\\begin{align}\\label{4XAUUzX}\n & (\\nabla_XA)U-(\\nabla_UA)X \\notag\\\\\n = & \\nabla_X(|\\zeta|U)-A(\\nabla_XU)-\\nabla_U(|\\zeta|X)+A(\\nabla_UX)\\notag\\\\\n = & (X|\\zeta|)U-(U|\\zeta|)X.\n\\end{align}\nIn the above calculation we used the fact that $AZ=|\\zeta|Z$ for any tangent vector $Z$ of $M^3$, which can be deduced from the pseudo-umbilicity of $M^3$ and Lemma 4.1. 
Combining (\\ref{4XAU1XY}) and (\\ref{4XAUUzX}), we get\n\\begin{equation*}\n (X|\\zeta|)U-(U|\\zeta|)X+(1-s_{1^*}(X))Y=0.\n\\end{equation*}\nSo\n\\begin{equation}\\label{4Xz01X1}\n X|\\zeta|=0,\\ U|\\zeta|=0,\\ s_{1^*}(X)=1.\n\\end{equation}\nSimilarly, applying the equation of Codazzi (\\ref{equationCodazziA}) to $(\\nabla_YA)U-(\\nabla_UA)Y$, we get\n\\begin{equation}\\label{4Yz01Y1}\n Y|\\zeta|=0,\\ U|\\zeta|=0,\\ s_{1}(Y)=-1.\n\\end{equation}\nFrom (\\ref{4s1YYY2}), (\\ref{4s1XXX2}), (\\ref{4Xz01X1}), and (\\ref{4Yz01Y1}), we know that\n\\begin{equation*}\n |X|^2=1,\\ |Y|^2=1,\\ |\\zeta|=constant.\n\\end{equation*}\n\\end{proof}\n\n\n\\begin{lem}\n For the holomorphic tangent vector $X,Y$ defined by (\\ref{4XA1YFX}) and (\\ref{Y}), we have\n \\begin{equation*}\n A_1X=U,\\ A_1Y=0,\\ A_{1^*}X=0,\\ A_{1^*}Y=U.\n \\end{equation*}\n \\end{lem}\n\n\\begin{proof}\n From (\\ref{Aa*}), (\\ref{Aa}) and Lemma 4.3, we have\n \\begin{equation}\\label{4A1X1XU}\n A_1X=-FA_{1^*}X+U,\n \\end{equation}\n \\begin{equation}\\label{4A1YA1Y}\n A_1Y=-FA_{1^*}Y,\n \\end{equation}\n \\begin{equation}\\label{4A1XA1X}\n A_{1^*}X=FA_{1}X,\n \\end{equation}\n \\begin{equation}\\label{4A1Y1YU}\n A_{1^*}Y=FA_{1}Y+U.\n \\end{equation}\n From (\\ref{3AaAdab}), we get\n \\begin{equation*}\n [A_1,A_{1^*}]U=0.\n \\end{equation*}\n Substituting (\\ref{4XA1YFX}) and (\\ref{Y}) into the above formula, we have\n \\begin{equation}\\label{4A1YA1X}\n A_1Y=A_{1^*}X.\n \\end{equation}\n From (\\ref{4A1X1XU}), (\\ref{4A1YA1X}) and the skew-symmetry of $F$, we calculate\n \\begin{equation}\\label{4A1X1YY}\n \\langle A_1X,X\\rangle=-\\langle FA_{1^*}X,X\\rangle=\\langle A_{1^*}X,Y\\rangle=\\langle A_1Y,Y\\rangle.\n \\end{equation}\n From (\\ref{4traA10}), (\\ref{4s1U1U0}) and (\\ref{sa*}), we calculate\n \\begin{align}\\label{40tr1YY1}\n 0 = & {\\rm trace} A_1=\\langle A_1X,X\\rangle+\\langle A_1Y,Y\\rangle+\\langle A_1U,U\\rangle\\notag\\\\\n = & \\langle A_1X,X\\rangle+\\langle 
A_1Y,Y\\rangle+s_{1^*}(U)\\notag\\\\\n = & \\langle A_1X,X\\rangle+\\langle A_1Y,Y\\rangle.\n \\end{align}\n Combining (\\ref{4A1X1YY}) and (\\ref{40tr1YY1}), we know that\n \\begin{equation}\\label{4A1XYY0}\n \\langle A_1X,X\\rangle=0,\\ \\langle A_1Y,Y\\rangle=0.\n \\end{equation}\n From (\\ref{4A1XA1X}) and the skew-symmetry of $F$, we calculate\n \\begin{equation}\\label{4A1X1XY}\n \\langle A_{1^*}X,X\\rangle=\\langle FA_1X,X\\rangle=-\\langle A_1X,Y\\rangle.\n \\end{equation}\n On the other hand, from (\\ref{4A1YA1X}),\n \\begin{equation}\\label{4A1X1XY2}\n \\langle A_{1^*}X,X\\rangle=\\langle A_1Y,X\\rangle=\\langle A_1X,Y\\rangle.\n \\end{equation}\n Combining (\\ref{4A1X1XY}) and (\\ref{4A1X1XY2}), we have\n \\begin{equation}\\label{4A1XXY0}\n \\langle A_{1^*}X,X\\rangle=0,\\ \\langle A_{1}X,Y\\rangle=0.\n \\end{equation}\n From (\\ref{4traA10}), (\\ref{4s1U1U0}), (\\ref{4A1XXY0}) and (\\ref{sa}), we calculate\n \\begin{align}\\label{40tr1YY}\n 0 = & {\\rm trace} A_{1^*}=\\langle A_{1^*}X,X\\rangle+\\langle A_{1^*}Y,Y\\rangle+\\langle A_{1^*}U,U\\rangle\\notag\\\\\n = & \\langle A_{1^*}X,X\\rangle+\\langle A_{1^*}Y,Y\\rangle-s_{1}(U)\\notag\\\\\n = & \\langle A_{1^*}Y,Y\\rangle.\n \\end{align}\n From (\\ref{4A1YA1X}) and (\\ref{4A1XYY0}), we have\n \\begin{equation}\\label{4A1YYY0}\n \\langle A_{1^*}Y,X\\rangle=\\langle A_{1^*}X,Y\\rangle=\\langle A_1Y,Y\\rangle=0.\n \\end{equation}\n Noting that $\\{X,Y,U\\}$ are orthonormal, by using (\\ref{4A1XYY0}), (\\ref{4A1XXY0}) and Lemma 4.3, we get\n \\begin{align*}\n A_1X = & \\langle A_{1}X,X\\rangle X+\\langle A_{1}X,Y\\rangle Y+\\langle A_{1}X,U\\rangle U\\notag\\\\\n = & s_{1^*}(X)U=U.\n \\end{align*}\n Similarly, by using (\\ref{sa*}), (\\ref{sa}), (\\ref{4A1XYY0}), (\\ref{4A1XXY0}), (\\ref{40tr1YY}), (\\ref{4A1YYY0}) and Lemma 4.3, we also have\n \\begin{equation*}\n A_1Y=0,\\ A_{1^*}X=0,\\ A_{1^*}Y=U.\n \\end{equation*}\n\\end{proof}\n\nNow we can prove Theorem 1.4.\n\n\\begin{proof}[Proof of Theorem 
1.4]\n Since the co-dimension $p>1$, it follows from Theorem 1.3 that $p=3$. Let $M^3$ be a pseudo-umbilical CR submanifold of maximal CR dimension of $\\mathbf{P}^{\\frac{3+p}{2}}(\\mathbf{C})$ whose normal connection is flat. Let $X,Y$ be the holomorphic tangent vectors defined by (\\ref{4XA1YFX}) and (\\ref{Y}). From (\\ref{3AaAdab}), we have\n \\begin{equation}\\label{4A1AX2Y}\n [A_1,A_{1^*}]X=2Y.\n \\end{equation}\n On the other hand, it follows from Lemma 4.4 and (\\ref{Y}) that\n \\begin{equation}\\label{4A1A1UY}\n [A_1,A_{1^*}]X=A_1A_{1^*}X-A_{1^*}A_1X=-A_{1^*}U=-Y.\n \\end{equation}\n This is a contradiction, which proves the non-existence of such submanifolds. This completes the proof.\n\\end{proof}\n\n\n\n\n\n\\vskip 0.5 true cm\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{#1} \\setcounter{equation}{0}}\n\\def$\\bullet${$\\bullet$}\n\\def{\\bar Q}_1{{\\bar Q}_1}\n\\def{\\bar Q}_p{{\\bar Q}_p}\n\n\\def\\quad{\\quad}\n\n\\defB_\\circ{B_\\circ}\n\n\n\\let\\a=\\alpha \\let\\bigskip=\\beta \\let\\g=\\gamma \\let\\partial=\\delta \\let\\e=\\epsilon\n\\let\\c=\\chi \\let\\th=\\theta \\let\\k=\\kappa\n\\let\\l=\\lambda \\let\\m=\\mu \\let\\n=\\nu \\let\\x=\\xi \\let\\rightarrow=\\rho\n\\let\\s=\\sigma \\let\\tilde=\\tau\n\\let\\vp=\\varphi \\let\\vep=\\varepsilon\n\\let\\w=\\omega \\let\\G=\\Gamma \\let\\D=\\Delta \\let\\Th=\\Theta\n \\let\\P=\\Pi \\let\\S=\\Sigma\n\n\n\n\\def{1\\over 2}{{1\\over 2}}\n\\def\\tilde{\\tilde}\n\\def\\rightarrow{\\rightarrow}\n\\def\\nonumber\\\\{\\nonumber\\\\}\n\\let\\bm=\\bibitem\n\\def{\\tilde K}{{\\tilde K}}\n\\def\\bigskip{\\bigskip}\n\n\\let\\partial=\\partial\n\n\n\\begin{flushright}\n\\end{flushright}\n\\vspace{20mm}\n\\begin{center}\n{\\LARGE Comments on black holes I:}\n\\\\\n\\vspace{5mm}\n{\\LARGE The possibility of complementarity}\n\\\\\n\\vspace{18mm}\n{\\bf Samir D. 
Mathur ~and~ David Turton }\\\\\n\n\\vspace{8mm}\nDepartment of Physics,\\\\ The Ohio State University,\\\\ Columbus,\nOH 43210, USA\\\\ \\vskip .2 in mathur.16@osu.edu\\\\turton.7@osu.edu\n\\vspace{4mm}\n\\end{center}\n\\vspace{10mm}\n\\thispagestyle{empty}\n\\begin{abstract}\n\\bigskip\nWe comment on a recent paper of Almheiri, Marolf, Polchinski and Sully who argue against black hole complementarity based on the claim that an infalling observer `burns' as he attempts to cross the horizon. We show that measurements made by an infalling observer outside the horizon are statistically identical for the cases of vacuum at the horizon and radiation emerging from a stretched horizon. This forces us to follow the dynamics all the way to the horizon, where we need to know the details of Planck-scale physics. We note that in string theory the fuzzball structure of microstates does not give any place to `continue through' this Planck regime. AMPS argue that interactions near the horizon preclude traditional complementarity. But the conjecture of `fuzzball complementarity' works in the opposite way: the infalling quantum is absorbed by the fuzzball surface, and it is the resulting dynamics that is conjectured to admit a complementary description. \n\n\n\n\\bigskip\n\n\n\n\n\n\n\\end{abstract}\n\\vskip 1.0 true in\n\n\\newpage\n\n\\numberwithin{equation}{section}\n\\setcounter{tocdepth}{1}\n\\tableofcontents\n\n\\baselineskip=16pt\n\\parskip=3pt\n\n\\section{Introduction}\n \n The quantum theory of black holes has proven to be rich territory for the exploration of the most fundamental laws of physics. The discoveries of black hole entropy \\cite{bek}, and Hawking radiation \\cite{hawking} provided deep links between gravity and thermodynamics, while raising a serious problem in the form of the information paradox. One suggestion that arose in this context was the notion of black hole complementarity \\cite{complementarity}. 
String theory provides a microscopic explanation for the entropy of black holes \\cite{sv}, and the fuzzball structure of microstates provides a solution to the information paradox \\cite{lm4,fuzzballs,fuzzball3,fuzzball4,cern,plumpre,plumberg,otherfcrefs}.\n \n Recently there have appeared several papers discussing the relations between the information paradox, entanglement theorems, complementarity and other issues involving the quantum theories of black holes \\cite{amps,ampsfollowups}\\footnote{See also the earlier work of \\cite{Braunstein:2009my}.}. Since there are several interrelated issues in the area of black holes, we have split our discussion into a set of papers, each addressing a different question. In this article we comment on some of the arguments used in the paper of Almheiri, Marolf, Polchinski and Sully (AMPS) \\cite{amps} and argue that they do not address the conjecture of `fuzzball complementarity' developed in \\cite{plumpre,plumberg,otherfcrefs}.\n \n We note that the fuzzball program provides a consistent picture of all issues in the quantum dynamics of black holes (see \\cite{reviews} for reviews). We will keep this fact at the back of our mind, since in many cases the fuzzball description provides us an explicit model to judge the validity of abstract arguments. \n \n We begin with some definitions and basic facts about black holes and the information paradox. We then make two observations:\n \n\n \n (a) It is often assumed that if an infalling observer `hits something' at the horizon, then there cannot be a `complementary' description where he goes through. While traditional complementarity may have this feature, the kind of complementarity suggested by fuzzballs is different. We use a toy example provided by AdS\/CFT duality to observe that in one description an infalling quantum `breaks up', while in another description it continues its trajectory unscathed. 
We note that the case of the black hole is somewhat different from the AdS\/CFT case, and explain how complementarity can arise for hard-impact processes involving quanta with energy $E\\gg kT$ falling freely into the black hole\\footnote{Here $E$ refers to the conserved Killing energy of the infalling quantum, and $T$ is the temperature of the black hole as measured from infinity.}. \n \n\n \n (b) One might think that an observer falling into the traditional black hole sees nothing as he falls up to the horizon, but an observer falling towards a body radiating `real quanta' from a stretched horizon would get `burnt' by the highly energetic photons encountered close to this horizon. We show that observations of Hawking quanta made outside the horizon actually yield similar results in both cases. Switching off a detector before crossing the horizon of a traditional black hole creates excitations from vacuum fluctuations, and these excitations have the same spectrum as excitations created by `real quanta' from a stretched horizon.\n \n\n \n We then address the argument made in AMPS \\cite{amps}. In brief outline, the AMPS argument goes as follows: \n\\begin{enumerate}[(i)]\n\t\\item If Hawking evaporation is unitary, then the state near the horizon is not the vacuum in an infalling observer's frame, but involves high-energy excitations.\n\t\\vspace{-3mm}\n\t\\item If there are high-energy excitations near the horizon, then an infalling observer will measure physical high energy quanta emerging from the black hole, and get burnt.\n\t\\vspace{-3mm}\n\t\\item If the observer gets burnt, then we cannot have any complementary description where he falls through without noticing anything at the horizon.\n\\end{enumerate}\n \n \n From points (a) and (b) above, we find that the AMPS gedanken experiment does not lead to the conclusions they suggest. If one wishes to avoid Planck-scale physics, then one should restrict to measurements made outside the stretched horizon. 
For such measurements\n point (b) shows that an infalling observer will see the traditional black hole and a radiating stretched horizon as statistically similar systems.\n The underlying reason for this equivalence is that there is too little time for him to detect the Hawking quanta before he reaches the horizon. More importantly, point (a) shows that even if the infalling observer were to hit the stretched horizon violently, this fact would not by itself invalidate the possibility of complementarity; in fact it is this very interaction that is expected to admit a complementary description.\n \n In the Discussion (Section \\ref{secsix}) we summarize the essential physics involved in the conjecture of fuzzball complementarity to show precisely why it is not addressed by the AMPS argument. \n \n The reader who is already familiar with fuzzballs and the conjecture of fuzzball complementarity may skip directly to Section \\ref{secfour}.\n\n \n \n\\section{The information paradox and the fuzzball proposal}\\label{basics}\n \n In this section we review the resolution of the information paradox through the fuzzball construction in string theory. Though the later arguments will be more abstract, the steps below will help us decide the validity of these arguments.\n \n \n \\bigskip\n \n\\noindent{ {\\bf (a) The traditional black hole}}\n\n\n The information paradox arises from the way Hawking radiation is emitted from the {\\it traditional black hole}. We define the traditional black hole as follows. There is a horizon, and a neighbourhood of the horizon with the following property. One can choose good slices in this neighbourhood, and in these good coordinates physics is `normal'. Here `normal' physics means exactly what we mean by normal physics in the lab: evolution of long wavelength modes ($\\lambda\\gg l_p$) is given by local quantum field theory on curved space, with corrections controlled by a small parameter $\\epsilon$. 
These corrections can come from any quantum gravity effect, local or nonlocal, and all we require is that $\\epsilon\\rightarrow 0$ as $M\\rightarrow \\infty$, where $M$ is the mass of the black hole. \n\n\\bigskip\n\n\n\\noindent{ {\\bf (b) The information paradox}}\n\n The traditional black hole arose from a study of gravitational collapse that leads to the Schwarzschild metric\n\\begin{equation}\nds^2=-(1-{2M\\over r})dt^{2}+{dr^2\\over 1-{2M\\over r}}+r^{2}{d\\Omega_2^{2}}\n\\label{one}\n\\end{equation}\nIf we use semiclassical gravity to follow the evolution of quantum modes during the collapse, we get the traditional black hole. We have a vacuum region around the horizon, which indeed gives `lab' physics in a good slicing (i.e., in Kruskal coordinates). Evolution of vacuum modes at this horizon leads to entangled pairs being created, with one member of the pair staying in the black hole and the other escaping to infinity as Hawking radiation. The entangled pair can be modeled for simplicity by \\cite{cern}\\footnote{Further analysis of such `bit models' can be found in \\cite{plumpre,plumberg,bits,giddings}.}\n\\begin{equation}\n|\\psi\\rangle_{pair}={1\\over \\sqrt{2}}\\left ( |0\\rangle_{in}|0\\rangle_{out}+|1\\rangle_{in}|1\\rangle_{out}\\right )\n\\label{two}\n\\end{equation}\nThe entanglement between the inside and outside grows by $\\ln 2$ with each emission. Near the endpoint of evaporation this would leave just two possibilities: information loss or a remnant \\cite{hawking,cern}. Both of these look unsatisfactory; we would like a pure state of Hawking radiation carrying all the information of the black hole. \n\n\\bigskip\n\n\\noindent{ {\\bf (c) The theorem controlling small corrections}}\n\n The problem would be resolved if gravitational collapse led to a state other than the traditional black hole. But the traditional black hole solution appeared to admit no deformations, leading to the phrase `black holes have no hair'. 
Exactly the same problem holds for black holes in AdS. Thus AdS\/CFT duality cannot by itself help to resolve the problem (for a detailed discussion of this issue, see \\cite{cern,conflicts}).\n\nThis situation led many string theorists to the following belief. Hawking computed the pair creation at leading order, but there can always be small quantum gravity corrections to the wavefunction (\\ref{two})\n\\begin{equation}\n|\\psi\\rangle_{pair}={1\\over \\sqrt{2}}\\left ( |0\\rangle_{in}|0\\rangle_{out}+|1\\rangle_{in}|1\\rangle_{out}\\right )+\\epsilon {1\\over \\sqrt{2}}\\left ( |0\\rangle_{in}|0\\rangle_{out}-|1\\rangle_{in}|1\\rangle_{out}\\right )\n\\label{twenty}\n\\end{equation}\nwhere we have added a small amount of an orthogonal state for the pair. The correction $\\epsilon$ for each pair must be small since the horizon geometry is smooth, but the number of emitted quanta is large ($\\sim (M\/m_p)^2$), and the net effect of the small corrections may accumulate in such a way that the overall state of the radiation would not be entangled with the black hole. \n\n\n\n\nBut in \\cite{cern} it was shown that this hope is false; the change in entanglement $\\delta S_{ent}$, compared to the entanglement $S_{ent}$ of the leading-order Hawking process, is bounded by\n\\begin{equation}\n{\\delta S_{ent}\\over S_{ent}}<2\\epsilon \\,.\n\\label{three}\n\\end{equation}\nThis inequality is the essential reason why the Hawking argument has proved so robust over the years -- no small corrections can save the situation. We will make use of (\\ref{three}) many times; many arguments in the other papers we discuss are also based on this inequality. 
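The single-pair version of this statement is easy to check numerically. The following sketch is our own illustration, not part of the original argument of \cite{cern}; the value $\epsilon=0.01$ and the helper-function name are assumptions. It builds the corrected pair state of (\ref{twenty}) and verifies that the orthogonal admixture changes the entanglement entropy of one pair only at order $\epsilon^2$:

```python
import numpy as np

def entanglement_entropy(state):
    """von Neumann entropy (in nats) of the 'in' qubit of a two-qubit pure state.

    `state` is a length-4 vector in the |00>,|01>,|10>,|11> basis, ordered as
    (in, out); it is normalized internally, so unnormalized input is fine.
    """
    psi = np.asarray(state, dtype=float)
    psi = psi / np.linalg.norm(psi)
    m = psi.reshape(2, 2)               # rows: 'in' qubit, columns: 'out' qubit
    rho_in = m @ m.T                    # reduced density matrix of the 'in' qubit
    p = np.linalg.eigvalsh(rho_in)
    p = p[p > 1e-15]                    # drop numerically-zero eigenvalues
    return float(-np.sum(p * np.log(p)))

eps = 0.01                              # assumed small correction parameter
pair = np.array([1.0, 0.0, 0.0, 1.0])   # leading-order Hawking pair, ~ |00> + |11>
corr = np.array([1.0, 0.0, 0.0, -1.0])  # orthogonal admixture, ~ |00> - |11>

S0 = entanglement_entropy(pair)               # ln 2 for the leading-order pair
S = entanglement_entropy(pair + eps * corr)   # entropy of the corrected pair
print(S0, S, (S0 - S) / S0)                   # relative change is O(eps^2)
```

For $\epsilon=0.01$ the relative change in the per-pair entanglement is of order $10^{-4}$: an order-$\epsilon$ deformation of each pair barely disturbs its entanglement. The theorem quoted above is the much stronger cumulative statement (\ref{three}), which bounds the total change over the $\sim (M/m_p)^2$ emitted pairs.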
\n\n\\bigskip\n\n\n\n\\noindent{ {\\bf (d) The fuzzball structure of microstates}}\n\n\n\n In \\cite{emission} it was found that a bound state in string theory {\\it grows} in size with the number of branes in the bound state and with the coupling, so that its wavefunctional is always spread over a radius of order the Schwarzschild radius. This growth in size is a very stringy effect; it arises from the phenomenon of `fractionation' \\cite{dasmathur}, which uses the extended nature of fundamental objects in the theory. Such horizon-sized wavefunctionals are termed `fuzzballs'. \n\n\nThe size of fuzzball states is estimated by using the entropy of brane bound states, together with the physics of fractionation. Thus this size estimate involves {\\it all} the states of the black hole. To study the properties of fuzzballs further, it is useful to look at states where we place `many quanta in the same mode'. This is analogous to black body radiation, where placing a large number $N$ of quanta in the same harmonic gives a laser beam, with quantum fluctuations suppressed as $\\sim {1\\over N}$.\\footnote{This study of low fluctuation states has led some to be confused about the nature of fuzzballs. They ask: are fuzzballs just solutions to supergravity or do they involve stringy degrees of freedom? As can be seen from the above discussion, there is no fundamental classical\\/quantum divide between states; all we can do is look at states with small or large fluctuations. In particular the non-BPS states studied in \\cite{ppwave} using the pp-wave technique were given in terms of strings placed in a fuzzball geometry. The correct question is not `how messy is the fuzzball?'; the only relevant question is `do we get a traditional black hole (with `lab physics' around a horizon) or do we not'. 
The only feature common to all fuzzballs is that we never form a traditional horizon.} One finds that the fuzzballs generate a spacetime that resembles the traditional black hole far away from the horizon, but which ends\\footnote{The word `end' should be understood as follows. In all known examples, individual black hole microstates are described by solutions of string theory involving smooth geometry far from the black hole, no horizon, and thus no interior (where `interior' refers to the space-time inside the horizon of the corresponding classical black hole solution). For generic states, the structure at the scale of the would-be horizon may be expected to have Planck-scale degrees of freedom (see also Footnote 4). In general, since there is no interior, we say that space-time ends outside the would-be horizon.} in a set of string theory sources before reaching the horizon \\cite{lm4,fuzzballs,fuzzball3}. This is pictured in Fig.\\;\\ref{fdiss}.\\footnote{For the two-charge BPS black hole, all states have been shown to be fuzzballs. For other black holes, some fraction of the states have been constructed, and in each case have been found to be fuzzballs.}\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[scale=.4]{fdiss.eps}\n\\caption{(a) The traditional black hole; small corrections at the horizon {\\it cannot} get information out in the Hawking radiation. (b) The fuzzball picture of black hole microstates; spacetime ends in string theory sources just before the horizon is reached. }\n\\label{fdiss}\n\\end{center}\n\\vspace{-2mm}\n\\end{figure}\n\n\\bigskip\n\n\n\n\\noindent{\\bf (e) Resolution of the paradox}\n\nGiven the existence of fuzzballs, the information paradox is resolved as follows. Fuzzballs do not radiate by pair creation from an `information-free horizon'; instead the radiation emerges from the surface of the fuzzball and carries information just like any normal body. 
This radiation has been explicitly worked out for simple fuzzballs; the rate of radiation agrees exactly with the Hawking emission rate expected for those fuzzballs, but the details of the fuzzball state are seen to be imprinted in the spectrum of emitted quanta \cite{radiation}. \n\nIf we start with a collapsing shell, then its wavefunction spreads over the enormous phase space of fuzzball states \cite{tunnel}, and then these fuzzball states radiate like any other warm body. The time for this spread can be estimated to be much smaller than the Hawking evaporation time \cite{rate}\n\begin{equation}\n t_{fuzzball}\ll t_{hawking}\n \label{four}\n \end{equation}\nThis solves the information paradox. \n\n\n\n\section{Traditional complementarity vs Fuzzball complementarity}\label{seccomp}\n\nIn this section we will explain what we mean by having a `complementary description'. We start by giving a toy example: the case of AdS\/CFT duality \cite{maldacena}. This toy model is new. We briefly recall the traditional notion of complementarity, and then turn to how complementarity is conjectured to arise in the fuzzball description of microstates. This `fuzzball complementarity' has things in common with the toy example of AdS\/CFT duality, but also differs from it in a crucial way.\n\n\subsection{Toy example of complementarity: AdS\/CFT duality}\label{ads}\n\nWe start with an example that illustrates what we mean by having a complementary description. In this example an infalling quantum will encounter some degrees of freedom and appear to `go splat'; i.e. get `destroyed'. Yet there will be an alternative description where it continues unscathed. 
When a description of the latter kind exists, we will say that we have a `complementary' description of the degrees of freedom in the former description.\n\n\n\n\begin{figure}[t!]\n\includegraphics[scale=.85]{fz2p.eps}\n\caption{AdS\/CFT duality, traditional complementarity and fuzzball complementarity.}\n\label{fz2p} \n\end{figure}\n\n\nConsider IIB string theory compactified on $S^1\times T^4$. Let $y$ be the coordinate along $S^1$ and $z_1, \dots, z_4$ be the coordinates on $T^4$. \nWe consider a bound state of $n_1$ D1 branes wrapped on $S^1$ and $n_5$ D5 branes wrapped on $S^1\times T^4$. This bound state is depicted in Fig.\;\ref{fz2p}(a), where the direction along the branes is the $S^1$. \n\n\nWe are working in the context of a D-brane bound state in flat space, where in one description we have a CFT coupled to flat space, and in the other description we have a geometry with flat asymptotics and an AdS throat. The degrees of freedom deep inside the AdS throat (on the gravity side) will not play a role in the following.\n\nTo be more specific, we take the AdS radius $R_{AdS}$ to be macroscopically large. On the gravity side, we consider a throat which is very long in units of $R_{AdS}$ (measured by proper distance along a radial geodesic). We fix a CFT location $r=R_{CFT}$ in the usual way. We then consider the trajectory of an infalling quantum along a radial geodesic from, say, one AdS radius of proper distance above $r=R_{CFT}$ to one AdS radius of proper distance below this location (on the gravity side).\n\nIn the description involving a CFT coupled to flat space, the transition from an infalling graviton in flat space to CFT degrees of freedom is described by the corresponding CFT operator which describes the absorption (see e.g.~\cite{Avery:2009tu}).\n\n\nA graviton with both indices on the $T^4$ is a scalar in the remaining dimensions. 
Consider in particular the graviton $h_{12}$, arriving at the brane bound state as shown in Fig.\;\ref{fz2p}(a).\n\n\nIn the CFT description, on hitting the brane bound state, the energy of the graviton gets converted to vibrations of the branes (open strings);\footnote{The actual evolution on the branes is more complicated when we consider interactions in the CFT, but this simple picture illustrates the point we wish to make. Note that we are considering the gravity description at weak coupling, and so the CFT description is at strong coupling. But the important fact is that there are {\it two} descriptions at the same coupling; one using strongly interacting CFT degrees of freedom, and one using the spin 2 graviton and higher closed string modes. In the former description the incoming $h_{12}$ appears to break up into pieces, while in the latter it remains intact.} a vibration polarized in the direction $X^1$ moves up along the $S^1$ and a vibration polarized in the direction $X^2$ moves down the $S^1$ \cite{comparing,interactions,malstrom}.\n\nOne may say that the graviton has `gone splat' on hitting the branes, to such an extent that it has split into two parts, $X^1$ and $X^2$. These two products obtained after impact certainly do not look like the single graviton $h_{12}$ that was arriving towards the brane bound state\footnote{At strong coupling, the graviton is absorbed into degrees of freedom of the strongly coupled CFT, where we cannot make precise statements. Nevertheless, the graviton may still be described as having `gone splat' in the sense that, in the CFT description, it is no longer a graviton and has been converted into strongly coupled CFT degrees of freedom.}. But as we well know, there is an alternative description of this physics where we replace the brane bound state by an AdS region. In the latter description, the graviton $h_{12}$ falls smoothly into the AdS region, remaining as a single entity $h_{12}$ (Fig.\;\ref{fz2p}(b)). 
We can call this latter description a `complementary' description of the interaction with the D1D5 branes. Now we can ask our question: when the incoming graviton broke up on the D1D5 brane bound state, did it go `splat' or not?\n\nTo better understand how to interpret this situation, we look at a more detailed example where we start with {\\it two} gravitons, $h_{12}$ and $h_{34}$, separated by a distance $D$. We can think of this pair of gravitons as being an `object'; if the separation of the gravitons is increased or decreased, we can say that the object has `been damaged' and `feels pain'. \n\nAt zero coupling in the CFT, the evolution of these gravitons proceeds as follows \\cite{lm4}. First $h_{12}$ hits the D1D5 bound state, and changes to excitations $X^1, X^2$ which travel at the speed of light in opposite directions along $y$. At a later time $h_{34}$ hits the bound state and changes to vibrations $X^3, X^4$, again separating at the speed of light. But the separation $D$ between the initial gravitons can be recovered from the open string excitations. Let $y_i$ be the location along the $S^1$ of the excitation $X^i$, for $i=1,2,3,4$. We have\n\\begin{equation}\ny_1=t, ~~y_2=-t, ~~y_3=t-D, ~~y_4=-t+D\n\\end{equation}\nso the value of $D$ is encoded in the vibrations $X^i$ as\n\\begin{equation}\nD={1\\over 2}[(y_1-y_2)-(y_3-y_4)]\n\\end{equation}\nIn the dual gravity description, the two gravitons fall smoothly into the AdS, maintaining their separation $D$ and thus showing no indication of `damage' or `pain'. But given that the CFT description is a faithful copy of the gravity description, and that we can recover the same value $D$ from the CFT, it looks correct to say that there is no damage or pain felt in the brane description either.\\footnote{Again, to make the toy example more accurate, one should consider the CFT at strong coupling. 
The basic result is unchanged: in the CFT description the incoming graviton is absorbed into degrees of freedom of the strongly coupled CFT, which in no way resemble the incoming gravitons, yet which somehow encode the value of $D$.}\n\nBy contrast, when we throw an object onto a normal concrete wall, we do not expect to find a complementary description. Let us analyze what was special about the D1D5 brane case which did allow for complementarity.\n\n\nIn the D1D5 example, the Hilbert space of the incoming gravitons mapped faithfully into the Hilbert space of vibrations of the branes. That is, if we write the eigenstates of the incoming graviton as $|\psi_E\rangle$ and the eigenstates of the D1D5 system as $|\tilde\psi_E\rangle$, then we find\n\begin{equation}\n\int\, dE \, C(E)\, |\psi_E\rangle ~\rightarrow~ \int \, dE \, C(E)\, |\tilde \psi_E\rangle\n\label{eone}\n\end{equation}\n The {\it nature} of the excitations changed completely - they changed from being gravitons to being vibrations of branes - but this is {\it not} important. What is important is that the amplitude for a given energy remained the same (or approximately the same). An important input for getting a relation like (\ref{eone}) is that the D1D5 bound state had a very closely spaced set of energy levels. This high density of levels leads to a `fermi-golden-rule' absorption of the graviton, and in such an absorption each incoming energy level $E_k$ transfers its amplitude to energy levels $\tilde E_k$ that are very close to $E_k$. (In \cite{interactions} the absorption of the graviton onto the brane bound state was computed by such a fermi-golden-rule process.) \n \n\nWhat {\it does} cause `damage' or `pain' is the situation where the levels available in the absorbing system are not sufficiently continuous. 
In this situation we will find in general \n\\begin{equation}\n\\int \\, dE \\, C(E)\\, |\\psi_E\\rangle ~\\rightarrow~ \\int \\, dE \\, C'(E) \\, |\\tilde \\psi_E\\rangle, ~~~C(E)\\ne C'(E)\n\\label{eoneq}\n\\end{equation}\nIn particular, a concrete wall will not have the same energy levels as the object hitting it, and so the incoming object will not be mapped faithfully into excitations of the concrete wall. In this situation we do not expect a complementary description of the impact. \n\nTo summarize, we cannot just say: `If we go `splat' on hitting some degrees of freedom, then we cannot have complementarity'. The impact transfers excitation energy to the degrees of freedom that are encountered. To know if we can have a complementary description we have to ask if the Hilbert space of the infalling object maps faithfully into a subspace of the Hilbert space of the encountered degrees of freedom. \n\n\\subsection{Traditional complementarity}\\label{trad}\n\n\nIn the early works on black hole complementarity \\cite{complementarity}, the physics that was proposed is depicted in Fig.\\;\\ref{fz2p}(c),(d). It was assumed that we can place a `stretched horizon' just outside $r=2M$, and that incoming quanta could be taken to interact with degrees of freedom on this stretched horizon. In the complementary description, we have just the smooth infall through the horizon.\n\nThe problem with this proposal is discovered when we ask for the physical origin of the degrees of freedom on the stretched horizon. It was argued that since the Schwarzschild coordinates break down at $r=2M$, there will be violent fluctuations of the gravitational degrees of freedom as we approach $r=2M$. It was further argued that these violent fluctuations are indicative of the fact that physics outside the horizon is self-consistent, and the stretched horizon provides the natural boundary beyond which we need not look.\n\nSuch an argument is, however, unsatisfactory. 
The breakdown of Schwarzschild coordinates means that we should use better coordinates, not that we are entitled to assume new physics. But there is an even more serious difficulty with this proposal, which we can see by returning to our basic question: how does the information paradox get resolved? There is a `smooth slicing' of the geometry where we see the creation of entangled pairs (\ref{two}). The defenders of traditional complementarity argued that the inner and outer parts of the horizon should not be considered in the same Hilbert space, since an observer who falls in has strong limitations on how he can communicate with the outside; thus the state (\ref{two}) makes no sense. But no mechanism was proposed to implement such a drastic change to normal physics. The skeptics of complementarity simply noted that there {\it is} a good slicing of the geometry which we should use to do physics at the horizon, and with this slicing there appears to be no reason not to have a single Hilbert space that includes both the inner and outer parts of the horizon. \n\nFor these reasons, the traditional picture of complementarity remained an unresolved issue. It is important to note the difference between the traditional black hole case and the example of AdS\/CFT that we presented in Section \ref{ads}. In the AdS\/CFT example of Fig.\;\ref{fz2p}(a),(b), the boundary where we get a complementary description is not a horizon, and there is no particle creation there. Thus we do not have the information problem. But in the case of a black hole there is no way to stop the creation of entangled pairs in any picture where a smooth horizon is assumed, and then we cannot escape the information paradox. As we will see now, the way complementarity can arise with the fuzzball picture in string theory is somewhat different, and needs us to recognize that real degrees of freedom appear at the location of the horizon. 
\n\n\subsection{The proposal of fuzzball complementarity}\n\nWith the explicit construction of black hole microstates in string theory (fuzzballs) we find that things work out differently from the traditional picture of complementarity. The general idea of `fuzzball complementarity' is developed in \cite{plumpre,plumberg,otherfcrefs}. The notion of making spacetime by entanglement \cite{raamsdonk,israel,eternal} is very useful in this approach. Here we just give an outline of how things work:\n\n\parskip=10pt\n\n(a) Complementarity does {\it not} arise because of a choice of coordinates (Schwarzschild vs Kruskal). Instead, the construction of microstates is fully covariant.\n\n(b) In the traditional black hole we have {\it vacuum} around the horizon. But in string theory, spacetime has a `boundary' where it ends in a set of string theory sources just outside $r=2M$, before the horizon is reached. The details of these sources encode the choice of microstate. \n\n(c) Hawking radiation arises as quanta radiated from the details of microstate structure near the boundary. For simple microstates this radiation has been explicitly computed, and it arises from `ergoregion emission' \cite{radiation} near the boundary. The details of the ergoregion structure depend on the choice of microstate. \n\n\n(d) Since we have `real' degrees of freedom at the horizon, the $E\sim kT$ quanta radiated from the microstate are able to carry out the information of the microstate. We {\it cannot} have a complementary picture where we replace the physics of such quanta by the vacuum physics seen at the horizon of the traditional black hole. In this way our complementarity differs from traditional complementarity.\nWhat we have to do is make a distinction between $E\sim kT$ quanta (relevant for the information problem) and $E\gg kT$ quanta (relevant for the `infall problem' of heavy observers). 
It was conjectured in~\\cite{plumberg} that the complementary description should describe measurements in the frame of a lab (composed of $E\\gg kT$ quanta) falling freely from infinity to the surface of the fuzzball. We can describe such a process as a `hard-impact' process.\n\n(e) Let us restate the previous point another way. In the fuzzball scenario, the exact state near the horizon is not the vacuum state of an infalling observer, or anything close to it; it is expected to have Planck-scale degrees of freedom. Thus we cannot say that we have low energy effective field theory at the horizon, and then use this low energy field theory for the purpose of describing all possible low energy observations of an infalling observer. Instead, we conjecture a complementary description for hard-impact processes involving $E\\gg kT$ quanta.\n\n(f) The complementarity conjecture is now the following (Fig.\\;\\ref{fz2p}(e),(f)). \nGiven a hard-impact process involving $E\\gg kT$ quanta, the resulting dynamics can be reproduced to a first approximation by the geometry of the black hole interior, for times of order crossing time (i.e. before the quanta reach the singularity). This description emerges from the fuzzball dynamics as follows. The $E\\gg kT$ quanta excite collective modes of the fuzzball. To a first approximation, the evolution of these modes is insensitive to the precise choice of fuzzball microstate (assuming we have taken a generic microstate). The evolution of these collective modes in this leading approximation is to be encoded in the complementary description. Thus, let the initial state of the hole have mass $M$ and be the linear combination of fuzzball states $\\sum_i C_i |F_i\\rangle$. 
When a quantum of energy $E\gg kT$ impacts hard onto the fuzzball surface, the wavefunction of the fuzzball shifts to a combination $\sum_j C'_j |F'_j\rangle$ over the fuzzball states with mass $M+E$:\n\begin{equation}\n\sum_i C_i |F_i\rangle~\rightarrow ~\sum_j C'_j |F'_j\rangle\n\label{evolve}\n\end{equation}\nIf $E\gg kT$, then the number of coefficients $C'_j$ is much larger than the number of $C_i$. The leading order evolution of the coefficients $C'_j$ is to be captured by the complementary description. \n\n(g) We can now see the similarities and differences with the toy example of AdS\/CFT duality discussed in Section \ref{ads}: \vspace{-5mm}\n\begin{enumerate}[(i)]\n\t\item The D1D5 brane degrees of freedom are analogous to degrees of freedom at the `boundary' of the fuzzball microstate. \vspace{-2mm}\n \item The D1D5 branes were taken to be in their ground state,\footnote{We can take excited states of the D1D5 branes, but in AdS\/CFT duality we take these to be low energy excitations, and their effect in the dual gravitational description will occur near $r=0$, not near the place where the CFT is placed.} while the fuzzball structure differs microscopically from state to state. Thus we get only approximate complementarity in the black hole case, by looking at hard-impact, $E\gg kT$ processes where the details of the fuzzball microstate become irrelevant. \vspace{-2mm}\n \item In the AdS\/CFT case the complementary description was possible because of the closely spaced levels of the D1D5 brane system. In the black hole case we again have a close spacing of levels, which is guaranteed by the large number $\exp[S_{bek}]$ of fuzzball microstates. \n\end{enumerate}\n\n\parskip=3pt\n\n\subsection{Summary}\n\nTo summarize, we have observed the following:\n\n(a) In our toy example of AdS\/CFT duality, we have a brane description, where an incoming quantum appears to hit some degrees of freedom violently and `break up'. 
In a `complementary description', the incoming quantum passes smoothly into an AdS region. There is no radiation from the AdS boundary itself, so there is no creation of entangled pairs at that location.\n\n(b) In traditional complementarity, one argues that there are two equivalent descriptions, a fact allowed by the limitations on communication between observers inside and outside the hole. In one description (that of the outside observer)\nincoming quanta are reflected back as Hawking radiation from a stretched horizon, while in another description (that for an infalling observer) the horizon is a smooth place. Since there is a horizon, there is a creation of entangled pairs (\ref{two}) in a smooth slicing at that location, and there is no clear mechanism to remove this entanglement.\n\n(c) In fuzzball complementarity, there are real degrees of freedom at the horizon which arise from the fact that for each black hole microstate, the compact directions pinch off in a mess of string sources and spacetime ends before we reach $r=2M$. The details of this `fuzzball' differ from microstate to microstate; there is no Hawking type creation of entangled pairs and the radiation from the fuzzball surface can be explicitly seen to carry information of the microstate. Since the fuzzball surface differs from microstate to microstate, complementarity can only be obtained in an approximation where the effect of these differences is small. The conjecture is that when $E\gg kT$ quanta impact the fuzzball, they excite collective modes that are relatively insensitive to the precise choice of microstate; the evolution of these modes (\ref{evolve})\n can be approximated by evolution in a spacetime that mimics the black hole interior. \n \n\section{Limits on measurements made outside the horizon}\label{secfour} \n\n\n\nIn this section we address the following question. 
If we measure the radiation outside a black hole, then can we tell the difference between a traditional black hole and an object that radiates unitarily at the same temperature $T$ from a surface just outside $2M$?\n\nThe measurements we are interested in are close to the horizon ($|r-2M|\ll 2M$), so we can consider the near horizon geometry depicted in Fig.\;\ref{fz6}. In Fig.\;\ref{fz6}(a) we have the traditional black hole, which has vacuum around the horizon, so the near horizon region looks like Minkowski space when seen in Kruskal coordinates. In Fig.\;\ref{fz6}(b) we have a warm surface placed just outside $r=2M$ (indicated by the jagged line), and this surface is assumed to radiate quanta at the temperature $T$ of the black hole.\n\n\n \begin{figure}[htbp]\n\begin{center}\n\includegraphics[scale=.75]{fz6.eps}\n\end{center}\n\caption{(a) An inertial detector in Minkowski space, making a measurement using only the indicated part of its trajectory. Vacuum fluctuations excite the detector. (b) A similar detection, but for the case of a warm body radiating into the right Rindler wedge. The wavelength of quanta is of the same order as the distance from the horizon. (c) Radiation from a `hot' body, where the wavelength is much shorter than the distance from the horizon.}\n\label{fz6} \n\end{figure} \n\nAt first it may appear that the case of Fig.\;\ref{fz6}(b) has real radiation that can `burn', while there is no real radiation in Fig.\;\ref{fz6}(a). We {\it can} see quanta in Minkowski spacetime by taking a detector that accelerates. But our interest is in freely falling observers, which are indicated by the straight line trajectory in Fig.\;\ref{fz6}(a). One may expect that a detector moving in a straight line in Minkowski space should not detect any quanta. But the situation we have is a little special. 
We are asking if we can distinguish the physical situations of Fig.\;\ref{fz6}(a) and Fig.\;\ref{fz6}(b) by observations {\it outside the horizon}. Thus a detector trying to make a measurement would have to do this task by using only a section of its trajectory like that indicated in Fig.\;\ref{fz6}(a).\n\nBut if we place conditions on how long a detector has to make a measurement, then we run into the problem that we pick up vacuum fluctuations. We discuss the scales involved in the problem in Section \ref{sectime}. Suppose we are considering radiation at the Hawking temperature $T$. The wavelength of these quanta at a distance $d$ from the horizon is $\lambda\sim d$. The infalling detector trying to measure such quanta has a limited time to make this measurement, and we argue that this available time is less than the time required to make the desired measurement.\n\nIn Section \ref{rindler} we note that the above estimates reflect a general fact: for generic states of the radiating body in Fig.\;\ref{fz6}(b), observations of radiation do not appear statistically different from the vacuum fluctuations picked up by the detector of Fig.\;\ref{fz6}(a). The arguments we give are very basic to the theory of particle detection, and are implicit in many treatments of Rindler space (see e.g. the review \cite{Crispino:2007eb} and references therein).\footnote{We also thank Bill Unruh for an earlier conversation about detectors in Minkowski spacetime.}\n\n\n\n\subsection{Time needed for detector response}\label{sectime}\n\nLet us examine what kind of quanta a detector can actually pick up in a measurement process. 
In Appendix \ref{detect} we show that if we wish to measure a quantum of wavelength $\lambda$, then we need a proper time $\gtrsim\lambda$ to elapse along the detector trajectory:\n\begin{equation}\n\Delta \tau_{needed}\gtrsim \lambda\n\label{ten}\n\end{equation}\n In Fig.\;\ref{fz6} we note two different possibilities for the location of the quantum of wavelength $\lambda$. In Fig.\;\ref{fz6}(a),(b) the quantum is at a distance $d\sim \lambda$ from the black hole surface. In Fig.\;\ref{fz6}(c) the quantum is at a distance $d\gg \lambda$ from the black hole surface. In Appendix \ref{wavelength} we show that the Hawking quanta radiated from the black hole surface are of the former type; the typical wavelength found at a distance $d$ from the horizon is $\sim d$ itself:\n \begin{equation}\n \lambda \sim d\n \label{el}\n \end{equation}\n We now begin to see the source of difficulty in catching high energy Hawking quanta: we are already very close to the horizon when we encounter them, and then we may have too little time left to interact with them. Before proceeding, there is one effect that we must take into account. Because the detector is infalling, it sees the outgoing quantum as being Lorentz contracted; thus the wavelength of the quantum appears shorter than the distance $d$ measured along a $t=const$ slice. 
We take a local Lorentz frame oriented along the Schwarzschild $t, r$ directions, and let the proper velocity of the detector in this frame be\n \\begin{equation}\nU^{\\hat t}=\\cosh\\alpha, ~~~U^{\\hat r}=-\\sinh\\alpha\n\\end{equation}\nThen, as shown in Appendix \\ref{wavelength}, the effective wavelength of the Hawking quanta encountered by the infalling detector is\n\\begin{equation}\n\\lambda_{eff}\\sim d e^{-\\alpha}\n\\end{equation}\nNow we consider the proper time available to an infalling detector to measure the Hawking quantum; this detection must be made between the time the detector is at a distance $\\sim d$ from the horizon and the time it falls through the horizon. In Appendix \\ref{time} we show that for a detector falling in from far outside the horizon, this proper time is\n \\begin{equation}\n \\Delta \\tau_{available}< de^{-\\alpha}\n \\label{tw}\n \\end{equation}\n Putting together (\\ref{ten}), (\\ref{el}) and (\\ref{tw}) we get\n \\begin{equation}\n \\Delta \\tau_{available}< \\Delta \\tau_{needed}\n \\end{equation}\n so we conclude that an infalling detector cannot reliably pick up Hawking quanta being radiated from a black hole surface. We now turn to comparing the behavior of detectors in the situations of Fig.\\;\\ref{fz6}(a) and Fig.\\;\\ref{fz6}(b).\n\n\n\\subsection{Detectors in Rindler space and detectors near warm bodies} \\label{rindler}\n\n\nLet us consider the following question. We look at the situation of Fig.\\;\\ref{fz6}(a), where we have an inertial detector in empty Minkowski space, but the detection is required to be made before the detector crosses the Rindler horizon. We can therefore capture our physics by using Rindler coordinates covering the right Rindler wedge \n\\begin{equation}\nt_M=r_R \\sinh t_R, ~~~x_M=r_R \\cosh t_R\n\\end{equation}\nwhere $t_M, x_M$ are the Minkowski coordinates and $r_R, t_R$ are the Rindler coordinates. Now consider the behavior of the detector as seen in these Rindler coordinates. 
The space near the horizon looks very hot; it is full of Rindler quanta. Would these quanta `burn' the infalling detector?\n\n\nAt first one may think that an inertial detector in Minkowski space should see nothing. But we have already noted above that the limits placed on the measuring time cause the detector to be excited by vacuum fluctuations. We will now see that such an excitation is of the same kind as that expected in Fig.\;\ref{fz6}(b), where we have `real' quanta being radiated at the Rindler temperature by a surface placed just outside the Rindler horizon. \n\nLet the quanta being detected correspond to a scalar field $\phi$, which is taken to be in the Minkowski vacuum state $|0\rangle_M$. Since our observations are confined to the right Rindler wedge, we can use the expansion of the field operator in Rindler modes\n\begin{equation}\n\hat \phi=\sum_\omega [f_\omega(r_R)e^{-i\omega t_R}\, \hat a_\omega+\nf^*_\omega(r_R)e^{i\omega t_R}\, \hat a_\omega^\dagger]\n\end{equation}\nLet the detector be a 2-level system. We will take it to start in the unexcited state $|i\rangle$, and interactions with $\phi$ can move it to the state $|f\rangle$. The interaction is described by $\int d\tau \, \hat H_{int}(\tau)$ where (see e.g.~\cite{Crispino:2007eb})\n\begin{equation}\n\hat H_{int}(\tau)=q \, h(\tau)\, \hat O(\tau)\, \hat \phi \bigl( t_R(\tau), r_R(\tau)\bigr)\n\label{qint} \n\end{equation}\nHere $\hat O$ is an operator made out of the detector variables, $q$ is a coupling constant and $0\le h(\tau) \le 1$ is a `switching function' that allows us to switch on and switch off the interaction of the detector with the scalar field $\phi$. 
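The ensemble-averaging logic used with this detector model (thermal weights $|C|^2 e^{-E_k}$ for the Minkowski vacuum versus random microstate coefficients $C_k$ with $\langle |C_k|^2\rangle=|C|^2 e^{-E_k}$) can be illustrated numerically. The following is only a minimal Monte Carlo sketch: the energy levels and the squared amplitudes $|{\cal A}_{kk'}|^2$ are made up, not the actual Rindler matrix elements, and normalization fluctuations of the microstates are ignored.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy spectrum: a handful of Rindler energy levels, in units of the temperature.
E = np.linspace(0.5, 5.0, 10)
p = np.exp(-E)
p /= p.sum()                     # thermal weights p_k, playing the role of |C|^2 e^{-E_k}

# Made-up squared transition amplitudes |A_{kk'}|^2; their values do not matter,
# only that both calculations below use the same ones.
A2 = rng.random((10, 10))

# Vacuum case: excitation probability with thermal weights p_k.
P_minkowski = np.sum(p[:, None] * A2)

# Microstate case: random complex coefficients C_k with <|C_k|^2> = p_k.
def microstate_probability():
    C = (rng.normal(size=10) + 1j * rng.normal(size=10)) * np.sqrt(p / 2)
    return np.sum((np.abs(C) ** 2)[:, None] * A2)

# Average over many draws of the microstate coefficients.
P_microstate = np.mean([microstate_probability() for _ in range(20000)])

# The ensemble average reproduces the vacuum result to within Monte Carlo error.
assert abs(P_microstate - P_minkowski) / P_minkowski < 0.05
```

The agreement holds for any fixed set of amplitudes because the averaged probability is linear in $|C_k|^2$; an individual microstate fluctuates around the mean, and these fluctuations become unimportant when a macroscopic detector samples many quanta.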
\n\nThe Minkowski vacuum $|0\rangle_M$ can be written in terms of Rindler states of the left (L) and right (R) wedges\n\begin{equation}\n|0\rangle_M=C\sum_k e^{-{E_k\over 2}}|E_k\rangle_L|E_k\rangle_R, ~~~~~~~C=\Big (\sum_k e^{-E_k}\Big )^{-{1\over 2}}\n\label{split}\n\end{equation}\nNow suppose the interaction is switched on for a brief period as indicated in Fig.\;\ref{fz6}(a). Before the interaction is switched on, the state of the overall system is\n\begin{equation}\n|\Psi\rangle_i=|i\rangle \otimes C\sum_k e^{-{E_k\over 2}}|E_k\rangle_L|E_k\rangle_R\n\label{qstate}\n\end{equation}\nUsing first order perturbation theory in the strength of the interaction $q$, we ask for the amplitude for the transition\n\begin{equation}\n|i\rangle\otimes |E_k\rangle_L|E_k\rangle_R~\rightarrow~|f\rangle\otimes |E_k\rangle_L|E_{k'}\rangle_R\n\end{equation}\nThis amplitude is\n\begin{equation}\n{\cal A}_{kk'}=-i \int_{-\infty}^\infty d\tau\, h(\tau)\, \langle f | \hat O |i\rangle {}_R\langle E_{k'} |\hat\phi \bigl( t_R(\tau), r_R(\tau)\bigr )|E_k\rangle_R\n\end{equation}\nThe quantity ${}_R\langle E_{k'} |\hat\phi \bigl(t_R(\tau), r_R(\tau)\bigr)|E_k\rangle_R$ can be easily computed by writing $|E_k\rangle_R$ in terms of the occupation numbers for different Rindler modes and using the field expansion given above. Note that $h(\tau)$ is nonzero only over the part of the detector trajectory indicated in Fig.\;\ref{fz6}(a). \n\nThe probability for the detector to get excited $|i\rangle\rightarrow |f\rangle$ is then\n\begin{equation}\nP_{Minkowski}=|C|^2\sum_k e^{-E_k}\sum_{k'}|{\cal A}_{kk'}|^2\n\end{equation}\nwhere the subscript on $P$ indicates that this computation was performed for the Minkowski vacuum situation of Fig.\;\ref{fz6}(a). 
Here the factor $e^{-E_k}$ reflects the fact that the probability of finding the state $|E_k\rangle_R$ in the state (\ref{split}) is\n\begin{equation}\np_{E_k}=|C|^2e^{-E_k}\n\label{wone}\n\end{equation}\n\nNow consider a state that describes a warm body at the same temperature as Rindler space, as shown in Fig.\;\ref{fz6}(b). In terms of Rindler eigenstates, this state has a form\n\begin{equation}\n|\Psi\rangle=\sum_k C_k |E_k\rangle\n\label{stateq}\n\end{equation}\nDifferent microstates of the warm body have different coefficients $C_k$, but the ensemble average over possible microstates will have\n\begin{equation}\n\langle |C_k|^2\rangle=|C|^2 e^{-E_k}\n\label{approximation}\n\end{equation}\nin agreement with (\ref{wone}).\nWe again consider the infalling detector with the same interaction (\ref{qint}). With the state (\ref{stateq}) the probability for the detector to get excited is\n\begin{equation}\nP_{microstate}=\sum_k \sum_{k'}|C_k|^2 |{\cal A}_{kk'}|^2\n\end{equation}\nUsing (\ref{approximation}) we find that the ensemble average of the excitation probability for radiation from `warm bodies' is the same as the excitation probability in the Minkowski vacuum when the detection range is confined to be outside the horizon\n\begin{equation}\n\langle P_{microstate}\rangle=P_{Minkowski}\n\end{equation}\nIn particular, if the infalling body is macroscopic so that it `measures' a large number of quanta, then the effect of radiation in any one microstate will be approximately the same as the effect of vacuum fluctuations when \nwe restrict to the part of the observer worldline that is outside the horizon:\n\begin{equation}\n P_{microstate}\approx P_{Minkowski}\n \label{eqburn}\n\end{equation}\nA similar effect is also obtained when we consider a detector that has fallen in from near infinity. 
Quanta at infinity with wavelength $\\lambda$ are wavepackets that have a transverse size $\\Delta \\gtrsim\\lambda$; this is necessary since otherwise the uncertainty principle will give the quantum more transverse momentum $\\sim 1\/\\Delta$ than radial momentum, and the quantum will not really be headed towards the black hole. As the quantum comes closer to the horizon, the wavelength in the radial directions becomes small by blue-shifting, while the transverse size $\\Delta$ remains unaffected. Thus all quanta falling in from infinity are `flattened' near the horizon. The largeness of $\\Delta$ compared to the radial wavelengths of Hawking quanta near the horizon means that several Hawking quanta at different angular positions along the horizon can interact with the infalling quantum. Thus we are again led to compute statistical averages, getting a result like (\\ref{eqburn}).\n\n\n\\subsection{Summary}\n\n\nTo summarize, we have compared measurements made by an infalling detector in the case of Minkowski space (Fig.\\;\\ref{fz6}(a)) and in the case of a warm body at the same temperature (Fig.\\;\\ref{fz6}(b)). These two cases are equivalent to the traditional black hole and to a black object with a radiating surface just outside the horizon. While one might at first think that the detector would measure very different things in the two cases, we find that the detector excitation probabilities are actually {\\it similar}. The underlying reason for the similarity is the fact that we need the detection to be completed before the detector reaches the horizon, and this causes vacuum fluctuation excitations in the Minkowski space case that resemble the `real' quanta picked up in the warm body case.\n\nWhile fuzzballs radiate at exactly the rate expected for Hawking emission, one may envisage a theory other than string theory where the quanta are emitted with energy \n\\begin{equation}\nE\\gg kT\n\\end{equation}\nwith $T$ the Hawking temperature. 
In other words, we may give up the thermal spectrum of emission, and have the situation pictured in Fig.\\;\\ref{fz6}(c) where the emitted quantum has wavelength $\\lambda\\ll d$ at a distance $d$ from the horizon. In this case it {\\it is} possible to make a reliable measurement of the quantum, since ample time is available before the detector reaches the black hole surface. But in this case the emitted radiation will not carry away all the information of the black hole. This follows because the entropy of Hawking radiation (at temperature $T$) is just $\\sim 1.3$ times the Bekenstein entropy $S_{bek}$ \\cite{zurek}. Taking $E\\gg kT$ will give us $N\\ll S_{bek}$ quanta to carry out the $S_{bek}$ bits in the black hole, and this is not possible since each quantum carries $\\sim 1$ bit of information.\n\n\n\n\n\\section{The AMPS argument}\\label{secfive}\n\nIn this section we examine the main argument of Almheiri, Marolf, Polchinski and Sully (AMPS). We will note that the measurement they envisage cannot be performed reliably in the given situation, and further, that no conclusions about fuzzball complementarity can be drawn from such a situation. 
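As an aside, the statistical equality $\langle P_{microstate}\rangle=P_{Minkowski}$ of (\ref{eqburn}) can be illustrated with a toy numerical model. This is only a sketch: the level spacing, the stand-in values for $|{\cal A}_{kk'}|^2$, and the sample size below are illustrative choices, not quantities computed from the Rindler field expansion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: a few Rindler energy levels E_k and fixed (arbitrary)
# transition probabilities |A_{kk'}|^2; these stand in for the actual
# matrix elements and are NOT derived from the Rindler mode functions.
K = 6
E = np.arange(K, dtype=float)          # toy spectrum E_k = k, in units of the temperature
C2 = 1.0 / np.exp(-E).sum()            # |C|^2 fixed by the normalization of the vacuum state
A2 = rng.random((K, K))                # stand-in values for |A_{kk'}|^2
row = A2.sum(axis=1)                   # sum over final states k'

# Minkowski vacuum case: thermal weights |C|^2 e^{-E_k}
P_minkowski = float(np.dot(C2 * np.exp(-E), row))

# Warm-body microstates: random complex C_k with <|C_k|^2> = |C|^2 e^{-E_k}
n = 200_000
sigma = np.sqrt(C2 * np.exp(-E) / 2.0)
Ck = sigma * (rng.standard_normal((n, K)) + 1j * rng.standard_normal((n, K)))
P_micro = float(np.mean(np.abs(Ck) ** 2 @ row))

# ensemble average over microstates reproduces the vacuum result
print(P_minkowski, P_micro)
```

The only input that matters is the ensemble condition $\langle |C_k|^2\rangle=|C|^2e^{-E_k}$: any fixed set of transition probabilities then gives the same ensemble-averaged excitation probability as the thermal weights of the Minkowski vacuum.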
\n\n\nIn outline, the AMPS argument goes as follows: \n\begin{enumerate}[(i)]\n\t\item If Hawking evaporation is unitary, then the state near the horizon is not the vacuum in an infalling observer's frame, but involves high-energy excitations.\n\t\vspace{-3mm}\n\t\item If there are high-energy excitations near the horizon, then an infalling observer will measure physical high energy quanta emerging from the black hole, and get burnt.\n\t\vspace{-3mm}\n\t\item If the observer gets burnt, then we cannot have any complementary description where he falls through without noticing anything at the horizon.\n\end{enumerate}\n \n\n\n\nWe examine each of these steps in turn.\n\n\bigskip\n\n\noindent{ {\bf (i) The need for large corrections at the horizon}}\n\n In \cite{cern} it was shown, using strong subadditivity, that semiclassical physics at the horizon cannot lead to the behavior of entanglement entropy $S_{ent}$ that is expected for normal bodies \cite{page}. The behavior for $S_{ent}$ is depicted in Fig.\;\ref{fz9}. AMPS try to summarize a version of this argument, but miss a crucial step. Since this point is important, we would like to clarify it before continuing with the AMPS argument.\n\n \begin{figure}[t]\n\begin{center}\n\includegraphics[scale=.75]{fz9.eps}\n\end{center}\n\caption{(a) The growth of entanglement entropy for the traditional black hole in the leading order Hawking computation (solid line), and with small corrections allowed (dashed line). (b) The entanglement entropy expected for a normal body \cite{page}; $S_{ent}$ must return to zero when the body radiates away completely.}\n\label{fz9} \n\end{figure}\n\nConsider the Hawking pair (\ref{two}) produced in the leading order Hawking process; let the outer and inner members of this pair be called $B, C$ respectively. 
AMPS consider this leading order process, a fact which is implicit in their assumption that\n$S_{BC}=0$;\ni.e., the produced pair is not entangled with anything else (Fig.\;\ref{fz7}(a)). They then use strong subadditivity to argue that $S_{ent}$ cannot return to zero as it should for normal bodies. But this situation does not need the powerful relation of strong subadditivity. In the leading order Hawking process the relation (\ref{two}) tells us that the state of the created pairs is a tensor product of individual pairs (eq. (17) of \cite{cern}), and so $S_{ent}=N\ln 2$ after $N$ pairs have been produced. This gives the linearly increasing graph of Fig.\;\ref{fz9}(a), and we do not need strong subadditivity to prove that $S_{ent}$ does not return to zero.\n\n\begin{figure}[htbp]\n\begin{center}\n\includegraphics[scale=.75]{fz7.eps}\n\end{center}\n\caption{(a) Creation of entangled pairs in the leading order Hawking computation. (b) Small corrections; if these could reproduce the graph Fig.\;\ref{fz9}(b) then we would not need a firewall. (c) A firewall that one can pass through; now one can detect the quanta near the horizon. (d) In a fuzzball, spacetime ends before the horizon. Hawking radiation is an integral part of the dynamics of the fuzzball.}\n\label{fz7} \n\end{figure}\n\n\n\n\nThe important issue, as discussed in Section \ref{basics}, is whether {\it subleading} corrections to the leading order Hawking process can make $S_{ent}$ reproduce the behavior of a normal body.\footnote{The possibility that this might happen was raised in \cite{eternal}. Hawking's reversal of his belief that information is lost was also implicitly based on the assumption that exponentially small corrections to the leading order process would produce an unentangled state \cite{hawkingreverse}.} If small corrections could do the job, then we {\it cannot} conclude that there would be a firewall; we depict this in Fig.\;\ref{fz7}(b). 
To analyze small corrections we have to start from \n$S_{BC}=\\epsilon$\nand then we do need to use strong subadditivity\n to establish the required inequality (\\ref{three}).\n\nTo summarize, a smooth vacuum at the horizon leads to the creation of Hawking pairs (\\ref{two}), and with (\\ref{three}) we see that we cannot get information out in Hawking radiation. Thus if we do wish to have the radiation be unitary, then we must alter the structure of the modes involved in the Hawking process. One may try to restrict the required change to just these modes; this requires us to invoke as yet undiscovered nonlocal effects \\cite{giddings}. If we choose to not do this, then we have an alteration of the physics for {\\it all modes} at the horizon. AMPS take the latter route\\footnote{They consider the possibility of nonlocal effects in a separate discussion later.}, and then consider an experiment: they let an infalling observer fall into such a hole and argue he will get `burnt' by the altered structure at the horizon. Further, they argue that getting burnt in this way precludes the possibility of complementarity. We now examine each of these issues in turn.\n\n\n\\bigskip\n\n\\noindent{ {\\bf (ii) Getting burnt by Hawking quanta}}\n\n\nHere AMPS wish to distinguish the traditional black hole from a body that radiates at the Hawking temperature from a surface just outside the horizon. They argue that in the case of the radiating body an infalling observer will observe high energy quanta, while there will be no such quanta observed for the traditional black hole. Let us see what questions we can ask:\n\n\\bigskip\n\n(a) The temperature of the radiation is $T\\sim m_{p}$ at a distance $l_{p}$ from \nthe horizon. 
If we wish to avoid Planck-scale physics, then we can try to focus on the radiation at a distance $d$ from the horizon with\n\begin{equation}\nl_p\ll d\ll 2M\n\end{equation}\nFor concreteness, let us think of $d\sim 10^6 l_p$, where we expect the temperature to be high enough to `burn' but the physics is still not the unknown physics at Planck scale. For instance, in Fig.\;1 of \cite{amps}, one can ask if the infalling detector gets burnt during the part of its trajectory where it crosses the shaded region representing the outgoing Hawking quantum. To restrict our question to the required region, we consider the effect of the radiation on a detector which is switched off when we get to a distance closer than $\sim 10^6 l_p$ from the horizon. But in such a situation we have noted in Section \ref{rindler} that the excitation probabilities of the detector in the case of the traditional horizon and in the case of the firewall are statistically the {\it same} (eq.(\ref{eqburn})). Thus we cannot say that we will get burnt in one case and not the other. \n\n\n(b) The above equality of excitation probabilities resulted from the fact that we were allowed a limited time to make the detection; thus vacuum fluctuations excited the detector even for the traditional hole. We could allow ourselves a longer time for detection if we assumed that we could pass through the firewall to the other side of the horizon, and again find ourselves in a region of low temperature. Such a possibility is pictured in Fig.\;\ref{fz7}(c), and in this case we would excite the detector for the firewall but not for the traditional hole. But to have this `other side' to the firewall we need to pass through a `wall' of Planck-scale physics. 
Since the strength of gravitational interactions increases with energy, we can expect that the largest interactions would occur when the detector is crossing the region of Planck temperature, and so we cannot focus on the issue of detection of quanta with wavelength $\sim 10^6 l_p$ without asking if the theory allows us to pass through the Planck temperature region. (In particular, it is hard to imagine a physical model reproducing Fig.\;\ref{fz7}(c) which has smooth space on both sides of a Planck energy region.)\nAs we note below, the fuzzball microstates of string theory do {\it not} allow us to pass through the Planck temperature region; spacetime ends there in a stringy mess. Further, as we will note in part (iii) below, interaction with the Planck-scale degrees of freedom is not what precludes the kind of complementarity that we find with fuzzballs; instead, it is this interaction which transfers information to the collective modes of the fuzzball and leads to a complementary description.\n\n(c) In Fig.\;\ref{fz7}(d) we depict the situation with fuzzballs. The incoming quanta cannot pass through the fuzzball surface, and so they transfer their energy to excitations of the fuzzball. The fuzzball details and the radiation it emits are parts of the same structure: the radiation is the small time dependent part of the gravitational solution away from $r\approx 2M$. The response of the Planck-scale degrees of freedom is encoded in the response (\ref{evolve}) of the fuzzball, and this effect is expected to dominate over interactions with the radiation tail.\n\n\nOne thing is important to note about this interaction. Let the infalling observer be made of degrees of freedom that evolve slower than the Planck scale. Then the observer does not evolve significantly between the time that its coupling to the radiation becomes significant and the time it reaches the fuzzball boundary. Thus it is not clear what `burning' means in this context. 
The correct question to focus on is not the evolution of the infalling observer, but rather the evolution (\\ref{evolve}) of the fuzzball degrees of freedom that the observer impacts.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\bigskip\n\n\\noindent{ {\\bf (iii) The possibility of complementarity}}\n\nFinally, let us address the issue of complementarity. The AMPS paper claims that their argument applies to the proposal of fuzzball complementarity. We can paraphrase this as claiming that, even for the simplest of hard-impact processes involving high-energy quanta, if an infalling quantum `burns up' at the Planck-temperature surface of the fuzzball, then there cannot be any other approximate complementary description involving free infall. This desire to avoid any interaction is suggested by the traditional proposal of complementarity. But as we have seen in Section \\ref{trad}, there are difficulties with traditional complementarity, and this is not the kind of complementarity that we have proposed. \n\nConsider first our toy example of AdS\/CFT duality. A graviton falling onto a D1D5 brane bound state {\\it did } interact strongly on reaching these branes and broke up into a pair of excitations. Yet there was a complementary description where it passed smoothly into an AdS region. Similarly, $E\\gg kT$ gravitons falling onto the fuzzball surface {\\it do} interact strongly with the surface and excite the collective dynamics of the fuzzball degrees of freedom; it is this collective dynamics (\\ref{evolve}) which will have a dual representation where to a first approximation the graviton will appear to fall through a horizon.\n\nThe moral we draw is that it is incorrect to conclude that complementarity would be impossible if an object encountered strong interactions near the horizon. 
The situation is quite the opposite: we need {\\it strong} interactions near the horizon to absorb the energy of the infalling quantum into the black hole's degrees of freedom to get `fuzzball complementarity'. If the absorption leads to an approximately faithful map of the infalling quantum's Hilbert space into a subspace of the black hole degrees of freedom, then we have the possibility of a complementary description of the infall. \n\n\n \n\n\\section{Discussion}\\label{secsix}\n\nIn this paper we have done two things: we summarized how complementarity is conjectured to work with fuzzballs, and we noted how the AMPS argument fails to address the underlying physics in this conjecture. In the discussion below we will put these two parts together, to see more directly where the AMPS argument goes wrong. In short, we will see that complementarity is a story of {\\it two} descriptions of the physics, while AMPS try to have elements of both descriptions in the {\\it same} setting. \n\nFor the discussion below, it is helpful to summarize one version of the AMPS argument as follows:\n\n\\bigskip\n\n(1) Suppose the infalling observer sees nothing around $r=2M$ in some\ndescription.\n\n(2) Then in this description we have a smooth patch of spacetime around\nthe horizon.\n \n(3) Evolution of vacuum modes in this smooth patch will lead to an entangled Hawking pair, and this\nwill lead to the information problem.\n\n\\bigskip\n\nThe problem with this argument is that the description in which we have (1) (i.e. smooth spacetime) is valid only as an approximation for describing the physics of hard-impact $E\\gg kT$ infalling quanta for short times (order $\\sim M$). 
One cannot use the effective smooth spacetime of this approximation to describe the entanglement of Hawking pairs over the much longer Hawking evaporation time (order $\sim M^2$); in particular one cannot relate the effective description of (1) to the information problem which needs us to talk about the details of $\sim (M\/m_p)^2$ Hawking pairs.\n\n \n Let us now see in more detail how things actually work:\n \n \bigskip\n \n (a) The microstates of the black hole are fuzzballs, which means that the gravitational solution ends just outside $r=2M$ when the compact directions pinch off; the structure at this location is a quantum mess of KK monopoles, strings, fluxes, etc. (i.e. the set of allowed sources in string theory). \n \n (b) $E\sim kT$ radiation is emitted from these sources, carrying the information of the microstate. A simple model to keep in mind is the computation of \cite{radiation}, where ergoregions near the fuzzball surface emit quanta by ergoregion emission. From this computation we learn that there is no sharp separation between the radiation and the fuzzball: the gravitational field in the ergoregion is unstable and radiates gravitons. If we follow these emitted gravitons back to their source, then we find more and more nonlinear gravitational physics, culminating in the `cap' where the fuzzball solution ends in KK monopoles etc. Thus whenever we ask if we interact with emitted quanta, we might as well go all the way and ask if we interact with the fully nonlinear `cap'. \n \n (c) We recalled the toy example of AdS\/CFT, which has similarities and differences with the black hole case. For now we look at the similarities. Suppose we have a bound state of $N$ D1 and $N$ D5 branes. The infalling quantum of Fig.\;\ref{fz2p}(a) impacts this collection of branes and transfers its energy into excitations of the branes. 
Similarly, a quantum falling onto the fuzzball transfers its energy to the string theoretic sources (KK monopoles etc) on the fuzzball surface. In (b) we had noted that the radiation from the fuzzball was just the tail end of the full nonlinear KK monopole `cap', so interactions with radiation near the horizon are included in this description. \n \n \n \n \n\n(d) But in the AdS\/CFT case, there is a second description, that of Fig.\\;\\ref{fz2p}(b), where the infalling quantum sails smoothly through into an AdS region. In this description we do not see the D1 and D5 branes as something that can be `hit'. In the analogous case of fuzzballs, an infalling object does not see the nonlinear KK monopoles etc. near $r=2M$, but instead sails through smoothly. In particular, it does not see the `tail end' of the nonlinear structure -- the radiation from the fuzzball -- as high energy quanta that can be `hit'. \n\n(e) To understand how it is possible for the infalling observer to sail through smoothly, consider first the AdS\/CFT example. If the D1,D5 branes were `inert'; i.e., they did not shift their internal state when the infalling object approached, then there would not be any description where the object `sailed through'. But in fact the D1D5 brane bound state has a {\\it vast} space of internal excitations, and this changes the situation: the approach of the infalling object creates excitations in this vast space of possibilities, and the dynamics of these excitations is the dominant physics of the combined branes+object system. It is this dynamics that is described by the smooth infall into AdS space.\\footnote{One often thinks of AdS\/CFT duality as saying that the gravity variables in AdS can be re-expressed in terms of gauge theory variables on the boundary of AdS. But the origin of this duality is in the context of absorption by D-branes and black holes, and in that context the natural process to consider is the infall of quanta from infinity onto the branes. 
The largeness of $N$, the number of branes, leads to the excitation spectrum of the branes being very dense, and the effective AdS description emerges.}\n\n\n\n(f) The fuzzball has a similarly large phase space of deformations, since the number of fuzzball solutions is $Exp[S_{bek}]$. Now we see the basic element missing from the analysis of AMPS. They ask for the dynamics of the infalling object (what it measures etc.) but they ignore the fact that the much more important dynamics is the change of the state of the {\it fuzzball}: $\sum_i C_i |F_i\rangle\rightarrow \sum_j C'_j |F'_j\rangle$. This latter dynamics is so dominant that one must consider the infalling object and fuzzball as one unified system and then analyze the dynamics. When infalling quanta with energy $E\gg kT$ fall freely onto the fuzzball from far away, the conjecture is that the resulting dynamics has an approximate description valid for short times (order $\sim M$) that mimics infall through a smooth horizon \cite{plumpre,plumberg,otherfcrefs}; this is analogous to how in the AdS\/CFT case the object falls through smoothly into an AdS space. \n\n(g) The approximate nature of the `smooth infall' description is important. Since this description is valid only over a time of order $\sim M$, we cannot use this patch of smooth space to argue that entangled Hawking pairs will be created and will escape to large distances from the black hole. There is hardly time to create one pair in such a region. We cannot join together many such patches to argue that we have created many entangled pairs, since the description is only valid for short times and does not accurately track $E\sim kT$ physics. 
The existence of many entangled pairs would have led to the information problem as discussed in Section \ref{basics}(b),(c); this problem does not arise here since we cannot study the creation of a large set of such pairs in our approximate `smooth infall' description.\n\n\bigskip\n\n \nTo summarize, the error in the AMPS argument can be seen by considering the infall of an observer into a stack of branes. These branes are in a particular internal state, which can be probed by patient low energy scattering experiments from infinity. But the infalling observer reports none of this structure as he approaches the branes; he feels as if he is falling through empty AdS space. This `magical disappearance' of the branes can be traced to the fact that the branes have a vast set of internal states, and the dominant effect of the approach of the observer is to alter the internal state of the branes. Thus the dynamics of the \nbranes+observer system is governed by the evolution of these newly created excitations, and not by the observer scattering off a fixed state of the branes. \n\nWe can now see the fundamental role that the fuzzball construction plays in resolving the puzzles with black holes. If we have the traditional Penrose diagram of the hole, with vacuum at the horizon, then we get the creation of entangled pairs, and we cannot evade the Hawking information loss problem \cite{hawking,cern}. But in string theory we find that there is very nontrivial structure at the horizon: the KK monopoles etc at the fuzzball surface carry `real' degrees of freedom that radiate unitarily\nlike a normal body. This resolves the information paradox. But we can ask a different question: what happens when we consider hard impacts of high energy ($E\gg kT$) quanta on the fuzzball surface? In this case the physics is analogous to what we find in AdS\/CFT: the KK monopole and other string theoretic degrees of freedom on the fuzzball surface act like the branes in the D1D5 system. 
The infalling observer reports nothing special as he approaches these objects, since the dominant dynamics is that of {\it exciting} the fuzzball degrees of freedom, not the response of the observer. AMPS implicitly assume that they are falling towards a radiating surface that is {\it inert} to such excitations, and thus miss the physics of free infall which is common to the fuzzball and the AdS\/CFT cases.\n\nIn the context of the argument (1)-(3) listed at the start of this section, we see that the implication (1) $\rightarrow$ (2) is misleading. It is not that we don't have structure at the location of the branes; rather, the infalling observer does not report such structure. The patch of smooth spacetime in (2) is an effective description of the $\sum_i C_i |F_i\rangle\rightarrow \sum_j C'_j |F'_j\rangle$ dynamics which describes the excitations of the impacted fuzzball; it is not the actual gravitational solution at the horizon. The implication (2) $\rightarrow$ (3) does not work since this effective description cannot be applied to a region larger than $\sim M$ which would be needed to create a large number of entangled pairs. In general there are {\it two} descriptions involved: (i) the actual microscopic fuzzball which carries all information of the state and radiates unitarily, and (ii) the approximate short time description of collective modes, which mimics free infall. AMPS do not carefully differentiate between these two descriptions, and that leads them to claim an apparent contradiction with fuzzball complementarity. \n\nIn conclusion, the AMPS argument does not apply to the process by which complementarity is conjectured to arise in the fuzzball picture. But it is a very interesting argument to consider, since it brings out clearly the various important physical principles involved in the quantum dynamics of black holes. \n\n\section*{Acknowledgements}\n\nThis work was supported in part by DOE grant DE-FG02-91ER-40690. 
We thank the authors of \\cite{amps} as well as Iosif Bena, Borun Chowdhury, Stefano Giusto, Oleg Lunin, Lenny Susskind and Nick Warner for discussions. In particular we are grateful to Don Marolf for patiently explaining to us the nature of the AMPS argument.\n\n\n\\begin{appendix}\n\n\\section[Appendices]{Timescale for detection} \\label{detect}\n\nHere we note that a detector needs a proper time $\\Delta \\tau\\gtrsim \\lambda$ to detect a quantum of wavelength $\\lambda$. Since this argument is well known, we will describe it for the simple case of a detector at rest in the Minkowski vacuum; the extension to other situations is straightforward.\n\nWe assume for simplicity that the metric is time independent in our choice of coordinates. The field operator can be expanded as\n\\begin{equation}\n\\hat\\Phi=\\sum_k [{1\\over \\sqrt{2\\omega_k}} e^{i(kx-\\omega_k t)}\\hat a_k\n+{1\\over \\sqrt{2\\omega_k}}e^{-i(kx-\\omega_k t)}\\hat a_k^\\dagger], ~~~~[\\hat a_k, \\hat a_{k'}^\\dagger ] =\\delta _{k, k'}\n\\end{equation}\n We take the detector to be a harmonic oscillator\n\\begin{equation}\n\\hat\\Psi={1\\over \\sqrt{2\\Omega}} e^{-i\\Omega \\tau}\\hat A + {1\\over \\sqrt{2\\Omega}} e^{i\\Omega \\tau}\\hat A^\\dagger, ~~~[\\hat A, \\hat A^\\dagger]=1\n\\end{equation}\nThe interaction along the worldline is given by $\\int d\\tau \\, \\hat H_{int} (\\tau)$ where\n\\begin{equation}\n\\hat H_{int}(\\tau)=q \\, h(\\tau) \\, \\hat\\Phi\\bigl (t(\\tau), x(\\tau)\\bigr )\\hat \\Psi(\\tau)\n\\end{equation}\nHere $q$ is a coupling constant and $0\\le h(\\tau)\\le 1$ is a function that allows us to switch on and switch off the detector.\n\nWe start at $\\tau\\rightarrow -\\infty$ with the detector in the ground state: $\\hat A|0\\rangle_A=0$. Let us also take the spacetime to be empty of quanta: $\\hat a_k |0\\rangle_a=0$. We take first order perturbation theory in $q$. 
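The transition amplitude is then given by the standard first order formula\n\begin{equation}\n{\cal A}=-i\,\langle \Psi_f |\int_{-\infty}^{\infty} d\tau\, \hat H_{int}(\tau)\, |\Psi_i\rangle +O(q^2)\n\end{equation}\nwhere $|\Psi_i\rangle$ and $|\Psi_f\rangle$ denote the initial and final states of the combined field+detector system.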
The amplitude to reach the state $|1\rangle_A|1\rangle_{k} \equiv \hat A^\dagger \hat a_k^\dagger |0\rangle_A|0\rangle_a$ is\n\begin{equation}\n{\cal A}=-iq\int_{-\infty}^\infty d\tau {1\over \sqrt{2\omega_k}}{1\over \sqrt{2\Omega}}h(\tau)e^{i\Omega \tau}e^{-ikx(\tau)+i\omega_k t(\tau)}\n\end{equation}\nWe take\n\begin{equation}\nh(\tau)=e^{- ({\tau\over \Delta \tau})^2}\n\end{equation}\nwhich corresponds to making a measurement over an interval $\sim \Delta\tau$. \nWe also take the detector trajectory to be that of a detector at rest at $x=0$, which gives $x(\tau)=0, t(\tau)=\tau$ for all $\tau$. This gives \n\begin{equation}\n{\cal A}=-i q\int _{-\infty}^\infty d\tau {1\over \sqrt{2\omega_k}}{1\over \sqrt{2\Omega}}e^{- ({\tau\over \Delta \tau})^2}e^{i(\Omega+\omega_k)\tau}=-iq{1\over \sqrt{2\omega_k}}{1\over \sqrt{2\Omega}}\Delta\tau\sqrt{\pi}e^{-{1\over 4}(\Delta \tau)^2(\Omega+\omega_k)^2}\n\end{equation}\nKeeping the detector on for all time is equivalent to taking $\Delta\tau\rightarrow \infty$, in which case we get ${\cal A}=0$. So the detector does not get excited, which is expected since we started with empty Minkowski space. \n\nBut now consider a situation where the detector is switched on and off in a comparatively short interval, as would need to be the case if one were trying to detect a Hawking quantum by an infalling detector before the detector hit the black hole surface. For detection times shorter than the wavelengths we want to measure\n\begin{equation}\n\Delta\tau\lesssim {1\over (\Omega+\omega_k)}\n\end{equation}\nwe get\n\begin{equation}\n{\cal A}\sim -iq{1\over \sqrt{2\omega_k}}{1\over \sqrt{2\Omega}}\Delta\tau\sqrt{\pi}\ne 0\n\end{equation}\nso we pick up vacuum fluctuations in the detector. \n\nTo summarize, suppose we make a detector with frequency $\Omega$ to pick up quanta of wavelength $\lambda\sim \Omega^{-1}$. 
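As a quick numerical sanity check on the Gaussian integral above, one can verify $\int_{-\infty}^{\infty}d\tau\, e^{-({\tau\over \Delta\tau})^2}e^{i\omega\tau}=\sqrt{\pi}\,\Delta\tau\, e^{-{1\over 4}(\Delta\tau)^2\omega^2}$, where $\omega$ stands for $\Omega+\omega_k$. The sketch below uses dimensionless units and illustrative parameter values:

```python
import numpy as np

def switching_amplitude(dtau, omega, tmax=60.0, npts=400_001):
    """Numerically integrate exp(-(tau/dtau)^2) * exp(i*omega*tau) over tau."""
    tau = np.linspace(-tmax, tmax, npts)
    dt = tau[1] - tau[0]
    # endpoint contributions are negligible since the Gaussian has decayed,
    # so a simple Riemann sum is effectively a trapezoid rule here
    return np.sum(np.exp(-(tau / dtau) ** 2) * np.exp(1j * omega * tau)) * dt

def closed_form(dtau, omega):
    """sqrt(pi)*dtau*exp(-(dtau*omega)^2/4), the value quoted in the text."""
    return np.sqrt(np.pi) * dtau * np.exp(-0.25 * (dtau * omega) ** 2)

# short switching time: amplitude of order dtau (vacuum fluctuations picked up);
# long switching time: amplitude exponentially suppressed (no excitation)
for dtau in (0.2, 1.0, 5.0):
    num = switching_amplitude(dtau, omega=2.0)
    print(dtau, abs(num), closed_form(dtau, 2.0))
```

For $\Delta\tau\,\omega\ll 1$ the amplitude is of order $\Delta\tau$, while for $\Delta\tau\,\omega\gg 1$ it is exponentially suppressed; this is exactly the statement used in the text.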
Then the effect of vacuum fluctuations will be comparable to the effect of `real quanta' if\n\\begin{equation}\n\\Delta\\tau \\lesssim {1\\over (\\Omega+\\omega_k)} < {1\\over \\Omega}\\sim \\lambda\n\\end{equation}\n\n\n\n\\refstepcounter{section}\n\\section*{\\thesection \\quad Wavelength of Hawking quanta} \\label{wavelength}\n\n\n\nConsider the Schwarzschild black hole\n\\begin{equation}\nds^2=-(1-{2M\\over r})dt^{2}+{dr^2\\over 1-{2M\\over r}}+r^{2}{d\\Omega_2^{2}}\n\\label{oneq}\n\\end{equation}\nThe temperature is ${1\\over 8\\pi M}$, so the wavelength of Hawking quanta at infinity is $\\lambda_\\infty\\sim M$. The wavelength of such a quantum at any position $r$ is\n\\begin{equation}\n\\lambda\\sim (-g_{tt})^{1\\over 2}\\lambda_\\infty\\sim M(1-{2M\\over r})^{1\\over 2}\n\\end{equation}\nNear the horizon $(r-2M)\\ll 2M$ we can use Rindler coordinates\n\\begin{equation}\nt_R={t\\over 4M}, ~~r_R=\\sqrt{8M(r-2M)}\n\\label{sixt}\n\\end{equation}\nThis gives the metric in the time and radial directions\n\\begin{equation}\nds^2\\approx -r_R^2 dt_R^2+dr_R^2\n\\label{fift}\n\\end{equation}\nFrom now on we restrict attention to just these directions. \nIn this near-horizon region we have for the wavelength of radiated quanta\n\\begin{equation}\n\\lambda\\sim M^{1\\over 2} (r-2M)^{1\\over 2}\\sim r_R\n\\end{equation}\nFrom (\\ref{fift}) we see that the distance from the horizon measured on a constant $t_R$ slice is $d=r_R$. Thus if a black hole emits radiation at the Hawking temperature, then the wavelength of these quanta at a distance $d$ from the horizon is \n\\begin{equation}\n\\lambda\\sim d\n\\end{equation}\nThis is the wavelength measured along a slice of constant Schwarzschild time $t$. If this quantum is encountered by an infalling detector, then the effective wavelength will be Lorentz contracted. 
Let the proper velocity of the detector, \nin a local Lorentz frame oriented along the Schwarzschild $t, r$ directions, be \n\begin{equation}\nU^{\hat t}=\cosh\alpha, ~~~U^{\hat r}=-\sinh\alpha\n\label{eqalpha}\n\end{equation}\nThe momentum vector of an outgoing massless quantum in the local Lorentz frame is\n\begin{equation}\n(p^{\hat t}, p^{\hat r})\sim ({1\over \lambda}, {1\over \lambda})\n\end{equation}\nThe energy of the quantum as measured by the detector is then\n\begin{equation}\nE=-p_\mu U^\mu\sim {1\over \lambda}(\cosh\alpha+\sinh\alpha)={1\over \lambda} e^\alpha\n\end{equation}\nand the effective wavelength that is seen by the infalling detector is then\n\begin{equation}\n\lambda_{eff}\sim \lambda e^{-\alpha}\sim d e^{-\alpha}\n\label{sevent}\n\end{equation}\nwhere as above, $d$ is the distance measured from the horizon in the Schwarzschild frame along a $t=const$ slice.\n\n\n\n\n\refstepcounter{section}\n\section*{\thesection \quad Proper time along infalling geodesic} \label{time}\n\n\n\nWe wish to ask how much proper time $\Delta \tau$ elapses along a geodesic between the time the infalling object is at a distance $d$ from $r=2M$ and the time it hits the black hole surface at $r=2M$. Since we are working near the horizon, we use the Rindler coordinates (\ref{sixt}). The Kruskal-type coordinates appropriate to a freely falling observer are given locally by taking the Minkowski coordinates related to $t_R, r_R$ by\n\begin{equation}\nt_M=r_R \sinh t_R, ~~~x_M=r_R \cosh t_R\n\end{equation}\n\n \begin{figure}[t]\n\begin{center}\n\includegraphics[scale=.65]{fz3.eps}\n\end{center}\n\caption{The Rindler coordinates near the horizon, and the corresponding Minkowski coordinates. The infalling geodesic starts at $r_R=d, t_R=0$ and ends at the Rindler horizon.}\n\label{fz3} \n\end{figure}\n\nFig.\;\ref{fz3} shows the geodesic that we follow. 
This geodesic is a straight line in the local Minkowski coordinates\n\begin{equation}\nt_M=\cosh\alpha ~\tau, ~~~x_M=-\sinh\alpha~\tau+d\n\end{equation}\nWe have taken the geodesic to start with $\tau=0$ at position $r_R=d$ and time $t_R=0$.\nHere $\alpha$ is a constant that gives the velocity of infall; note that it is the same $\alpha$ as the one that appears in (\ref{eqalpha}). The geodesic crosses the horizon $t_M=x_M$ at proper time $\tau_f$ with\n\begin{equation}\n\cosh\alpha ~\tau_f =-\sinh\alpha ~ \tau_f + d, ~~~\Rightarrow~~~\tau_f=d e^{-\alpha}\n\end{equation}\n Thus if an observer on an infalling trajectory tries to detect a quantum at distance $d$ from the horizon, then the time he has available to make the detection is\n\begin{equation}\n\Delta\tau_{available}< \tau_f=d e^{-\alpha}\n\end{equation}\n\n\end{appendix}\n\n\n\n\newpage\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzcgqp b/data_all_eng_slimpj/shuffled/split2/finalzzcgqp new file mode 100644 index 0000000000000000000000000000000000000000..f2c2c611f4527f36f864dfc48238444e517876ea --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzcgqp @@ -0,0 +1,5 @@ +{"text":"\section{Introduction}\n\nHigher-spin (HS) gauge theory describes interacting systems of massless fields of all spins\n(for reviews see e.g. \cite{Vasiliev:Golfandmem,Review4}).\n Effects of HS gauge theories are anticipated to play a role at ultra-high energies of\nthe Planck scale \cite{Vasiliev:2016xui}.\nTheories of this class appear in various contexts from holography \cite{Klebanov:2002ja} to cosmology\n\cite{Barv}. 
HS theory\ndiffers from usual local field theories because it contains\nan infinite tower of gauge fields of all spins and the number of space-time derivatives increases with the spins\nof fields in the vertex \cite{Bengtsson:1983pd,Berends:1984wp,Fradkin:1987ks,Fradkin:1991iy}.\nHowever, one may ask for spin-locality \cite{Vasiliev:2016xui,Gelfond:2017wrh,4a1,4a2}, which implies\nspace-time locality in the lowest orders of perturbation theory \cite{4a1}. Even though details of the precise relation between spin-locality and space-time locality in higher orders of perturbation theory have not yet been elaborated, from the form of the equations it is clear that the spin-locality constraint provides one of the best tools to minimize the space-time non-locality. Moreover, demanding spin-locality one actually fixes the functional space of possible field redefinitions, which is highly important for\nthe predictability of the theory.\n\n\nA useful description of HS dynamics is provided by the\ngenerating Vasiliev system of HS equations \cite{more}. The latter {contains a free complex parameter\n $\eta$. Solving the generating system order by order one obtains vertices\nproportional to various powers of $\eta$ and $\bar{\eta}$. In the recent paper \cite{4a3},\n$\eta^2$ and $\bar{\eta}^2$ vertices were obtained in the sector of equations for zero-form fields,\ncontaining, in particular, a part of the $\phi^4$ vertex for the scalar field $\phi$\n in the theory. Though being seemingly $Z$-dependent, in \cite{4a3} these vertices were written\n in the $Z$-dominated form which implies their spin-locality by virtue of the $Z$-dominance\n Lemma of \cite{2a1}. 
In this paper we obtain an explicit $Z$-independent spin-local form for\n the vertex $\Upsilon^{\eta\eta}_{\go CCC}$ starting from the $Z$-dominated expression of \cite{4a3}.\n The label $\go CCC$ refers to the $\go CCC$-ordered part of the vertex,\n where $\go$ and $C$ denote gauge one-form and field strength zero-form HS fields valued\n in an arbitrary associative algebra, in which case the order of the product factors in $\go CCC$ matters.\n}\n\n\n\n\n\n\nThere are several ways to study the issue of (non)locality in HS gauge theory. One is reconstruction of\nthe vertices from the boundary by the holographic prescription based on the Klebanov-Polyakov\n conjecture \cite{Klebanov:2002ja} (see also\n\cite{Sezgin:2002rt}, \cite{SS}). Alternatively,\none can analyze vertices directly in the bulk starting from the generating equations\nof \cite{more}. The latter approach, developed in \cite{4a1,4a2,4a3,2a1,2a2},\nis free from any holographic duality assumptions but demands a careful choice of the\nhomotopy scheme determining the field variables compatible with spin-locality of the vertices.\nThe issue of (non)locality of HS gauge theories was also considered in\n\cite{Fotopoulos:2010ay} and \cite{David:2020ptn} with somewhat opposite conclusions.\n\n\nFrom the holographic point of view the vertex that contains $\phi^4$ was argued to be essentially\nnon-local \cite{Sleight:2017pcz}, or at least to have non-locality of the very specific form presented\nin \cite{Ponomarev:2017qab}. 
On the other hand, the holomorphic, \\ie\n$\\eta^2$ and antiholomorphic $\\bar\\eta^2$ vertices, where $\\eta$ is a complex parameter in\nthe HS equations, were recently obtained in \\cite{4a3} where they were\nshown to be spin-local by virtue of $Z$-dominance lemma of \\cite{2a1}.\nThe computation was done directly in the bulk starting from the non-linear HS system of \\cite{more}.\n\n\nIn this formalism HS fields are described by one-forms\n$\\omega (Y;K|x) $ and zero-forms $C(Y;K|x)$ where $x$ are space-time coordinates while\n$Y_A=(y_\\ga,\\by_{\\dot \\ga})$ are auxiliary spinor variables.\nBoth dotted and undotted indices are two-component, $\\ga, {\\dot{\\ga}=1,2}$, while $K=(k,\\bar k)$ are outer\nKlein operators satisfying $k*k= \\bar{k}* \\bar{k}=1$\\,,\n \\bee\\label{Klein}&&\n\\lbrace k,y^\\ga\\rbrace_\\ast=\\lbrace k, z^\\ga \\rbrace_\\ast=\n\\lbrace \\bar{k},\\bar{y}^{\\dot{\\ga}}\\rbrace_\\ast=\\lbrace \\bar{k},\\bar{z}^{\\dot{\\ga}}\n\\rbrace_\\ast=\\lbrace k,\\theta^\\ga\\rbrace_\\ast=\\lbrace \\bar{k},\\bar{\\theta}^{\\dot{\\ga}}\\rbrace_\\ast=0,\n\\\\\\nn&&[k,\\bar{y}^{\\dot{\\ga}}]_\\ast=[ k, \\bar{z}^{\\dot{\\ga}}]_\\ast=\n[\\bar{k},y^\\ga]_\\ast=[ \\bar{k},z^\\ga]_\\ast=[k,\\bar{\\theta}^{\\dot{\\ga}}]_\\ast=[ \\bar{k},\\theta^\\ga]_\\ast=0\\,,\n\\eee\nwhere $\\theta $ and $\\bar \\theta $ are anticommuting spinors in the theory.\n\nSchematically, non-linear HS equations in the unfolded form read as\n\\begin{equation}\\label{oneform}\n\\dr_x \\go + \\go \\ast \\go=\\Upsilon(\\go,\\go,C)+\\Upsilon(\\go,\\go,C,C)+\\ldots,\n\\end{equation}\n\\begin{equation}\\label{zeroform}\n\\dr_x C+\\go \\ast C-C\\ast \\go=\\Upsilon(\\go,C,C)+\\Upsilon(\\go,C,C,C)+\\ldots.\n\\end{equation}\n\n\n\n\n\nAs recalled in Section \\ref{HSeq}, generating equations of \\cite{more} that reproduce the form of\nequations (\\ref{oneform}) and (\\ref{zeroform}) have a simple form as a result of\ndoubling of spinor variables,\nnamely $$\n\\go(Y;K|x)\\longrightarrow 
W(Z;Y;K|x)\,,\qquad\nC(Y;K|x) \longrightarrow B(Z;Y;K|x). $$\nEquations\n\eq{oneform} and \eq{zeroform} result from the generating equations of \cite{more} upon\norder-by-order reconstruction of $Z$-dependence (for more detail see Section \ref{HSeq}).\nThe final form of equations (\ref{oneform}) and (\ref{zeroform}) turns out\nto be $Z$-independent as a consequence of consistency of the equations of \cite{more}. This fact may not be\nmanifest, however, since the \rhss of HS equations\nusually have the form of a sum of $Z$-dependent terms.\n\nHS equations have the remarkable property \cite{Vasiliev:1988sa} that they remain\nconsistent with the fields $W$ and $B$ valued in any associative algebra. For instance,\n$W$ and $B$ can belong to the matrix algebra $Mat_n $ with any $n$. Since in that\ncase the components of $W$ and $B$ do not commute, different orderings of the fields\nshould be considered independently.\n(Mathematically, HS equations with this property correspond to the $A_\infty $ strong homotopy\n algebra introduced by Stasheff in \cite{stash1},\cite{stash2},\cite{stash3}.)\nFor instance, holomorphic (\ie $\bar\eta$-independent) vertices in the zero-form sector can be represented in the form\n\be \label{FieldOrdering}\n\Upsilon^{\eta }(\go,C,C )=\Upsilon^{ \eta}_{\go CC }+\Upsilon^{ \eta}_{C\go C }+\Upsilon^{ \eta}_{CC\go}\n\,,\quad\n\Upsilon^{\eta\eta}(\go,C,C,C)=\Upsilon^{\eta\eta}_{\go CCC}+\Upsilon^{\eta\eta}_{C\go CC}\n+\Upsilon^{\eta\eta}_{CC\go C}+\Upsilon^{\eta\eta}_{CCC\go }\,,\,\,\ldots\n\ee\nwhere the subscripts of the vertices $\Upsilon$ refer to the ordering of the product\nfactors.\n\n\nThe vertices obtained in \cite{4a3} were shown to be\nspin-local due to the $Z$-dominance Lemma of \cite{2a1} that identifies terms that\nmust drop from the \rhss of HS equations together with the $Z$-dependence.\nRecall that spin-locality implies that the vertices are local in terms of spinor variables 
for\nany finite subset of fields of different spins \cite{2a2} (for more detail on the notion of spin-locality see \cite{2a2}).\n Analogous vertices in the one-form sector have been shown to be spin-local earlier in \cite{4a2}.\n\n The main achievement of \cite{4a3} consists of finding such a solution of the generating\n system in the third order in $C$ that all spin-nonlocal terms containing infinite towers\n of derivatives in $y(\bar y)$ between $C$-fields in the (anti)holomorphic\n in $\eta(\bar \eta)$ sector do not contribute to\n $\eta^2$ ($\bar \eta^2$) vertices by virtue of the\n $Z$-dominance Lemma. Thus \cite{4a3} gives spin-local expressions for the vertices\n $\Upsilon^{\eta\eta}(\go,C,C,C)$ which, however, have the form\n of a sum of $Z$-dependent terms. To make spin-locality\n manifest one must remove the seeming $Z$-dependence from the vertex of \cite{4a3}.\n Technically, this can be done with the help of partial integration and the Schouten identity.\n The aim of this paper is to show how this works in practice.\n\n\nSince the straightforward derivation presented in this paper\nis technically involved we confine ourselves to the\nparticular vertex $\Upsilon^{\eta\eta}_{\go CCC}$ \eq{FieldOrdering}.\n{ The complexity of the calculations in this paper reflects the complexity of the\nobtained vertex, which has no analogues in the literature. Indeed, this is an explicitly calculated\nspin-local vertex\nof the third order in the equations, corresponding to the vertices\nof the fourth (and, in part, fifth) order for the fields of all spins.\nThe example described in the paper explains the formalism applicable to all other orderings\nof the fields in the vertex, which are also computable. 
Our results are thus mainly important from the general\n point of view,\n highlighting a way for the computation of higher vertices in HS theory that may\n be important from various perspectives and, in the first place, for the analysis of HS holography.\nIt should be stressed that the results of \cite{4a3} provided a sort of existence theorem for a spin-local\nvertex that was difficult to extract without developing specific tools like those developed in\nthis paper.\nIn particular, it is illustrated how general statements\nlike the $Z$-dominance Lemma work in practical computations. Let us stress that\nat the moment this is the only available approach allowing one to compute the explicit form\nof the spin-local vertices for all spins at higher orders.}\n\n\nThe rest of the paper is organized as follows. In Section \ref{HSeq}, the necessary background on HS equations\nis presented with a brief recollection of the procedure of derivation of vertices from the generating system.\n Section \ref{SectionHplus}\nreviews the notion of the $\Hp$ space as well as the justification for a computation modulo $\Hp$.\nIn Section \ref{Schema}, we present a step-by-step scheme of the computations performed in this paper.\n Section \ref{Main} contains the final\nmanifestly spin-local expression for the vertex $\Upsilon^{\eta\eta}_{\go CCC}$.\nIn Sections \ref{zlinear}\,, \ref{SecGTid}\,, \ref{uniform}\,, \ref{Eli0} and \ref{proof}\ntechnical details of the steps\nsketched in Section \ref{Schema} are presented. In particular, in Section \ref{SecGTid}\nwe introduce the important {\it Generalised Triangle identity} which allows us to uniformize expressions\nfrom \cite{4a3}.\n The Conclusion contains a discussion of the obtained results. 
Appendices A, B, C and D contain\n technical detail on the steps listed in\nthe scheme of computation.\n Some useful formulas are collected in Appendix E.\n\n\\section{ Higher Spin equations}\n\\label{HSeq}\n\\subsection{Generating equations}\n\nSpin-$s$ HS fields are encoded in two generating functions, namely, the space-time one-form\n\\begin{equation}\n\\omega(y,\\bar{y},x)=\\dr x^\\mu \\go_\\mu(y,\\bar{y},x)=\\sum_{n,m} \\dr x^{\\mu} \\go_{\\mu} {}_{\\ga_1 \\ldots \\ga_n,\n \\dot{\\ga}_1 \\ldots \\dot{\\ga}_m}(x) y^{\\ga_1} \\ldots y^{\\ga_n} \\bar{y}^{\\dot{\\ga}_1}\n \\ldots \\bar{y}^{\\dot{\\ga}_m}\n\\q s=\\ff{2+m+n}{2} \\end{equation} and zero-form\n\\begin{equation}\nC(y,\\bar{y},x)=\\sum_{n,m} C_{\\ga_1 \\ldots \\ga_n,\n\\dot{\\ga}_1 \\ldots \\dot{\\ga}_m}(x) y^{\\ga_1} \\ldots y^{\\ga_n}\n\\bar{y}^{\\dot{\\ga}_1} \\ldots \\bar{y}^{\\dot{\\ga}_m}\n\\q s=\\ff{|m-n|}{2}.\\end{equation}\nwhere $\\ga=1,2$ and $\\dot{\\ga}=1,2$ are two-component spinor indices.\n Auxiliary commuting variables $y^\\ga$ and $\\bar{y}^{\\dot \\ga}$\n can be combined into an $\\mathfrak{sp}(4)$ spinor $Y^A=(y^\\ga,\\bar{y}^{\\dot{\\ga}})$, $A=1,..., 4$.\n\nThe vertices $\\Upsilon(\\go,\\go,C,C,\\ldots)$ \\eqref{oneform} and $\\Upsilon(\\go,C,C,\\ldots)$\n\\eqref{zeroform} result from\n the generating system of \\cite{more}\n\\begin{equation}\\label{HS1}\n\\dr_x W+W\\ast W=0,\n\\end{equation}\n\\begin{equation}\\label{HS2}\n\\dr_x S+W\\ast S+S\\ast W=0,\n\\end{equation}\n\\begin{equation}\\label{HS3}\n\\dr_x B+W\\ast B- B\\ast W=0,\n\\end{equation}\n\\begin{equation}\\label{HS4}\nS\\ast S=i(\\theta^A \\theta_A+\\eta B\\ast \\gga+\\bar{\\eta} B\\ast \\bar{\\gga}),\n\\end{equation}\n\\begin{equation}\\label{HS5}\nS\\ast B-B\\ast S=0.\n\\end{equation}\nApart from space-time coordinates $x$, the fields $W(Z;Y;K|x)$, $S(Z;Y;K|x)$ and $B(Z;Y;K|x)$\n depend on $Y^A$, $Z^A=(z^\\ga,\\bar{z}^{\\dot{\\ga}})$ and Klein operators $K=(k,\\bar{k})$\n\\eq{Klein}. 
$W$ is a space-time one-form, \ie $W= dx^\nu W_\nu$\nwhile the $S$-field is a one-form in the $Z$ spinor directions\n$\theta^A=(\theta^\ga,\bar{\theta}^{\dot{\ga}})$,\quad $\lbrace\theta^A, \theta^B\rbrace=0$, \ie\n\begin{equation}\nS(Z;Y;K)=\theta^A S_A(Z;Y;K).\n\end{equation}\n$B$ is a zero-form.\n\nThe star product is defined as follows:\n\begin{equation}\label{StarZY}\n(f\ast g)(Z;Y;K)=\frac{1}{(2\pi)^4}\int d^4 U \, d^4 V e^{iU_A V^A}f(Z+U,Y+U;K)g(Z-V,Y+V;K).\n\end{equation}\n The elements\n \begin{equation}\n\gga=\theta^\ga \theta_\ga e^{iz_\ga y^\ga}k\mbox{\qquad and\qquad}\n\bar{\gga}=\bar{\theta}^{\dot{\ga}}\bar{\theta}_{\dot{\ga}}\ne^{i\bar{z}_{\dot{\ga}}\bar{y}^{\dot{\ga}}}\bar{k}\n\end{equation}\nare central because $\theta^3=0$ since $\theta_\ga$ is a two-component anticommuting spinor.\n\subsection{Perturbation theory}\nStarting with a particular solution of the form\n\begin{equation}\label{solution}\nB_0(Z;Y;K)=0\q S_0(Z;Y;K)=\theta^\ga z_\ga+\bar{\theta}^{\dot{\ga}}\bar{z}_{\dot{\ga}}\q\n W_0(Z;Y;K)=\omega(Y;K)\,,\n\end{equation}\nwhich indeed solves \eqref{HS1}-\eqref{HS5} provided that $\go(Y;K)$ satisfies the zero-curvature condition\n\be\n\dr \go +\go*\go=0\,,\n\ee\n one develops perturbation theory. Starting from \eqref{HS5} one finds\n\begin{equation}\label{1order}\n[S_0,B_1]_*=0.\n\end{equation}\nFrom \eqref{StarZY} one deduces that\n\begin{equation}\n[Z_A,f(Z;Y;K)]_\ast=-2i\frac{\p}{\p Z^A} f(Z;Y;K).\n\end{equation}\nHence, equation (\ref{1order}) yields\n\begin{equation}\n[S_0,B_1]_*=-2i \theta^A \frac{\p}{\p Z^A}B_1=-2i \dr_Z B_1=0 \; \Longrightarrow\; B_1(Z;Y;K)=C(Y;K).\n\end{equation}\nThe $Z$-independent $C$-field that appears as the first-order part of $B$ is the same\n that enters equations \eqref{oneform}, \eqref{zeroform}. 
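As a schematic consistency check (ours; the overall sign depends on the two-component index conventions), the vacuum solution \eqref{solution} indeed solves \eqref{HS4} at lowest order, where $B_0=0$: writing $S_0=\theta^A Z_A$ and using $[Z_A,f]_\ast=-2i\frac{\p}{\p Z^A}f$, the star product of linear functions is $Z_A\ast Z_B=Z_A Z_B+\frac{1}{2}[Z_A,Z_B]_\ast$, and the symmetric part drops against the antisymmetric $\theta^A\theta^B$, so that\n\begin{equation}\nS_0\ast S_0=\theta^A\theta^B\, Z_A\ast Z_B=\frac{1}{2}\,\theta^A\theta^B\,[Z_A,Z_B]_\ast=i\,\theta^A\theta_A\,,\n\end{equation}\nwhich reproduces the \rhs of \eqref{HS4} at $B=0$.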
The perturbative procedure can be\n continued further leading to the equations of the form\n\begin{equation}\n\dr_Z \Phi_{k+1}=J(\Phi_k, \Phi_{k-1},\ldots)\,,\n\end{equation}\nwhere $\Phi_k$ is either the $W$, $S$ or $B$ field of the $k$-th order of perturbation theory,\n identified with the degree in the $C$-field of the corresponding expression, \ie\n\bee\nn&&\nW=\go+W_1(\go,C)+W_2(\go,C,C)+\ldots\q S=S_0+S_1(C)+S_2(C,C)+\ldots,\n\\&&\nn B=C+B_2(C,C)+B_3(C,C,C)+\ldots.\n\eee\nTo obtain the dynamical equations \eqref{oneform}, \eqref{zeroform} one should plug the obtained\n solutions into equations \eqref{HS1} and \eqref{HS3}. For instance,\n \eqref{HS3} up to the third order in the $C$-field is\n\begin{equation}\label{B3EQ}\n\dr_x C+[\go,C]_\ast=-\dr_x B_2-[W_1,C]_\ast-\dr_x B_3-[W_1,B_2]_\ast-[W_2,C]_\ast+\ldots\n\end{equation}\nThough the fields $W_1$, $W_2$ and $B_2$, $B_3$ and hence various terms that enter (\ref{B3EQ})\nare $Z$-dependent, equations \eqref{HS1}-\eqref{HS5} are designed in such a way that, as a consequence\nof their consistency, the sum of the terms on the \rhs\nof (\ref{B3EQ}) is $Z$-independent. 
To see this it suffices to apply $\dr_Z$\nrealized as $\ff{i}{2} [S_0\,,\quad ]_*$ to the \rhs\nof (\ref{B3EQ}) and make sure that it gives zero by virtue of the already solved equations.\nFor more detail we refer the reader to the review \cite{Review4}.\n\n\section{ Subspace $\Hp$ and $Z$-dominance lemma}\n\label{SectionHplus}\n\n\subsection{$\Hp$}\n\nIn this Section the definition of the space $\Hp$ \cite{4a3} that plays a\ncrucial role in our computation is recollected.\nA function $f(z,y\vert \theta)$ of the form\n\begin{equation}\n\label{class}\nf(z,y\vert \theta)=\int_0^1 d\mathcal{T}\, e^{i\mathcal{T}z_\ga y^\ga}\phi\n\left(\mathcal{T}z,y\vert \mathcal{T} \theta,\mathcal{T}\right)\,\n\end{equation}\n belongs to the space $\Hp$ if there exists\n a real $\varepsilon>0$ such that\n\begin{equation}\label{limit}\n\lim_{\mathcal{T}\rightarrow 0}\mathcal{T}^{1-\varepsilon}\phi(w,u\vert\n\theta,\mathcal{T})=0\,.\n\end{equation}\nNote that this definition does not demand\nany specific behaviour of $\phi$ at $\mathcal{T}\to1$ as was the case for the\n space $\Sp^{+0}$ of \cite{2a2}.\n\n\n In the sequel we use two main types of functions that obey \eqref{limit}:\n\begin{equation}\label{kernels}\n\phi_1(\mathcal{T}z,y\vert \mathcal{T} \theta,\n\mathcal{T})=\frac{\mathcal{T}^{\delta_1}}{\mathcal{T}}\widetilde{\phi}_1(\mathcal{T}z,y\vert\n\mathcal{T} \theta)\q \phi_2(\mathcal{T} z,y\vert\n\mathcal{T}\theta,\n\mathcal{T})=\vartheta(\mathcal{T}-\delta_2)\frac{1}{\mathcal{T}}\widetilde{\phi}_2(\mathcal{T}z,y\vert\n\mathcal{T} \theta)\n\end{equation}\nwith some $\delta_{1,2}>0$. (Note that the second option with $\delta_2>0$ can be\ninterpreted as the first one with arbitrarily large $\delta_1$. 
Here the step function is denoted by $\vartheta$\nto distinguish it from the anticommuting variables $\theta$.)\nIndeed, $\phi_1$ satisfies \eqref{limit} for any $0<\varepsilon<\delta_1$ since $\mathcal{T}^{1-\varepsilon}\phi_1\sim \mathcal{T}^{\delta_1-\varepsilon}\rightarrow 0$, while for $\phi_2$ the factor $\vartheta(\mathcal{T}-\delta_2)$ vanishes identically in a neighbourhood of $\mathcal{T}=0$.\n\nThe space $\Hp$ can be represented as the direct sum\n\begin{equation}\n\Hp=\Hp_0 \oplus \Hp_1 \oplus \Hp_2\,,\n\end{equation}\nwhere $\phi(w,u\vert\theta,\mathcal{T})\in\Hp_p$ are degree-$p$ forms in $\theta$ satisfying \eqref{limit}.\n\n\nAll terms from $\Hp$ on the \rhs of HS field equations must vanish by the $Z$-dominance Lemma \cite{2a1}.\nFollowing \cite{4a3} this can be understood as follows. All the expressions\nfrom \eqref{B3EQ} have the form \eqref{class} and the only way to obtain a $Z$-independent non-vanishing\n expression is to bring the hidden $\T$ dependence in $\phi(\T z,y\vert \T \theta , {\T})$\n to $\delta(\T)$. If a function contains an additional factor of\n$\mathcal{T}^\gvep$ or is isolated from $\T=0$, it cannot contribute to the $Z$-independent\nanswer,\n which is the content of the $Z$-dominance Lemma \cite{2a1}.\nThis just means that functions of the class $\Hp_0$ cannot\ncontribute to the $Z$-independent equations \eqref{zeroform}.\nApplication of this fact to locality is straightforward once it is\nshown that all terms containing\ninfinite towers of higher derivatives in the vertices of interest\nbelong to $\Hp_0$ and, therefore, do not contribute to HS\nequations. 
This is what was in particular shown in \cite{4a3}.\n\n\n\n\n\n\n\subsection{Notation}\nAs in \cite{4a3} we use the \textit{exponential} form for all the expressions below, where by $\go CCC$ we mean\n\begin{equation}\n\omega(\mathsf{y}_\go,\bar{y})\bar{\ast}C(\mathsf{y}_1,\bar{y})\bar{\ast}C(\mathsf{y}_2,\bar{y})\bar{\ast}C(\mathsf{y}_3,\bar{y})\n\end{equation}\nwith $\bar{\ast}$ denoting the star product with respect to $\bar{y}$.\nDerivatives $\p_\go$ and $\p_j$ act on auxiliary variables as follows\n\begin{equation}\n\p_{\go\ga}=\frac{\p}{\p \mathsf{y}_\go^\ga}\q \p_{j\ga}=\frac{\p}{\p \mathsf{y}_j^\ga}.\n\end{equation}\nAfter all the derivatives in $\mathsf{y}_\go$ and $\mathsf{y}_j$ are evaluated the latter are set to zero, \ie\n\begin{equation}\n\mathsf{y}_\go=\mathsf{y}_j=0.\n\end{equation}\nIn this paper we use the following notation of \cite{4a3}:\n\begin{equation}\nt_\ga:=-i\p_{\go\ga},\;\; p_{j\ga}:=-i\p_{j\ga}\q\n\end{equation}\nIn particular, $e^{iy^\ga p_{1\ga}}=e^{y^\ga \p_{1\ga}}$ acts as a translation operator, so that $e^{iy^\ga p_{1\ga}}C(\mathsf{y}_1,\bar{y})\big|_{\mathsf{y}_1=0}=C(y,\bar{y})$.\n \be\label{ro+}\n\int d^n \rho_+ :=\int d\rho_1 \ldots d\rho_n\, \vartheta(\rho_1)\ldots \vartheta(\rho_n)\,.\n\end{equation}\n\n\subsection{Contribution to ${\Upsilon}^{\eta\eta} _{\go CCC}$ modulo $\Hp$}\n\nThe $\eta^2C^3$ vertex in the equations on the\nzero-forms $C$ resulting from the equations of \cite{more}\nis \begin{equation}\label{rightside} \Upsilon^{\eta\eta}(\go,C,C,C) =-\left(\dr_x B^{ \eta \eta }_3 + [\go, B^{\eta\eta }_3]_* + [ {W}^\eta_1, B^{\eta }_2]_*\n +[ {W}^{ {\eta}\eta}_2, C]_*\n+\dr_x B^{ \eta }_2\,\right). \end{equation}\n Recall that, being $Z$-independent, ${\Upsilon}^{\eta\eta} $ is a sum of $Z$-dependent terms,\nwhich makes its $Z$-independence implicit.\n\nAs explained in the Introduction, ${\Upsilon}^{\eta\eta}$ can be decomposed into parts\nwith different orderings of the fields $\go$ and $C$. 
In this paper we consider\n \be\label{projwccc}{\Upsilon}^{\eta\eta}_{\go C C C} := \Upsilon^{\eta\eta}(\go,C,C,C)\Big|_{\go CCC}\,.\n \ee\n Since the terms from $\Hp$ do not contribute to the physical\nvertex, such terms can be discarded. Following \cite{4a3}, equality up to terms from $\Hp$,\nreferred to as weak equality,\nis denoted by $\approx$.\n\n We start with the following results of \cite{4a3}:\n \be\label{rightsideU}\n \widehat{\Upsilon}^{\eta\eta}_{\go CCC}\n\approx {\Upsilon}^{\eta\eta}_{\go C C C}=-\Big(W_{1\, \go C}^\eta \ast B_2^{\eta\, loc}+ {W}_{2\, \go CC}^{\eta\eta}\ast C+\n\dr_x B^{\eta\, loc}_2\big|_{\go CCC}+\n\omega\ast {B}_3^{\eta\eta}+\dr_x {B}_3^{\eta\eta}\big|_{\go CCC} \Big)\n \q\n\ee where\n \begin{multline}\label{origW1B2}\nW_{1\, \go C}^\eta \ast B_2^{\eta\, loc}\approx \frac{\eta^2}{4}\int_0^1 d\mathcal{T} \T \int_0^1 d\gs\n \int d^3\rho_+ \delta\left(1-\sum_{i=1}^3 \rho_i\right) \frac{\left(z_\gga t^{ \gga}\right)\big[z_\ga y^\ga+\gs z_\ga t^{\ga}\big]}{(\rho_1+\rho_2)}\n \times\\\n\times \exp\Big\{i\mathcal{T} z_\ga y^\ga+i(1-\gs )t^{\ga}p_{1\ga}\n-i\frac{\rho_1\gs }{\rho_1+\rho_2} t^{\ga}p_{2\ga}\n+i\frac{\rho_2\gs }{\rho_1+\rho_2} t^{\ga}p_{3\ga} \\\n+i\mathcal{T}z^\ga\Big(-(\rho_1+\rho_2+\gs \rho_3)t_{\ga}-(\rho_1+\rho_2)p_{1 \ga}\n+(\rho_3-\rho_1)p_{2 \ga}+(\rho_3+\rho_2)p_{3\ga}\Big) \\\n+iy^\ga\Big(\gs t_{\ga}-\frac{\rho_1}{\rho_1+\rho_2}p_{2 \ga}\n+\frac{\rho_2}{\rho_1+\rho_2}p_{3\ga}\Big)\Big\}\go CCC\,,\n\end{multline}\n\n\begin{multline}\label{origW2C}\n {W}_{2\, \go CC}^{\eta\eta}\ast C\approx-\frac{\eta^2}{4}\int_0^1 d\mathcal{T}\,\T\n \int d^4\rho_+\, \delta\left(1-\sum_{i=1}^4 \rho_i\right)\n \frac{\rho_1 \left(z_\gga t^{\gga}\right)^2}{(\rho_1+\rho_2)(\rho_3+\rho_4)}\times\\\n\times \exp\Big\{i\mathcal{T}z_\ga 
y^\\ga+i\\mathcal{T}z^\\ga\\Big((1-\\rho_2)t_{\\ga}\n-(\\rho_3+\\rho_4)p_{1\\ga}+(\\rho_1+\\rho_2)p_{2 \\ga}+p_{3 \\ga}\\Big)+i y^\\ga t_{\\ga} \\\\\n+\\frac{\\rho_1\\rho_3}{(\\rho_1+\\rho_2)(\\rho_3+\\rho_4)}\\left(i y^\\ga t_{ \\ga}\n+it^{ \\ga}p_{3\\ga}\\right)+i\\left(\\frac{(1-\\rho_4)\\rho_2}{\\rho_1+\\rho_2}\n+\\rho_4\\right)t^{\\ga}p_{1\\ga}-i\\frac{\\rho_4\\rho_1}{\\rho_3+\\rho_4}t^\\ga p_{2\\ga}\\Big\\} \\go CC C,\n\\end{multline}\n\n\\begin{multline}\\label{FFFFFFFFk}\n\\dr_x B^{\\eta\\, loc}_2\\big|_{\\go CCC}\\approx \\frac{\\eta^2}{4}\\int_0^1 d\\mathcal{T}\n\\int_0^1 d\\xi\\int d^3\\rho_+\\,\n \\delta\\left(1-\\sum_{i=1}^3\\rho_i\\right)\\left(z_\\ga y^\\ga\\right)\\Big[\\left(\\mathcal{T}z^\\ga\n -\\xi y^\\ga\\right)t_{ \\ga}\\Big]\\times\\\\\n\\times \\exp\\Big\\{i\\mathcal{T}z_\\ga y^\\ga+i(1-\\rho_2)t^\\ga p_{1\\ga}\n-i\\rho_2 t^\\ga p_{2\\ga} +i\\mathcal{T}z^\\ga\\Big(-(\\rho_1+\\rho_2)t_{\\ga}\n-\\rho_1 p_{1 \\ga}+(\\rho_2+\\rho_3)p_{2 \\ga}+p_{3\\ga}\\Big) \\\\\n+iy^\\ga\\Big(\\xi(\\rho_1+\\rho_2)t_{\\ga}+\\xi\\rho_1 p_{1 \\ga}\n-\\xi(\\rho_2+\\rho_3)p_{2 \\ga}+(1-\\xi)p_{3\\ga}\\Big) \\Big\\}\\go CCC\\,,\n\\end{multline}\n\n\\begin{multline}\\label{wB3modH+}\n\\omega\\ast {B}_3^{\\eta\\eta}\\approx-\\frac{\\eta^2}{4} \\int_0^1 d\\mathcal{T}\\, \\mathcal{T}\n \\int d^3 \\rho_+ \\delta\\left(1-\\sum_{i=1}^3 \\rho_i\\right) \\int_0^1 d\\xi\\, \\frac{\\rho_1\\,\n\\left[z_\\ga\\left(y^\\ga+t^\\ga\\right)\\right]^2 }{(\\rho_1+\\rho_2)(\\rho_1+\\rho_3)}\\times\\\\\n\\times\\exp\\Big\\{i\\mathcal{T}z_\\ga y^\\ga\n+i\\mathcal{T} z^\\ga\\Big(-t_{ \\ga}-(\\rho_1+\\rho_3)p_{1\\ga}+(\\rho_2-\\rho_3)p_{2\\ga}\n+(\\rho_1+\\rho_2)p_{3\\ga}\\Big)+iy^\\ga t_{\\ga}\\\\\n+i(1-\\xi)y^\\ga\\left(\\frac{\\rho_1}{\\rho_1+\\rho_2}p_{1\\ga}\n-\\frac{\\rho_2}{\\rho_1+\\rho_2}p_{2\\ga}\\right)\n+i\\xi\\, y^\\ga\\left(\\frac{\\rho_1}{\\rho_1+\\rho_3}p_{3\\ga}-\\frac{\\rho_3}{\\rho_1+\\rho_3}p_{2\\ga}\\right) 
\\\\\n+i\\frac{(1-\\xi)\\rho_1}{\\rho_1+\\rho_2}t^{\\ga}p_{1\\ga}\n-i\\left(\\frac{(1-\\xi)\\rho_2}{\\rho_1+\\rho_2}+\\frac{\\xi\\rho_3}{\\rho_1+\\rho_3}\\right) t^{\\ga}p_{2\\ga}\n+i\\frac{\\xi\\rho_1}{\\rho_1+\\rho_3}t^\\ga p_{3\\ga}\\Big\\} \\go CCC,\n\\end{multline}\n\n\n\\begin{multline}\\label{kuku5}\n\\dr_x {B}_3^{\\eta\\eta}\\big|_{\\go\nCCC}\\approx \\frac{\\eta^2}{4} \\int_0^1 d\\mathcal{T}\\, \\mathcal{T}\n\\int d^3 \\rho_+ \\delta\\left(1-\\sum_{i=1}^3 \\rho_i\\right) \\int_0^1\nd\\xi\\, \\frac{\\rho_1\\, (z_\\ga y^\\ga)^2\n }{(\\rho_1+\\rho_2)(\\rho_1+\\rho_3)}\\times\\\\\n\\times\\exp\\Big\\{i\\mathcal{T}z_\\ga y^\\ga+i\\mathcal{T} z^\\ga\n\\Big(-(\\rho_1+\\rho_3)(t_{\\ga}+p_{1\\ga})+(\\rho_2-\\rho_3)p_{2\\ga}\n+(\\rho_1+\\rho_2)p_{3\\ga}\\Big)+it^\\ga p_{1\\ga} \\\\\n+i(1-\\xi)y^\\ga\\left(\\frac{\\rho_1}{\\rho_1+\\rho_2}(t_{ \\ga}+p_{1\\ga})-\\frac{\\rho_2}{\\rho_1+\\rho_2}p_{2\\ga}\\right)+\\xi\\, y^\\ga\\left(\\frac{\\rho_1}{\\rho_1+\\rho_3}p_{3\\ga}-\\frac{\\rho_3}{\\rho_1+\\rho_3}p_{2\\ga}\\right)\\Big\\}\\go CCC.\n\\end{multline}\nThe sum of \\rhss of \\eq{origW1B2}-\\eq{kuku5} yields $\\widehat{\\Upsilon}^{\\eta\\eta}_{\\go CCC} (Z;Y) $.\n\n Note, that all terms on the \\rhss of \\eq{origW1B2}-\\eq{kuku5} contain no\n$p_j{}_\\ga p_i{}^\\ga$ contractions in the exponentials, hence being spin-local\n \\cite{4a3}. 
Thus $\\widehat{\\Upsilon}^{\\eta\\eta}_{\\go CCC} (Z;Y) $ is also spin-local.\n\n\nLet us emphasize that only the full expression for $\\Upsilon^{\\eta\\eta}_{\\go CCC}(Y) $ \\eq{projwccc}\nis $Z$-independent, while $\\widehat{\\Upsilon}^{\\eta\\eta}_{\\go CCC} (Z;Y) $ \\eqref{rightsideU}\nwith discarded terms in $\\Hp$ is not.\nThis does not allow one to find manifestly\n$Z$-independent expression for $ {\\Upsilon}^{\\eta\\eta}_{\\go CCC} $ by setting for instance\n $Z=0$ in Eqs.~\\eq{origW1B2}-\\eq{kuku5}.\n\n\n\n In this paper $Z$-dependence of $\\widehat{\\Upsilon}^{\\eta\\eta}_{\\go CCC}(Z;Y)$\nis eliminated modulo terms in $\\Hp$ by virtue of partial integration\n and the Schouten identity. As a result,\n $\n \\widehat{\\Upsilon}^{\\eta\\eta}_{\\go CCC}(Z;Y)\\approx \\widehat{\\widehat{\\Upsilon}} {\\,}^{\\eta\\eta}_{\\go CCC}(Y),$$\n where $\\widehat{\\widehat{\\Upsilon}} {\\,}^{\\eta\\eta}_{\\go CCC}(Y)$ is manifestly spin-local and $Z$-independent.\n Since $\\Hp_0$-terms do not contribute to the vertex by Z-dominance Lemma \\cite{2a1}\n\n $$\\Upsilon^{\\eta\\eta}_{\\go CCC}(Y)=\\widehat{\\widehat{\\Upsilon}} {\\,}^{\\eta\\eta}_{\\go CCC}(Y)\\,.$$\n\n\n Our goal is to find the manifest form of\n$\\widehat{\\widehat{\\Upsilon}} {\\,}^{\\eta\\eta}_{\\go CCC}(Y)$.\n\n \\section{Calculation scheme}\n\\label{Schema}\n\n\n\n\n\nThe calculation scheme is as follows.\n\n\\begin{itemize}\n\n\\item I. We start from the expression Eqs.~\\eq{origW1B2}-\\eq{kuku5} for the vertex obtained in \\cite{4a3}.\n\n\\bigskip\n\n\\item II. To $z$-linear pre-exponentials. 
\\\\\n Using partial integration and the Schouten identity\nwe transform Eqs.~\\eq{origW1B2}-\\eq{kuku5} to the form with $z$-linear pre-exponentials modulo\nweakly $Z$-independent\n (cohomology) terms.\nThese expressions are collected in Section \\ref{zlinear}, Eqs.~\\eq{RRwB3modH+}-\\eq{W2C3gr1}.\nThe respective cohomology terms being a part of the vertex\n$\\Upsilon^{\\eta\\eta}_{\\go CCC} $\nare presented in Section \\ref{Main}\\,.\n\n\\bigskip\n\\item\nIII. Uniformization.\\\\ We observe that the \\rhss of Eqs.~\\eq{RRwB3modH+}-\\eq{W2C3gr1}\ncan be re-written modulo cohomology and weakly zero terms in a form\nof integrals $\\int d\\Gamma$ over the same integration domain $\\II$\n\\begin{equation}\\label{comexp}\n \\int d\\Gamma \\, z_\\ga f^\\ga (y,t,p_1,p_2,p_3\\vert \\T,\\xi_i,\\rho_i)\\Ee\\, \\go CCC\\,,\n\\end{equation}\nwhere the integrand contains an overall exponential function $\\Ee$\n\\begin{equation}\n\\label{Ee}\\Ee= \\Ez E,\n\\end{equation}\n\\be\n\\label{expz}\n \\Ez:=\\exp i\\Big\\{\\T z_{\\ga}(y + \\Pz{})^{\\ga} \\Big\\}\n\\q\\ee\n \\bee\n \\label{Egx=}\n &&E:=\\exp i \\Big\\{\n - \\gx_2 \\ff{\\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3 )}\\,\\,\\big( y + \\Pz{}\\big)^\\ga y_{\\ga}\n\\\\ \\nn &&\n+ \\gx_1 \\ff{ \\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3)}\\big( y + \\Pz{}\\big)^\\ga\\tilde{t}{}_{\\ga}\n\\\\ \\nn &&+\\ff{ \\gr_3 }{(1-\\gr_1-\\gr_4 ) }\\,\\, ( p_3+p_2)^{\\ga}\n y_{\\ga}\n-\\ff{ \\gr_3 }{(1-\\gr_1-\\gr_4 )(1-\\gr_3 )}\\,\\, \\gr_1 {t}{}^{\\ga} y_{\\ga}\n \\\\ \\nn &&\n + \\ff{ \\gr_1 }{(1-\\gr_3)} (p_1 +p_2 )^{\\ga}{t}{}_{\\ga}\n + p_3{}_{\\ga} y^{\\ga} +p_1{}_\\ga {t}{}^\\ga\n\\Big\\}\n\\,, \\eee\n\\bee &&\\label{tildet}\\tilde{t}{}=\\ff{\\gr_1}{\\gr_1+\\gr_4}{t}{}\\q\n\\\\ &&\\label{Pz5}\n\\Pz{}= \\PP + (1-\\gr_4){t}{}\\,,\n\\\\ &&\\label{PP=}\n\\PP =( 1-\\gr_1-\\gr_4)(p{}_1 +p_2) - (1-\\gr_3) (p_3+p_2)\\,, \\eee\n the integral over $\\II$ is denoted as\n \\begin{equation}\\label{dGamma}\n \\int d\\Gamma=\\int_0^1 d\\T\\int d^3 \\xi_+\\, 
\\delta\\left(1-\\sum_{i=1}^3 \\xi_i\\right)\n \\int d^4 \\rho_+ \\, \\delta\\left(1-\\sum_{j=1}^4 \\rho_j\\right)\\,.\n \\end{equation}\n\n\n\nEqs.~\\eq{RRwB3modH+}-\\eq{W2C3gr1} transformed to the form \\eq{comexp}\nare collected in Section \\ref{uniform}, Eqs.~\\eqref{F1}-\\eqref{F4}.\n\n\n\n\\item IV. Elimination of $\\gd$-functions.\\\\\nUsing partial integration and the Schouten identity we eliminate\nthe all factors of $\\gd(\\gr_i)$,\n $\\gd(\\gx_{1 })$ and $\\gd(\\gx_{ 2})$ from Eqs.~\\eqref{F1}-\\eqref{F4}. The result is presented in Section \\ref{Eli0}, Eqs.~\\eqref{rightsideUUNI==}-\\eq{FRest3}.\n\n\\item V. Final step.\\\\ Finally, we show in Section \\ref{proof} that a sum of\nthe \\rhss of Eqs.~\\eqref{FRest1}-\\eqref{FRest3}\n\n is $Z$-independent\n up to $\\Hp$.\n\\end{itemize}\n\nBy collecting all resulting $Z$-independent terms we finally\n obtain the manifest expression\n for vertex $\\Upsilon^{\\eta\\eta}_{ \\go CCC}$, being a sum of expressions \\eq{go B3modHcoh}-\\eq{ERRGTC}.\n\n\n\\section{Main result $\\Upsilon^{\\eta\\eta}_{\\go CCC}$}\n\\label{Main}\n\nHere\nthe final manifestly $Z$-independent $\\go CCC$ contribution to the equations is presented.\n\nVertex $\\Upsilon^{\\eta\\eta}_{\\go CCC}$ is\n \\be\\label{upsrES}\n\\Upsilon^{\\eta\\eta}_{\\go CCC}=\\sum_{j=1}^{11} J_j\\,\n\\ee\nwith $J_i$ given in Eqs.~\\eq{go B3modHcoh}-\\eq{ERRGTC}.\nNote that the integration\nregions may differ for different terms $J_j$\nin the vertex, depending on their genesis.\n\n\n\nFirstly we note that $B^{\\eta\\eta}_3$ \\eqref{B3modH=1406}, that contains a $Z$-independent part, generates cohomologies both from $\\go*B^{\\eta\\eta}_3$ and from $\\dr_x B^{\\eta\\eta}_3$,\n\\begin{equation}\\label{go B3modHcoh}\nJ_1= - \\ff{ \\eta^2 }{4 } \\int d\\Gamma\\, \\delta(\\xi_3)\\ff{\\gr_2}{(\\gr_2+\\gr_1 )(\\gr_2+\\gr_3)} \\gd(\\gr_4) E\\, \\go CCC,\n\\end{equation}\n \\begin{equation}\\label{dx B3modHcoh}\nJ_2= \\ff{ \\eta^2 }{4 } \\int d\\Gamma\\, 
\\delta(\\xi_3)\\ff{\\gr_2}{(\\gr_2+\\gr_4 )\n(\\gr_2+\\gr_3)} \\gd(\\gr_1) E\\, \\go CCC\\,.\n\\end{equation}\nRecall that $E$ and $d\\Gamma$ are defined in \\eq{Egx=} and \\eq{dGamma}, respectively.\n(Note, that, here and below, the integrands on the \\rhss of expressions for $J_i$ are $\\T$-independent, hence the factor of $\\int_0^1 d\\T$ in $d\\Gamma$ equals one.)\n\n\n\nOther\ncohomology terms are collected from \\eqref{FRest1}, \\eqref{FRest2}, \\eqref{FRest3},\n\\eq{lostcohomo}, \\eq{lostcohomo2}, \\eqref{D4}, \\eqref{D6}, \\eqref{D7}\nand \\eqref{ERRGT}, respectively,\n\n\\begin{multline}\\label{Result2}\nJ_3= -\\ff{i \\eta^2 }{4 } \\int d\\Gamma\\, \\delta(\\xi_3) \\ff{1}{(\\gr_2 +\\gr_3)(1-\\gr_3) }\n\\Big\\{\\gr_2 {t}{}^\\ga (p_1+p_2 ){} _\\ga \\big[\\overrightarrow{\\p}_{\\gr_2}-\\overrightarrow{\\p}_{\\gr_3}\\big] \\\\\n + \\gr_2 ( p_1{}+ p_2)^{\\ga} ( p_3{}+p_2)_{\\ga}\n \\big[\\overrightarrow{\\p}_{\\gr_4}-\\overrightarrow{\\p}_{\\gr_1}\\big] +\\gr_2 {t}{}^\\ga ( p_3{} +p_2{} )_\\ga \\big[\\overrightarrow{\\p}_{\\gr_2}-\\overrightarrow{\\p}_{\\gr_1}\\big]+\\ff{ \\gr_1+\\gr_4}{ (1-\\gr_3) } {t}{}^\\ga (p_1+p_2 ){} _\\ga\n \\Big\\}E\\, \\go C C C\\,,\n\\end{multline}\n\n\n\\begin{multline}\\label{Result3}\nJ_4=\\frac{i\\eta^2}{4}\\int d\\Gamma\\, \\frac{\\delta(\\xi_3)}{1-\\rho_3}\\Big(\n \n - \\ff{ \\gr_3}{(1-\\gr_1-\\gr_4 )^2(1-\\gr_3)}\n {t}{}^\\gga y_\\gga\n - \\ff{ \\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3)}\n {t}{}^\\gga y_\\gga\n [-\\overrightarrow{\\p}_{\\gr_1}+\\overrightarrow{\\p}_{\\gr_2}] \\\\\n - \\ff{ \\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3)}\n ( p_1{}+ p_2)^{\\gga} (y+\\tilde{t}{}) _{\\gga}\n [ \\overrightarrow{\\p}_{\\gr_4}-\\overrightarrow{\\p}_{\\gr_1}]\n \\Big)E\\go C C C\\,,\n\\end{multline}\n\n\\begin{multline}\\label{Result4}\nJ_5=-i \\ff{ \\eta^2 }{4 } \\int d\\Gamma\\,\\delta(\\xi_3)\\Big[1+ \\gx_1(\\overrightarrow{\\p}_{\\gx_1}-\\overrightarrow{\\p}_{\\gx_2})\\Big]\n \\Big\\{\n \\ff{ -\\gr_2 }{(1-\\gr_1-\\gr_4 )^2(1-\\gr_3) ( 
\\gr_1+\\gr_4 ) }\n (p_3{}+p_2{})^\\gga {t}{}_{\\gga} \\\\\n- \\ff{ \\gr_3 }{(1-\\gr_1-\\gr_4 )^2(1-\\gr_3 )^2}\\,\\, {t}{}^{\\ga} y_{\\ga}\n + \\ff{ 1}{(\\gr_2 +\\gr_3)(1-\\gr_3) ( \\gr_1+\\gr_4 ) } (p_1 +p_2 )^{\\ga}{t}{}_{\\ga}\n \\Big\\} E\\, \\go C C C\\,,\n\\end{multline}\n\n\\begin{equation}\\label{Result5}\nJ_6=i\\ff{ \\eta^2 }{4 } \\int d\\Gamma\\,\\delta(\\xi_3) \\ff{ \\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3)^2(\\gr_1+\\gr_4)}\n ( p_1{}+ p_2)^{\\gga} ( {t}{}) _{\\gga}E \\go C C C\\,,\n\\end{equation}\n\\begin{multline}\\label{Result6}\nJ_7=- \\frac{\\eta^2}{4}\\int d\\Gamma\\, \\delta(\\xi_3)\\, \\xi_1\n\\ff{ \\gr_2\\gr_2}{(\\gr_2 +\\gr_3)^3(1-\\gr_3)^3( \\gr_1+\\gr_4 ) }\\times\\\\\n\\times \\big( y+ (1-\\gr_1-\\gr_4 )( p_1{} +p_2{} )+ (1 -\\gr_4 ){t}{} \\big)^\\gga\n\\big( y + \\tilde{t}{} \\big)_{\\gga}\n{t}{}^{\\ga} y_\\ga E \\, \\go CCC\\,,\n\\end{multline}\n\\begin{equation}\\label{Result1}\nJ_8=- \\ff{ \\eta^2 }{4 }\\int d\\Gamma \\, \\delta(\\rho_3)\n \\Big( \\gr_1\\gd(\\gx_3 )\n + \\Big[ i {\\gd(\\gr_4)} -( p_2{}_\\ga+ p_1{}_\\ga) {t}{}^{\\ga}\\Big]\n \\Big\\{ i \\gd(\\gx_3 )+\n \\tilde{t}{}^{\\gga} y_\\gga\\Big\\}\\Big)E\\, \\go CCC\\,,\n\\end{equation}\n\n\n\\begin{multline}\\label{goB3modH1406gr1C}\nJ_9= i\\eta^2\\chalf \\int d\\Gamma\\, \\delta(\\rho_1)\\delta(\\rho_4)\\delta(\\xi_3)\\exp \\Big\\{ -i\\gx_2 ( p_1+p_2+{t} - \\gr_2 (p_3+p_2))_{\\ga} (y )^{\\ga} \\\\\n-\\gx_1 ( y+ p_1+p_2 - \\gr_2 (p_3+p_2))_\\gga ( {t})^\\gga\n + ( 1-\\gr_2) (p_3+p_2) {}^\\gga y_\\gga\n + p_3{}_\\gga y^\\gga +{t}{}^\\gb p_1{}_\\gb \\Big\\}\\go CCC\\,,\n\\end{multline}\n\n\n\\begin{multline}\\label{dxB3modH1406gr1C}\nJ_{10}=-i\\eta^2\\chalf \\int d\\Gamma\\, \\delta(\\rho_4) \\gd(\\gx_1 )\\gd(\\gr_1)\n \\, \\exp i\\Big\\{-\\gx_2 ( y+ p_1+p_2+{t} - \\gr_2 (p_3+p_2))_{\\ga} (y )^{\\ga} \\\\\n+ ( 1-\\gr_2) (p_3+p_2) {}^\\gga y_\\gga +p_3{}_\\gga y^\\gga +{t}{}^\\gb p_1{}_\\gb \\Big\\}\\go 
CCC\\,,\n\\end{multline}\n\n\n\\begin{multline}\\label{ERRGTC}\nJ_{11}=\\frac{i \\eta^2}{4}\\int d\\Gamma \\, \\delta(\\rho_1)\\delta(\\rho_4) y^\\ga {t} {}_\\ga \\exp i\\Big\\{ (y+\\PP_0 +{t}){}^\\gga ( \\gx_1 {t}- \\gx_2 y)_\\gga + ( 1-\\gr_2) (p_3+p_2) {}^\\gga y_\\gga\n \\\\\n + p_3{}_\\gga y^\\gga +{t}{}^\\gb p_1{}_\\gb \\Big\\}\\go CCC\\,.\n\\end{multline}\n\n\nLet us emphasize, that neither exponential function $E$ \\eq{Egx=}\nnor the exponentials on the \\rhss of Eqs.~\\eq{goB3modH1406gr1C}-\\eq{ERRGTC}\ncontain $\\p_i{}_\\ga \\p_k{}^\\ga$ terms.\nHence, as anticipated, all $J_j$ are spin-local.\n\n\n One can see that though having poles in pre-exponentials these expressions are well defined.\n\\\\For instance a potentially dangerous factor on the \\rhs of \\eq{go B3modHcoh}\n is dominated by 1 as follows from the inequality\n$ {\\gr_2}-(\\gr_1+\\gr_2 )\n(\\gr_2+\\gr_3) =-\\gr_3\\gr_1\\le 0$\\, that holds due to the factor\nof $\\prod\\vartheta(\\gr_i)\\gd(1-\\sum\\gr_i )\\gd(\\gr_4)$.\nAnalogous simple reasoning applies to the \\rhs of \\eq{dx B3modHcoh}.\n\nThe case of \\eq{Result2}-\\eq{Result6} is a bit more tricky.\nBy partial integration one obtains from \\eq{Result2}-\\eq{Result4}\n\\bee\\label{RRult4+} &&\n J_3+J_4+J_5 = \\ff{i \\eta^2 }{4 } \\int d\\Gamma\\,\n\\delta(\\xi_3)\\ff{1 }{(\\gr_2 +\\gr_3)(1-\\gr_3) }\\Big\\{\n-\n\\gd({\\gr_3}) {t}{}^\\ga (p_1+p_2 ){} _\\ga\n\\\\&&\\nn\n + [ \\gd({\\gr_4})-\\gd({\\gr_1})]\\gr_2\n ( p_1{}+ p_2)^{\\ga} ( p_3{}+p_2)_{\\ga}\n + {t}{}^\\ga ( p_3{} +p_2{} )_\\ga -\\gd({\\gr_1})\\gr_2 {t}{}^\\ga ( p_3{} +p_2{} )_\\ga\n \\,\n\\\\&&\\nn\n -\\gd({\\gr_1}) \\ff{ \\gr_2}{ (1-\\gr_3)}\n {t}{}^\\gga y_\\gga\n + [ \\gd({\\gr_4})-\\gd({\\gr_1})] \\ff{ \\gr_2}{ (1-\\gr_3)}\n ( p_1{}+ p_2)^{\\gga} (y+\\tilde{t}{}) _{\\gga}\n \\\\ &&\\nn\n - \\gd({\\gx_2})\n\n \\Big(\n \\ff{ -\\gr_2 }{(\\gr_2 +\\gr_3)( \\gr_1+\\gr_4 ) }\n (p_3{}^{\\ga}+p_2{}^{\\ga})^\\gga {t}{}_{\\gga}\n \\\\&&\\nn\n- \\ff{ \\gr_3 }{(\\gr_2 +\\gr_3) 
(1-\\gr_3 ) }\\,\\, {t}{}^{\\ga} y_{\\ga}\n + \\ff{ 1}{ ( \\gr_1+\\gr_4 ) } (p_1 +p_2 )^{\\ga}{t}{}_{\\ga}\n\\Big) \\Big\\} E\\, \\go C C C\\,.\n\\eee\nUsing that, due to the factor of $\\gd(1-\\sum \\gr_i)$,\nfor positive $\\gr_i $ it holds\n\\bee&&\\label{nopoles}\n \\ff{\\gr_2}{(\\gr_3+\\gr_2)(1-\\gr_3)}-1 =-\\ff{\\gr_3(1-(\\gr_3+\\gr_2 ))}{(\\gr_3+\\gr_2)(1-\\gr_3)}\\le 0\n \\q\\\\ \\label{nopoles3} && \\ff{ 1}{(\\gr_2 +\\gr_3)(1-\\gr_3) }\\,\\le\n\\ff{ 1}{(\\gr_2 +\\gr_3)(1-\\gr_3-\\gr_2) }=\n\\ff{ 1}{ ( \\gr_3+\\gr_2) }+\\ff{ 1}{ ( \\gr_1+\\gr_4 ) }\\,,\n\\eee\none can make sure that each of the expressions with poles in the pre-exponential in Eqs.~\\eq{Result5}, \\eq{Result6} and\n \\eq{RRult4+}\ncan be represented in the form of a sum of integrals with integrable pre-exponentials.\nFor instance, the potentially dangerous\nfactor in \\eq{Result6}, by virtue of \\eq{nopoles} and \\eq{nopoles3} satisfies\n\\be \\ff{ \\gr_2\\gr_2}{(\\gr_2 +\\gr_3)^3(1-\\gr_3)^3( \\gr_1+\\gr_4 ) }\\le\n \\ff{ 1}{ (1-\\gr_3)( \\gr_1+\\gr_4 ) }+\n \\ff{1}{(\\gr_3+\\gr_2) }\n +\\ff{ 1}{ ( \\gr_1+\\gr_4 ) }\\,.\\quad\\label{xx}\n\\ee\n Each of the terms on the \\rhs of Eq.~(\\ref{xx}) is integrable, because integration\n is over a three-dimensional compact area $\\sum\\gr_i=1$ in the positive quadrant.\nFor instance consider the first term. 
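Before evaluating this term analytically, its finiteness can also be cross-checked numerically. The following is a minimal stdlib-only Python sketch (the helper name `inner` and the sample points are ours, not from the paper): after $\rho_3$ is integrated out over the simplex, the inner $\rho_2$ integral of the first term reduces to $\int_0^{1-\rho_1}\,-\log(\rho_1+\rho_2)/(\rho_1+\rho_2)\,d\rho_2 = \tfrac{1}{2}\log^2\rho_1$, which the code verifies by quadrature.

```python
# Numerical cross-check (a sketch, not from the paper) of the integrability of
# the first term on the rhs of Eq. (xx).  Integrating rho_3 over the simplex
# reduces the inner rho_2 integral to
#     int_0^{1-r1} -log(r1 + r2)/(r1 + r2) dr2 = (1/2) * log(r1)**2 ,
# and the remaining 1D integral (1/2) * int_0^1 log(r1)**2 dr1 = 1 is finite.
import math

def inner(r1, n=100_000):
    """Midpoint-rule quadrature of int_0^{1-r1} -log(r1+r2)/(r1+r2) dr2."""
    h = (1.0 - r1) / n
    s = 0.0
    for k in range(n):
        u = r1 + (k + 0.5) * h
        s -= math.log(u) / u
    return s * h

for r1 in (0.1, 0.3, 0.7):
    assert abs(inner(r1) - 0.5 * math.log(r1) ** 2) < 1e-6
print("inner rho_2 integral matches (1/2) log^2(rho_1)")
```

Since $\int_0^1 \log^2 r\, dr = 2$, the remaining one-dimensional integral equals $\tfrac{1}{2}\cdot 2 = 1$, confirming the integrability claimed above.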
Swapping $\\gr_4\\leftrightarrow\\gr_2$ one has\n \\bee\n\\int d^4 \\gr_+ \\gd(1-\\sum_1^4 \\gr_i)\\ff{1}{(1-\\gr_3 ) ( \\gr_1+\\gr_2)}=\n\\int d^3 \\gr_+ \\vartheta(1-\\sum_1^3 \\gr_i)\\ff{1}{(1-\\gr_3 ) ( \\gr_1+\\gr_2)}=\\\\ \\nn\n-\\int_0^1 d \\gr_1 \\int_0^{1-\\gr_1} d \\gr_2\n \\ff{\\log( \\gr_1+\\gr_2)}{ ( \\gr_1+\\gr_2)}=\\half\\int_0^1 d \\gr_1 \\log^2( \\gr_1 )\\,,\n \\eee\n which is integrable.\n\n\n Analogously\nother seemingly dangerous factors can be shown to be harmless as well.\n\n\n\\section{To $z$-linear pre-exponentials}\n \\label{zlinear}\nStep II of the calculation scheme of Section \\ref{Schema} is to transform the \\rhss of Eqs.~\\eq{origW1B2}-\\eq{kuku5} to $Z$-independent terms plus terms with pre-exponentials linear in $z$\n(modulo $H^+$).\n\nTo this end, from \\eq{B3modH=1406}\none straightforwardly obtains that\n \\begin{multline} \\label{RRwB3modH+}\n\\omega \\ast {B}_3^{\\eta\\eta}\\approx J_1+ \\frac{\\eta^2}{4}\\int d\\Gamma\n\\frac{\\gd(\\xi_3)\\gd(\\gr_4)}{(1-\\gr_1)(1-\\gr_3)}\\Bigg[-\\gr_2 (z_\\ga (y^\\ga+t^\\ga))(p_{1\\gb}+p_{2\\gb})(p_2 {}^\\gb+p_3 {}^\\gb) \\\\\n+i\\Big[\\Big(\\gd(\\gr_1)+\\gd(\\gr_3)\\Big)(1-\\gr_1)(1-\\gr_3)-\\gd(\\xi_2)\\Big]\n z_\\ga\\Big((1-\\gr_1)(p_1 {}^\\ga+p_2 {}^\\ga)-(1-\\gr_3)(p_2 {}^\\ga+p_3 {}^\\ga)\\Big) \\\\\n+iz_\\ga (p_1 {}^\\ga+p_2 {}^\\ga)(1-\\gr_1)\\Big(\\gd(\\xi_2)-\\gd(\\xi_1)\\Big)\\Bigg]\n\\exp\\Big\\{i\\T z_\\ga\\big(y^\\ga+t^\\ga+(1-\\gr_1)(p_1 {}^\\ga+p_2 {}^\\ga)\n-(1-\\gr_3)(p_2 {}^\\ga+p_3 {}^\\ga)\\big) \\\\\n+\\frac{i(1-\\xi_1) \\gr_2}{\\gr_1+\\gr_2}(y^\\ga+t^\\ga) (p_{1\\ga}+p_{2\\ga})\n+\\frac{i\\xi_1 \\gr_2}{\\gr_2+\\gr_3}(y^\\ga+t^\\ga) (p_{2\\ga}+p_{3\\ga})-i(y^\\ga+t^\\ga) p_{2\\ga}\\Big\\}\\go CCC\\q\n\\end{multline}\nwhere $J_1$ is the cohomology term \\eq{go B3modHcoh}.\nAnalogously, \\begin{multline} \\label{RRdxB3modH+}\n\\dr_x {B}_3^{\\eta\\eta} \\approx J_2-\\frac{\\eta^2}{4}\\int d\\Gamma\n\\frac{\\gd(\\xi_3)\\gd(\\gr_4)}{(1-\\gr_1)(1-\\gr_3)}\n\\Bigg[-\\gr_2 (z_\\ga 
y^\\ga)(p_{1\\gb}+t_\\gb+p_{2\\gb})(p_2 {}^\\gb+p_3 {}^\\gb) \\\\\n+i\\Big[\\Big(\\gd(\\gr_1)+\\gd(\\gr_3)\\Big)(1-\\gr_1)(1-\\gr_3)-\\gd(\\xi_2)\\Big]\nz_\\ga\\Big((1-\\gr_1)(p_1 {}^\\ga+t^\\ga+p_2 {}^\\ga)-(1-\\gr_3)(p_2 {}^\\ga+p_3 {}^\\ga)\\Big) \\\\\n+iz_\\ga (p_1 {}^\\ga+t^\\ga+p_2 {}^\\ga)(1-\\gr_1)\\Big(\\gd(\\xi_2)-\\gd(\\xi_1)\\Big)\\Bigg]\n\\exp\\Big\\{i\\T z_\\ga\\big(y^\\ga+(1-\\gr_1)(p_1 {}^\\ga+t^\\ga+p_2 {}^\\ga)-(1-\\gr_3)(p_2 {}^\\ga+p_3 {}^\\ga)\\big)\n\\\\\n+\\frac{i(1-\\xi_1) \\gr_2}{\\gr_1+\\gr_2}y^\\ga (p_{1\\ga}+t_\\ga+p_{2\\ga})\n+\\frac{i\\xi_1 \\gr_2}{\\gr_2+\\gr_3}y^\\ga (p_{2\\ga}+p_{3\\ga})-iy^\\ga p_{2\\ga}+it^\\gb p_{1\\gb}\\Big\\}\\go CCC\n\\end{multline}\nwith $J_2$ \\eq{dx B3modHcoh}.\n\nUsing the Schouten identity and partial integration one obtains from Eqs.~\\eq{origW1B2}-\\eq{FFFFFFFFk}, respectively,\n \\begin{multline}\\label{RW1B2BP=}\nW_{1 \\, \\go C}^\\eta \\ast B_2^\\eta\\approx \\frac{\\eta^2}{4}\\int_0^1 d\\T\n\\int_0^1 d\\tau\\int_0^1 d\\gs_1 \\int_0^1 d\\gs_2\\Bigg[i(z_\\ga t^\\ga)\\gd(1-\\tau) \\\\\n+\\frac{z_\\ga(p_2 {}^\\ga+p_3 {}^\\ga)}{1-\\gt}\\Big(i\\big(\\gd(\\gs_1)-\\gd(1-\\gs_1)\\big)\n-\\big[y^\\ga+p_1 {}^\\ga +p_2 {}^\\ga-\\gs_2(p_2{}^\\ga+p_3 {}^\\ga)\\big]t_\\ga\\Big)\\Bigg]\\exp\\Big\\{i\\T z_\\ga y^\\ga \\\\\n+i\\T z_\\ga\\Big(\\tau(p_1 {}^\\ga +p_2 {}^\\ga)-((1-\\tau)+\\gs_2\\tau)(p_2 {}^\\ga +p_3 {}^\\ga)\n+\\big(\\gs_1+\\tau(1-\\gs_1)\\big)t^\\ga\\Big)+it^\\ga p_{1\\ga} \\\\\n+i\\gs_1\\big[y^\\ga+p_1 {}^\\ga +p_2 {}^\\ga-\\gs_2(p_2{}^\\ga+p_3 {}^\\ga)\\big]t_\\ga\n-i\\Big(\\gs_2 p_3 {}^\\ga-(1-\\gs_2)p_2 {}^\\ga\\Big)y_\\ga\\Big\\}\\go CCC\\,,\n\\end{multline}\n\\begin{multline}\\label{W2C3gr1}\nW_{2\\, \\go CC}^{\\eta\\eta}\\ast C\\approx -\\frac{i\\eta^2}{4}\n\\int d\\Gamma\\, \\gd(\\xi_3)\\gd(\\gr_3)\\frac{(z_\\gga t^\\gga)}{\\gr_1+\\gr_4}\n\\Big[-\\gr_1\\big( \\gd(\\gr_4)+i t^\\ga(p_{1\\ga}+p_{2\\ga})\\big)+\\xi_1\\gd(\\xi_2)\\Big]\\times\\\\\n\\times \\exp\\Big\\{i\\T z_\\ga y^\\ga+i\\T 
z_\\ga\n\\Big((1-\\gr_1-\\gr_4)(p_1 {}^\\ga+p_2 {}^\\ga)-(1-\\gr_3)(p_2{}^\\ga+p_3 {}^\\ga)+(1-\\gr_4)t^\\ga\\Big) \\\\\n+iy^\\ga\\left(\\frac{\\xi_1 \\gr_1}{1-\\gr_2}t_\\ga+p_{3\\ga}\\right)\n+i\\left(1-\\gr_1-\\frac{\\xi_1 \\gr_1\\gr_2}{1-\\gr_2}\\right)t^\\ga p_{1\\ga}-i(1-\\xi_1)\\gr_1 t^\\ga p_{2\\ga}\n+i\\frac{\\xi_1 \\gr_1}{1-\\gr_2}t^\\ga p_{3\\ga} \\Big\\}\\go CCC\\,,\n\\end{multline}\n\\begin{multline}\\label{FFFFFFFFk=}\n\\dr_x B_2^\\eta\\approx\\frac{i\\eta^2}{4} \\int d\\Gamma\\, \\gd(\\xi_3)\\gd(\\gr_4)\\, (z_\\ga y^\\ga)\n\\Big[it^\\gga(p_{1\\gga}+p_{2\\gga})+\\gd(\\gr_4)- \\gd(\\gr_1) \\Big]\\times\\\\\n\\times \\exp\\Big\\{i\\T z_\\ga y^\\ga\n+i\\T z_\\ga\\big((1-\\gr_1-\\gr_4)(p_1 {}^\\ga+ p_2 {}^\\ga)-(1-\\gr_3)(p_2 {}^\\ga +p_3 {}^\\ga)+(1-\\gr_4)t^\\ga\\big) \\\\\n+i(1-\\gr_2)t^\\gb p_{1\\gb}-i\\gr_2 t^\\gb p_{2\\gb}\n+i\\xi_2 y^\\ga \\Big((\\gr_1+\\gr_2)t_\\ga+\\gr_2 p_{1\\ga}-(1-\\gr_2)p_{2\\ga}-p_{3\\ga}\\Big)\n+iy^\\ga p_{3\\ga} \\Big\\}\\go CCC.\n\\end{multline}\n\n\n\\section{Generalised Triangle identity}\n\\label{SecGTid}\n\n\n Here a useful identity playing the key role in our computations is introduced.\n\n For any $F(x,y)$\nconsider\n\\bee\\label{GTH+F}\n &&I= \\int_{[0,1]} {d\n\\gt\\,}\\int d^3\n\\gx_+\n \\gd(1-\\gx_1-\\gx_2-\\gx_3 ) \\\\ \\nn&&\n z^\\gga \\Big[ (a_2-a_1)_\\gga \\gd(\\gx_3)+ (a_3-a_2)_\\gga \\gd(\\gx_1)\n+ (a_1-a_3)_\\gga \\gd(\\gx_2)\\Big] F \\big(\n \\gt z_\\gb P^\\gb\\,, ( -\\gx_1 a_1-\\gx_2 a_2-\\gx_3 a_3)_\\ga P^\\ga \\big)\\,\n\\eee\n with arbitrary $\\gt, \\gx$- independent $P$ and $a_i$.\n\nLet $G(x,y)$ be a solution to differential equation\n\\be\\label{difvim}\n\\ff{\\p}{\\p x} G(x,y)= \\ff{\\p}{\\p y}F (x,y)\\,. \\ee\nHence\n\\bee\\label{GTHF0}\n&& I = \\int_{[0,1]} {d\n\\gt\\,}\\int d^3\n\\gx_+ \\gd(1-\\gx_1-\\gx_2-\\gx_3 ) \\\\ \\nn&&\n (a_1-a_3)^\\ga(a_3-a_2)_\\ga\n \\overrightarrow{\\p}_\\gt G \\big(\n \\gt z_\\gb P^\\gb \\,, (-\\gx_1 a_1-\\gx_2 a_2-\\gx_3 a_3)_\\ga P^\\ga \\big). 
\\eee\nNote that there is a factor of $(a_1-a_3)^\\ga(a_3-a_2)_\\ga$ equal to the area\nof triangle spanned\nby the vectors $a_1\\,,a_2\\,, a_3$ on the \\rhs of \\eq{GTHF0}.\n\nThis identity is closely related to identity (3.24) of \\cite{4a1}, that, in turn, expresses\n{\\it triangle identity} of \\cite{Vasiliev:1989xz}.\nHence, \\eq{GTHF0} will be referred to as\n{\\it Generalised Triangle identity} or {\\it GT identity}.\n\nNote that,\n for appropriate $G$ partial integration on the \\rhs of \\eq{GTHF0}\n in $\\gt$ gives $z$-independent (cohomology) term plus $\\mathcal{H} ^+$-term. Namely,\n\\bee\\label{GTHF0pi}\n&& I = - \\int d^3\n\\gx_+ \\gd(1-\\gx_1-\\gx_2-\\gx_3 ) \\\\ \\nn&&\n (a_1-a_3)^\\ga(a_3-a_2)_\\ga\n G \\big(\n 0\\,, (-\\gx_1 a_1-\\gx_2 a_2-\\gx_3 a_3)_\\ga P^\\ga \\big)\n \\\\ \\nn&&+ \\int d^3_+\n\\gx \\gd(1-\\gx_1-\\gx_2-\\gx_3 ) \\\\ \\nn&&\n (a_1-a_3)^\\ga(a_3-a_2)_\\ga\n G \\big(\n z_\\gb P^\\gb \\,, (-\\gx_1 a_1-\\gx_2 a_2-\\gx_3 a_3)_\\ga P^\\ga \\big)\n . \\eee\nThe second term on the \\rhs belongs to $\\Hp$ if $G$ is of the form \\eq{class} satisfying \\eq{limit}.\n\n\n\n\n\n\nTo prove GT identity let us perform\n partial integration on the \\rhs of \\eq{GTH+F} with respect to $\\gx_i$. This yields\n\\bee\\label{GTH+F=}\n && I= \\int_{[0,1]} {d\n\\gt\\,}\\int {d^3\n\\gx_+\\,} \\gd(1-\\gx_1-\\gx_2-\\gx_3 ) \\\\ \\nn&&\n \\Big[\n z^\\gga (a_3-a_2)_\\gga P^\\ga a_1{}_\\ga\n+z^\\gga (a_1-a_3){}_\\gga P^\\ga a_2{}_\\ga\n+z^\\gga (a_2-a_1){}_\\gga P^\\ga a_3{}_\\ga\n\\Big]\\times\\\\ \\nn&& \\ff{\\p}{\\p y} F \\big(\n \\gt z_\\ga P^\\ga \\,,\\,\\,-(\\gx_1 a_1+\\gx_2 a_2+\\gx_3 a_3)_\\ga P^\\ga \\big)\\,. 
\\eee\nThe Schouten identity yields\n\\bee\n \\Big[\nz^\\gga a_1{}_\\gga P^\\ga(a_3-a_2)_\\ga\n+z^\\gga a_2{}_\\gga P^\\ga(a_1-a_3)_\\ga\n+z^\\gga a_3{}_\\gga P^\\ga(a_2-a_1)_\\ga\n\\Big]\n=\\\\ \\nn\n\\Big[z^\\gga P_\\gga \\big\\{\n a_1{}^\\ga(a_3-a_2)_\\ga\n+ a_2^\\ga(a_1-a_3)_\\ga\n+ a_3^\\ga(a_2-a_1)_\\ga\\big\\}\n\\\\ \\nn\n+z^\\gga (a_3-a_2)_\\gga P^\\ga a_1{}_\\ga\n+z^\\gga (a_1-a_3){}_\\gga P^\\ga a_2{}_\\ga\n+z^\\gga (a_2-a_1){}_\\gga P^\\ga a_3{}_\\ga\n\\Big].\n\\eee\nOne can observe that\n\\bee \\Big[z^\\gga (a_3-a_2)_\\gga P^\\ga a_1{}_\\ga\n+z^\\gga (a_1-a_3){}_\\gga P^\\ga a_2{}_\\ga\n+z^\\gga (a_2-a_1){}_\\gga P^\\ga a_3{}_\\ga\n\\Big]=\\\\ \\nn\n- \\Big[\nz^\\gga a_1{}_\\gga P^\\ga(a_3-a_2)_\\ga\n+z^\\gga a_2{}_\\gga P^\\ga(a_1-a_3)_\\ga\n+z^\\gga a_3{}_\\gga P^\\ga(a_2-a_1)_\\ga\n\\Big]\\q\n\\eee\n whence it follows \\eq{GTHF0}.\n\nA useful particular case of GT identity is that with $F(x,y) =f(x+y)$, namely\n \\bee\\label{GTH+==0}\n && \\int_{[0,1]} {d\n\\gt\\,}\\int {d^3\n\\gx_+\\,} \\gd(1-\\gx_1-\\gx_2-\\gx_3 ) z^\\gga \\Big[ (a_2-a_1)_\\gga \\gd(\\gx_3) \\\\ \\nn&&\n+ (a_3-a_2)_\\gga \\gd(\\gx_1)\n+ (a_1-a_3)_\\gga \\gd(\\gx_2)\\Big] f\\big(\n (\\gt z-\\gx_1 a_1-\\gx_2 a_2-\\gx_3 a_3)_\\ga P^\\ga \\big) \\quad\\\\ \\nn\n&& = - \\int_{[0,1]} {d \\gt\\,}\\int {d^3\n\\gx_+\\,} \\gd(1-\\gx_1-\\gx_2-\\gx_3 ) \\\\ \\nn&&\n (a_1-a_3)^\\ga(a_3-a_2)_\\ga\n \\overrightarrow{\\p}_\\gt f \\big(\n (\\gt z-\\gx_1 a_1-\\gx_2 a_2-\\gx_3 a_3)_\\ga P^\\ga \\big) \\,. \\eee\n\n\n\n\n\n \\section{Uniformization}\n\\label{uniform}\n\n\nStep III of Section \\ref{Schema} is to uniformize the \\rhs's of\n Eqs.~\\eq{RRwB3modH+}-\\eq{FFFFFFFFk=} putting them into the form \\eq{comexp}, where GT identity \\eq{GTH+F} plays an important role.\nDetails of uniformization are given in Appendix B\n (p. 
\\pageref{Auniform}).\n\n\n\n As a result, Eq.~\\eq{rightsideU} yields\n \\begin{equation} \\label{rightsideUUNI=}\n\\widehat{\\Upsilon}^{\\eta\\eta}_{\\go CCC}\\Big|_{\\text{mod}\\, cohomology}\\approx\n\\sum_{j=1}^4 F_j\n\\end{equation}\nwith $F_j$ presented in \\eq{F1}-\\eq{F4}.\n\n Note that different terms of $F_j$ will be considered separately in what follows.\n For future convenience the underbraced terms are renumbered,\n being denoted as $F_{j,k}$, where $j$ refers to $F_j$ while $k$ refers to the\n respective underbraced term in the expression for $F_j$.\nFor instance, $F_1=F_{1,1}+F_{1,2}+F_{1,3}+F_{1,4}$, {\\it{etc}}.\n\n\\begin{multline}\\label{F1}\n-\\go\\ast B_3^{\\eta\\eta}\\Big|_{mod\\, \\delta(\\rho_1)\\&\\delta(\\T)}\\approx F_1 :=-\\frac{\\eta^2}{4}\\int d\\Gamma\\, \\frac{\\delta(\\xi_3)\\delta(\\rho_4)}{(1-\\rho_1-\\rho_4)(1-\\rho_3)}\\Big[\\underbrace{\\rho_2 (z_\\beta \\PP^\\beta)(p_{1\\ga}+p_{2\\ga})(p_2 {}^\\ga+p_3 {}^\\ga)}_1 \\\\\n+\\underbrace{ i\\delta(\\rho_3)(1-\\rho_1-\\rho_4)(1-\\rho_3) (z_\\ga \\PP^\\ga)}_2 +\\underbrace{-i\\xi_1\\delta(\\xi_2)(z_\\ga \\PP^\\ga)}_3 \\\\\n+\\underbrace{i(1-\\rho_1-\\rho_4)z_\\ga(p_1 {}^\\ga+p_2 {}^\\ga)\\Big(\\delta(\\xi_2)-\\delta(\\xi_1)\\Big)}_4\\Big]\n\\mathcal{E}\\go CCC\n\\,,\\end{multline}\n\n\\begin{multline}\\label{F2}\n-\\dr_x B^{\\eta\\eta}_3\\Big| _{mod\\, \\delta(\\rho_1)\\&\\delta(\\T)}\\approx F_2 :=+\\frac{\\eta^2}{4}\\int d\\Gamma\\, \\frac{\\delta(\\xi_3)\\delta(\\rho_1)}{(1-\\rho_1-\\rho_4)(1-\\rho_3)}\\Big[\\underbrace{\\rho_2 (z_\\beta \\PP^\\beta)(p_{1\\ga}+p_{2\\ga})(p_2 {}^\\ga+p_3 {}^\\ga)}_1 \\\\\n+\\underbrace{ \\rho_2(1-\\rho_4) (z_\\beta t^\\beta)t_\\ga(p_2 {}^\\ga+p_3 {}^\\ga)}_2 +\\underbrace{ \\rho_2(1-\\rho_4) (z_\\beta t^\\beta)(p_{1\\ga}+p_{2\\ga})(p_2 {}^\\ga+p_3 {}^\\ga)}_3+\\underbrace{\\rho_2 (z_\\beta \\PP^\\beta)t_\\ga(p_2 {}^\\ga+p_3 {}^\\ga)}_4 \\\\\n+\\underbrace{ i\\delta(\\rho_3)(1-\\rho_1-\\rho_4)(1-\\rho_3)(z_\\ga \\Pz^\\ga)}_5+\n 
\\underbrace{-i\\xi_1\\delta(\\xi_2)(z_\\ga \\PP^\\ga)}_6+\\underbrace{-i\\xi_1\\delta(\\xi_2)(1-\\rho_4)(z_\\ga t^\\ga)}_7 \\\\\n+\\underbrace{ i(1-\\rho_1-\\rho_4)z_\\ga(p_1 {}^\\ga+p_2 {}^\\ga)\n\\Big(\\delta(\\xi_2)-\\delta(\\xi_1)\\Big)}_8 +\n\\underbrace{ i(1-\\rho_1-\\rho_4)z_\\ga t^\\ga\\Big(\\delta(\\xi_2)-\\delta(\\xi_1)\\Big)}_9\\Big]\\mathcal{E}\\go CCC\n\\,,\\end{multline}\n\n\n\n\n\\begin{multline}\\label{F3}\n-\\dr_xB_2^\\eta-W_{2\\, \\go CC}^{\\eta\\eta}\\ast C\\Big|_{mod\\, \\delta(\\T)} \\approx\nF_3:=-\\frac{\\eta^2}{4}\\int d\\Gamma\\delta(\\rho_3)\\delta(\\xi_3)\\Bigg[\n\\underbrace{ i\\delta(\\rho_1)(z_\\ga \\Pz^\\ga)}_1+\n\\underbrace{-\\frac{i(z_\\ga t^\\ga)\\, \\xi_1\\delta(\\xi_2)}{\\rho_1+\\rho_4}}_2 \\\\\n+\\underbrace{ t^\\ga(p_{1\\ga}+p_{2\\ga})z_\\gga\\PP^\\gga}_3\n+\\underbrace{ i\\delta(\\rho_4)z_\\ga (-\\PP^\\ga)}_4 +\n\\underbrace{ t^\\gga(p_{1\\gga}+p_{2\\gga})z_\\ga t^\\ga\\left((1-\\rho_4)\n-\\frac{\\rho_1}{\\rho_1+\\rho_4}\\right)}_5\\Bigg]\\mathcal{E}\\, \\go CCC\n\\,,\\end{multline}\n\n\n\\begin{multline}\\label{F4}\n-(\\dr_x B_3^{\\eta\\eta}+\\go \\ast B_3^{\\eta\\eta})\\Big|_{\\delta(\\rho_1)}\n\\Big|_{mod\\, \\delta(\\T)}-W_{1\\, \\go C}^{\\eta}\\ast B_2^{\\eta\\, loc}\\approx\nF_4:=-\\frac{\\eta^2}{4}\\int d\\Gamma\\, \\frac{\\delta(\\xi_3)\\delta(\\xi_2)\\, z_\\ga(p_2 {}^\\ga+p_3 {}^\\ga)}\n{(\\rho_2+\\rho_3)(\\rho_1+\\rho_4)}\\times\\\\\n\\times \\left(\\underbrace{i\\Big(\\delta(\\rho_1)-\\delta(\\rho_4)\\Big)\\Ee}_1+\n\\underbrace{ i\\Ez\\left(\\frac{\\p}{\\p \\rho_1}-\\frac{\\p}{\\p \\rho_4}\\right)E}_2\\right) \\go CCC.\n\\end{multline}\n\n Note that\n\\be\nF_{1,2}+F_{3,4}=0,\n\\ee\n\\be\nF_{2,5}+F_{3,1}=0.\n\\ee\n\n\nLet us emphasise that, by virtue of \\eq{EEgx14=}, each $F_j$ is of the form \\eq{comexp} as expected.\n\nNote that during the uniformization procedure the vertices\n\\eq{Result1}-\\eq{ERRGTC} are obtained in Appendix B (p. \\pageref{Auniform}).\n\n\n \\section{Eliminating $\\gd(\\gr_j)$ and $\\gd(\\gx_j)$. 
Result}\n\\label{Eli0}\n\nThe fourth step of Section \\ref{Schema} is to eliminate all\n$\\delta(\\rho_i)$\\,,\n$\\delta(\\xi_1)$ and $\\delta(\\xi_2)$ from the pre-exponentials on the \\rhss\nof Eqs.~\\eq{F1}-\\eq{F4}.\n\nMore precisely, using partial integration, the Schouten identity and\n{ Generalised Triangle identity} \\eq{GTHF0}, taking into account Eqs.~\\eq{tildet}-\\eq{PP=} one finds\n that Eq.~\\eq{rightsideUUNI=} yields\n\\begin{equation} \\label{rightsideUUNI==}\n\\big(\\widehat{\\Upsilon}^{\\eta\\eta}_{\\go CCC} - G_1-G_2-G_3\\big)\\big|_{\\ls\\mod cohomology }\\approx 0\n \\q\n\\end{equation}\nwhere\n\\begin{multline}\\label{FRest1}\nG_1 := J_3+\\frac{\\eta^2}{4}\n\\int d\\Gamma\\, \\delta(\\xi_3) z_\\gga\\Bigg\\{\n (y^\\gga+\\widetilde{t}^\\gga) \\frac{\\rho_2\\, t^\\ga (p_{1\\ga}+p_{2\\ga})}{(1-\\rho_1-\\rho_4)(1-\\rho_3)}\\Ez\\Bigg[\\frac{\\p}{\\p \\rho_2}-\\frac{\\p}{\\p \\rho_3}\\Bigg]E \\\\\n+ (y^\\gga+\\widetilde{t}^\\gga) \\frac{\\rho_2\\, (p_1 {}^\\ga+p_2 {}^\\ga)(p_{2\\ga}+p_{3\\ga})}{(1-\\rho_1-\\rho_4)(1-\\rho_3)}\\Ez \\Bigg[\\frac{\\p}{\\p \\rho_4}-\\frac{\\p}{\\p \\rho_1}\\Bigg]E \\\\\n+ (y^\\gga+\\tilde{t}^\\gga)\n\\frac{\\rho_2\\, t^\\ga(p_{2\\ga}+p_{3\\ga})}{(1-\\rho_1-\\rho_4)(1-\\rho_3)}\n\\Ez\\Bigg[\\frac{\\p}{\\p \\rho_2}-\\frac{\\p}{\\p \\rho_1}\\Bigg]E\n+ (y^\\gga+\\tilde{t}^\\gga)\n\\frac{(\\rho_1+\\rho_4) t^\\ga (p_{1\\ga}+p_{2\\ga})}{(1-\\rho_1-\\rho_4)(1-\\rho_3)}\\Ee \\\\\n+ (y^\\gga+\\tilde{t}^\\gga)\\frac{\\rho_3\\, t^\\ga (p_{2\\ga}+p_{3\\ga})}{(1-\\rho_1-\\rho_4)^2 (1-\\rho_3)}\n\\Ee\n+\\frac{\\rho_2\\, t^\\gga (p_2 {}^\\ga+p_3 {}^\\ga)(p_{1\\ga}+p_{2\\ga}\n+t_\\ga-\\tilde{t}_\\ga)}{(1-\\rho_1-\\rho_4)(1-\\rho_3)(\\rho_1+\\rho_4)}\\Ee\\Bigg\\}\\go CCC\n\\q\\end{multline}\n \\begin{multline}\\label{FRest2}\nG_2 := J_4\n +\\frac{\\eta^2}{4}\\int d\\Gamma\\, \\frac{\\delta(\\xi_3)}{1-\\rho_3}\\,z^\\ga\n \\Bigg\\{ \\frac{\\rho_3 (y_\\ga+\\tilde{t}_\\ga)t^\\gga(y_\\gga+\\Pz_\\gga) }{(1-\\rho_1-\\rho_4)^2(1-\\rho_3)}\n \\Ee 
\\\\\n-\\frac{\\rho_2\\rho_4\\, t_\\ga (y^\\gga+\\Pz^\\gga)t_\\gga }\n{(1-\\rho_1-\\rho_4)(1-\\rho_3)(\\rho_1+\\rho_4)^2}\\Ee-\\frac{\\rho_2\\,\n (y_\\ga+\\tilde{t}_\\ga) t^\\gga(p_{1\\gga}+p_{2\\gga}) }{(1-\\rho_1-\\rho_4)(1-\\rho_3)}\\Ee \\\\\n-\\frac{\\rho_2\\, (p_1 {}_\\ga +p_2 {}_\\ga )(y^\\gga+\\Pz^\\gga)t_\\gga}{(1-\\rho_1-\\rho_4)(\\rho_1+\\rho_4)(1-\\rho_3)}\n \\Ee\n+\\Ez\\frac{\\rho_2\\, t^\\gga (y_\\gga+\\Pz_\\gga)(y_\\ga+\\tilde{t}_\\ga)\n }\n{(1-\\rho_1-\\rho_4)(1-\\rho_3)}\\Bigg[ \\frac{\\p}{\\p \\rho_1}-\\frac{\\p}{\\p \\rho_2}\\Bigg]E \\\\\n+\\Ez \\frac{\\rho_2\\, (y_\\ga+\\tilde{t}_\\ga) (p_1 {}^\\gga+p_2 {}^\\gga)(y_\\gga+\\Pz_\\gga)\n }{(1-\\rho_1-\\rho_4)(1-\\rho_3)}\\Bigg[\\frac{\\p}{\\p \\rho_1}\n-\\frac{\\p}{\\p \\rho_4}\\Bigg]E\\Bigg\\}\\go CCC\\q\n\\end{multline}\n \\begin{multline}\\label{FRest3}\nG_{3}\n := J_5 + \\frac{\\eta^2}{4}\n\\int d\\Gamma\\, \\delta(\\xi_3) \\Bigg(1+\\xi_1\\Bigg[\\frac{\\p}{\\p \\xi_1}\n-\\frac{\\p}{\\p \\xi_2}\\Bigg]\\Bigg)\\times\\\\\\times\n z_\\ga\n\\Bigg\\{\\frac{\\rho_2\\, t^\\ga(p_2 {}^\\gga+p_3 {}^\\gga)(y_\\gga+\\tilde{t}_\\gga)}\n{(1-\\rho_1-\\rho_4)^2 (1-\\rho_3)(\\rho_1+\\rho_4)}\n + \\frac{-\\rho_2\\, t^\\ga (\\tilde{t}^\\gga+y^\\gga)(y_\\gga+\\Pz_\\gga)\n }{(1-\\rho_1-\\rho_4)^2 (1-\\rho_3)^2 (\\rho_1+\\rho_4)}\n\\\\+\\frac{-\\rho_3\\, (y^\\ga+\\tilde{t}^\\ga) (t^\\gga y_\\gga)}\n{(1-\\rho_1-\\rho_4)^2 (1-\\rho_3)^2}+\\frac{ (y^\\ga+\\tilde{t}^\\ga)\n(p_1 {}^\\gga+p_2 {}^\\gga)t_\\gga}{(1-\\rho_1-\\rho_4)(1-\\rho_3)^2} \\Bigg\\}\\Ee \\, \\go CCC \\q\n\\end{multline}\nwith $J_3$, $J_4$ and $J_5$ being the cohomology terms \\eq{Result2}, \\eq{Result3} and \\eq{Result4}, respectively.\n(Details of the derivation are presented in Appendix C (p.\\pageref{AppD}).)\n\nNote that schematically\n \\begin{equation}\\label{NoDistrib}\n G_1+G_2+G_3 = \\int d\\Gamma\\, \\delta(\\xi_3)\n z_\\ga g ^\\ga(y,t,p_1,p_2,p_3\\vert \\rho ,\\xi ) \\Ee \\, \\go CCC\\,+ J_3+J_4+ J_5\\q\n\\end{equation}\n as expected. 
Let us stress that $g^\\ga(y,t,p_1,p_2,p_3\\vert \\rho ,\\xi)$ on the \\rhs of \\eq{NoDistrib} is\n free from distributional behaviour.\n\n\n\\section{Final step of calculation}\n\\label{proof}\n\n\n Here it is shown that the sum of\n the \\rhss of Eqs.~\\eqref{FRest1}-\\eqref{FRest3} gives a $Z$-independent\n cohomology term up to terms in $\\Hp$.\n\n In more detail, the expression $ G_1+G_2+G_3 $\n of the form \\eq{NoDistrib} consists of two types of\nterms with the pre-exponential of degree four and six in $z, y,t,p_1,p_2,p_3$, respectively.\nThat with degree-four pre-exponential separately equals a $Z$-independent\n cohomology term up to terms in $\\Hp$. This is considered in Section \\ref{DVOJNYE}.\nThe term with degree-six pre-exponential is considered in Section \\ref{TROJNYE}.\nAs a result of these calculations $J_6$ \\eq{Result5} and $ J_7$ \\eq{Result6} are obtained.\n\n \\subsection{Degree-four pre-exponential}\n \\label{DVOJNYE} Consider the sum of expressions with $z$-dependent degree-four\n pre-exponential\nfrom Eqs.~\\eqref{FRest1}, \\eqref{FRest2} and \\eq{FRest3}, denoting it as $S_4$.\n Partial integration yields\n \\bee\\label{lostcohomo}&&S_4\\approx J_6 +\\ff{ \\eta^2 }{4 }\\int d\\Gamma \\, \\delta(\\xi_3)\\\n \\Big[\n \\ff{\\gr_2}{(1- \\gr_1 -\\gr_4)(1-\\gr_3)(\\gr_1+\\gr_4) }\n {t}{}^\\ga z _\\ga ( p_3+p_2)^{\\gga}( {t}-\\tilde{t}\n ){}_{\\gga} \\\\ \\nn &&\n + \\ff{ \\gr_2 \\gr_4}{(1-\\gr_1-\\gr_4 )(1-\\gr_3)^2(\\gr_1+\\gr_4)^2}\n {t}{}^\\gga z_\\gga \\big(y + \\Pz{}\\big)^\\ga {t}{} _{\\ga}\n \\\\ \\nn&&\n +\n \\ff{ \\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3)^2(\\gr_1+\\gr_4)}\n ( p_1{}+ p_2)^{\\gga} \\big(y + (1-\\gr_4){t}{}\n \\big) _{\\gga} z^\\ga {t}{}_{\\ga}\n \\\\ \\nn &&\n+ \\ff{ \\gr_2}{(1- \\gr_1 -\\gr_4)^2(1-\\gr_3)(\\gr_1+\\gr_4)} {t}{}^{\\ga}z_{\\ga}\n \\, ( p_3+p_2)^{\\gga} (y+\\tilde{t}{})_{\\gga}\n \\\\ \\nn &&\n \n + \\ff{ \\gr_2}{(1- \\gr_1 -\\gr_4)^2(1-\\gr_3)^2( \\gr_1+\\gr_4 ) }\n\\big( -\\Pz{}+ \\tilde{t}{} \\big)^\\gga 
\\big( y + \\tilde{t}{} \\big)_{\\gga} z^\\ga {t}{}_{\\ga}\n \\Big] \\Ee\\go CCC\\q\n\\eee\nwhere the cohomology term $J_6$ is given in \\eq{Result5}\\,.\nIt is not hard to see that the\nintegrand of the remaining term is zero by virtue of the Schouten identity.\n\n\\subsection{Degree-six pre-exponential}\n\\label{TROJNYE}\n\nTerms of this type either appear in \\eqref{FRest1}, \\eqref{FRest2} via differentiation\n in $\\gr_j$ or in \\eqref{FRest3} via differentiation in $\\gx_j$.\nDenoting the sum of these terms as $S_6$ we obtain\n \\bee\\label{SUM3} &&S_6= +\\ff{ \\eta^2 }{4 }\\int d\\Gamma \\, \\delta(\\xi_3) \n \\Big\\{\n \\Ez (y+ \\tilde{t}{} )^{\\gga}z_{\\gga}\\ff{\\gr_2}{(1- \\gr_1 -\\gr_4)(1-\\gr_3) }{t}{}^\\ga\n (p_1+p_2 ){} _\\ga \\Big[\n (\\overrightarrow{\\p}_{\\gr_2}-\\overrightarrow{\\p}_{\\gr_3})E \\Big]\\qquad\n \\\\ \\nn&&\n + \\Ez\n \\ff{ \\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3)^2}\n \\Big[\n \n ( p_1{}+ p_2)^{\\gga} \\big(y + (1 -\\gr_4 ){t}{}\\big)_{\\gga} z_\\ga(y+\\tilde{t}{})^{\\ga}\n \\Big]\n [ \\overrightarrow{\\p}_{\\gr_4}-\\overrightarrow{\\p}_{\\gr_1}] E\n \\\\ \\nn&&\n + \\Ez\n \\ff{ \\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3)^2}\n {t}{}^\\gga \\big(y + (1-\\gr_1-\\gr_4 )( p_1{} +p_2{} ){}\\big)_\\gga z_\\ga(y+\\tilde{t}{})^{\\ga}\n [\\overrightarrow{\\p}_{\\gr_2}-\\overrightarrow{\\p}_{\\gr_1} ]E\n \\\\\\nn &&\n \n+i \\gx_1\\Big[\n + \\Big\\{\n +\\ff{ \\gr_2\\gr_2}{(1- \\gr_1 -\\gr_4)^3(1-\\gr_3)^3( \\gr_1+\\gr_4 ) }\n\\big( y+ (1-\\gr_1-\\gr_4 )( p_1{} +p_2{} )+ (1 -\\gr_4 ){t}{} \\big)^\\gga \\big( y + \\tilde{t}{} \\big)_{\\gga} z_\\ga {t}{}^{\\ga}\n \\\\ \\nn &&\n-\\ff{ \\gr_3\\gr_2}{(1- \\gr_1 -\\gr_4)^3(1-\\gr_3)^3 }\n {\\big( y + \\tilde{t}{} \\big)^\\gga z_{\\gga} {t}{}^{\\ga} y_{\\ga}}\n \\\\ \\nn &&\n + \\ff{ \\gr_2}{(1- \\gr_1 -\\gr_4)^2(1-\\gr_3)^3 } \\big( y\n + \\tilde{t}{} \\big)^\\gga z_{\\gga} { (p_1 +p_2 )^{\\ga}{t}{}_{\\ga}}\n \\Big\\} \\Ee\\Big]\\times\n \\big(y + \\Pz{} \\big)^\\ga\n (y+\\tilde{t}{})_{\\ga}\\Big\\} \\go CCC\n 
\\eee\nRecall that the integral measure $d\\Gamma$ \\eq{dGamma} contains the factor of $ \\gd(1-\\sum_1^3 \\gx_i)$.\nHence, taking into account the factor of $\\gd(\\gx_3)$ on the \\rhs of \\eq{SUM3}, the\n dependence on $\\gx_2,\\gx_3$ can be eliminated\nby the substitution $\\gx_2\\to 1-\\gx_1$, $\\gx_3\\to 0$. Then we consider\n separately the terms that contain and do not contain $\\xi_1$ in the pre-exponentials.\nAs shown in Appendix D, those with $\\gx_1$-proportional pre-exponentials give $J_7$ \\eq{Result6} up to $\\Hp$,\nwhile those with\n $\\gx_1 $-independent pre-exponentials give zero up to $\\Hp$.\n\n\n\n\\section{Conclusion}\n\nIn this paper, starting from the $Z$-dominated expression obtained in \\cite{4a3}, the manifestly\nspin-local holomorphic vertex $\\Upsilon^{\\eta\\eta}_{\\go CCC}$\nin equation \\eqref{zeroform}\n is obtained for the $\\go CCC$ ordering.\nBesides evaluating the expression for the vertex,\nour analysis illustrates how $Z$-dominance implies spin-locality.\n\n\nOne of the main technical difficulties towards a $Z$-independent expression was uniformization,\nthat is, bringing\nthe exponential factors to the same form for all contributions\n\\eqref{origW1B2}-\\eqref{kuku5} with the least possible number of new integration\nparameters. In practice, a substantial part of the uniformization procedure relied on\nthe Generalised Triangle identity of Section \\ref{SecGTid}, which plays an important role in our analysis.\n\n\n\nLet us stress that spin-locality of the vertices\n obtained in \\cite{4a3} follows from the $Z$-dominance Lemma.\n However, the evaluation of the explicit spin-local vertex\n$\\Upsilon^{\\eta^2}_{\\go CCC}$ achieved in this\npaper is technically involved. 
To derive the explicit form of other spin-local vertices\nat this and higher orders, a more elegant approach to this problem is\nhighly desirable.\n\n\n\\section*{Acknowledgments}\n\nWe would like to thank Mikhail Vasiliev for fruitful discussions and useful comments on\nthe manuscript.\nWe acknowledge partial support from the Russian Basic\nResearch Foundation, Grant No.~20-02-00208.\n The work of OG is partially supported by the FGU FNC SRISA RAS (theme 0065-2019-0007).\n\n\n\n\\newcounter{appendix}\n\\setcounter{appendix}{1}\n\\renewcommand{\\theequation}{\\Alph{appendix}.\\arabic{equation}}\n\\addtocounter{section}{1} \\setcounter{equation}{0}\n \\renewcommand{\\thesection}{\\Alph{appendix}.}\n \\addcontentsline{toc}{section}{\\,\\,\\,\\,\\,\\,\\,Appendix A: $B_3^{\\eta\\eta}$}\n\n\n\n\n\n \\section*{Appendix A: $B_3^{\\eta\\eta}$}\n\\label{AppC}\n\n$B_3^{\\eta\\eta}$ modulo $\\Hp$ terms from \\cite{4a3} is given by\n\\be\n{B}_3^{\\eta\\eta}\\approx-\\frac{\\eta^2}{4} \\int d\\Gamma \\delta(\\xi_3) \\delta(\\rho_4)\\frac{\\T\\rho_2 (z_\\ga y^\\ga)^2}{(\\rho_1+\\rho_2)(\\rho_2+\\rho_3)}\n \\exp\\big(\\KE \\big)CCC\n\\q\\ee\n where $d\\Gamma$ is defined in \\eq{dGamma},\n\\begin{equation}\n\\KE=i\\T z_\\ga\\left(y^\\ga+\\PP_0^\\ga\\right)\n+\\frac{i(1-\\xi_1) \\rho_2}{\\rho_1+\\rho_2}y^\\ga (p_{1\\ga}+p_{2\\ga})\n+\\frac{i\\xi_1 \\rho_2}{\\rho_2+\\rho_3}y^\\ga (p_{2\\ga}+p_{3\\ga})-iy^\\ga p_{2\\ga}\\q\n\\end{equation}\n \\begin{equation}\n\\PP_0=(1-\\rho_1)(p_1+p_2)-(1-\\rho_3)(p_2+p_3).\n\\end{equation}\nPerforming partial integration with respect to $\\T$ twice we obtain\n\\be\\label{B31}\n{B}_3^{\\eta\\eta}\\approx\\frac{\\eta^2}{4} \\int d\\Gamma\\frac{\\delta(\\xi_3)\\delta(\\rho_4)\\rho_2}{(1-\\rho_3)(1-\\rho_1)}\n \\Big[\\delta(\\T)+iz_\\ga \\PP_0^\\ga+iz_\\ga \\PP_0^\\ga\n\\Big(1+i\\T z_\\ga \\PP_0^\\ga\\Big)\\Big] \\exp\\big( \\KE\\big)CCC\n\\,.\n\\ee Noticing that\n\\begin{equation}\n\\frac{\\p}{\\p \\rho_1 } \\KE\n=-i\\T z_\\ga (p_1 {}^\\ga+p_2 
{}^\\ga)-i\\frac{(1-\\xi_1)\\rho_2}{(\\rho_1+\\rho_2)^2}y^\\ga(p_{1\\ga}+p_{2\\ga}),\n\\end{equation}\n\\begin{equation}\n\\frac{\\p}{\\p \\rho_3} \\KE\n=i\\T z_\\ga (p_2 {}^\\ga + p_3 {}^\\ga)-i\\frac{\\xi_1 \\rho_2}{(\\rho_2+\\rho_3)^2}\ny^\\ga (p_{2\\ga}+p_{3\\ga})\n\\end{equation}\nand\nperforming partial integration with respect to $\\rho_1$ and $\\rho_3$ we obtain\n\\begin{multline}\n{B}_3^{\\eta\\eta}\\approx\\frac{i\\eta^2}{4}\\int d\\Gamma\n\\frac{\\delta(\\xi_3)\\delta(\\rho_4)}{(1-\\rho_3)(1-\\rho_1)}\n\\Bigg[ {-i\\rho_2\n\\delta(\\T)}\n + \\, z_\\ga \\PP_0^\\ga \\big({(1-\\rho_3)(1-\\rho_1)}\\left(\\delta(\\rho_1)+\\delta(\\rho_3)\\right)\n-1\\big) \\\\\n- { i\\,\\rho_2 z_\\ga\n\\PP_0^\\ga} \\left( \\xi_2 \\frac{y^\\ga(p_{1\\ga}+p_{2\\ga})}{(\\rho_1+\\rho_2)}\n+ \\xi_1 \\frac{y^\\ga(p_{2\\ga}+p_{3\\ga})}{(\\rho_2+\\rho_3)}\\right)\\Bigg]\\exp\\big(\\KE \\big) CCC.\n\\end{multline}\nObserving that\n\\begin{equation}\n\\frac{\\p \\KE}{\\p \\xi_1}=\\frac{i\\rho_2}{\\rho_2+\\rho_3} y^\\ga (p_{2\\ga}+p_{3\\ga})-\\frac{i\\rho_2}{\\rho_1+\\rho_2} y^\\ga (p_{1\\ga}+p_{2\\ga})\n\\end{equation}\nand using the Schouten identity\n\\begin{equation}\nz_\\ga (p_2 {}^\\ga+p_3 {}^\\ga) y^\\beta (p_{1\\beta}+p_{2\\beta})=z_\\ga y^\\ga (p_2 {}^\\beta +p_3 {}^\\beta)(p_{1\\beta}+p_{2\\beta})+z_\\ga (p_1 {}^\\ga+p_2 {}^\\ga) y^\\beta(p_{2\\beta}+p_{3\\beta})\n\\end{equation}\nafter partial integration with respect to $\\xi_1$ we obtain\n\\begin{multline}\\label{B3modH=1406}\n{B}_3^{\\eta\\eta}\\approx\\frac{i\\eta^2}{4}\\int d\\Gamma\n\\frac{\\delta(\\xi_3)\\delta(\\rho_4)}{(1-\\rho_3)(1-\\rho_1)}\n\\Bigg[ {-i\\rho_2\n\\delta(\\T)}+ {z_\\ga(p_1 {}^\\ga+p_2 {}^\\ga)(1-\\rho_1)} \\Big(\\delta(\\xi_2)-\\delta(\\xi_1)\\Big)\n\\\\\n+ z_\\ga \\PP_0^\\ga \\Big[(1-\\rho_1 )(1-\\rho_3)\\Big(\\delta(\\rho_1)\n+\\delta(\\rho_3)\\Big)-\\delta(\\xi_2) \\xi_1\\Big]\n+i\\rho_2 z_\\ga y^\\ga (p_{1\\beta}+p_{2\\beta})(p_2 {}^\\beta+p_3 {}^\\beta)\n\\Bigg]\\exp\\big(\\KE \\big) 
CCC.\n\\end{multline}\nThe $\\delta(\\T)$-proportional term gives rise to $J_1$ \\eq{go B3modHcoh} and $J_2$ \\eq{dx B3modHcoh}.\n\n\n\n\n\n\n\\addtocounter{appendix}{1}\n\\renewcommand{\\theequation}{\\Alph{appendix}.\\arabic{equation}}\n\\addtocounter{section}{1} \\setcounter{equation}{0}\n\\addcontentsline{toc}{section}{\\,\\,\\,\\,\\,\\,\\,Appendix B: Details of uniformization}\n\n\n\n\n\n\\section*{Appendix B: Details of uniformization}\n\n\\label{Auniform}\nHere we present some details of the transformation of the\nintegrands \\eqref{RRwB3modH+}--\\eqref{FFFFFFFFk=}\\, to the form \\eq{comexp}.\n\nUniformization can easily be achieved for Eqs.~\\eq{RRwB3modH+} and \\eq{RRdxB3modH+} modulo $\\gd(\\gr_1)$-proportional terms.\nIndeed, eliminating the $\\gd(\\gr_1)$-proportional term from the \\rhs of \\eq{RRwB3modH+}, adding an integration parameter\n$\\gr_4$ and a factor of $\\gd(\\gr_4 )$,\none obtains \\eqref{F1}.\nAnalogously, eliminating the $\\gd(\\gr_1)$-proportional term from the \\rhs of \\eq{RRdxB3modH+},\nadding an integration parameter\n$\\gr_4$,\nswapping $\\gr_1\\leftrightarrow \\gr_4$ and then adding\na factor of $\\gd(\\gr_1 )$,\none obtains \\eqref{F2}.\n\n\n\n\n\nTo transform the integrands of Eqs.~\\eq{W2C3gr1} and \\eq{FFFFFFFFk=}, as well as\nthe $\\gd(\\gr_1)$-proportional terms of the integrands of Eqs.~\\eq{RRwB3modH+} and \\eq{RRdxB3modH+},\nto the\nform \\eq{comexp},\nthe GT identity \\eq{GTH+F} is used in Sections { B.1} and { B.2}.\n\n\n \\subsection{ $d_x B_2 {} + W_2 * C $}\n \\label{GTdxB2+}\n\n\n\nNoticing that the exponential of \\eqref{W2C3gr1} coincides with $\\Ee$ at $\\xi_2=0$, while the exponential of \\eqref{FFFFFFFFk=} coincides with $\\Ee$ \\eqref{Ee}\nat $\\xi_1=0$,\none can easily check that\nonly\nthe $\\gd(\\gx_2)$-proportional term of\n\\eq{W2C3gr1} and the $\\gd(\\gr_1)$-proportional term of \\eq{FFFFFFFFk=} have the desired\nform \\eq{comexp}.\n\nUsing that $\\Ee$ \\eqref{Ee} does not depend on $\\gx_3$, swapping $\\gx_3 
\\leftrightarrow \\gx_1$ in\nthe remaining part of \\eq{FFFFFFFFk=}, then swapping $\\gx_3 \\leftrightarrow \\gx_2$\nin the remaining part of \\eq{W2C3gr1}, one can then apply the GT identity \\eqref{GTH+==0} to the sum of the\ntwo resulting\nterms.\nAs a result, Eqs.~\\eqref{W2C3gr1}, \\eqref{FFFFFFFFk=}\nyield\n \\bee\\label{D4}&&\n\\dr_x B_2^{\\eta\\, loc}+W_{2\\, \\go CC}^{\\eta\\eta}\\ast C\\approx\\frac{\\eta^2}{4}\\int d\\Gamma\\, \\delta(\\rho_3)\\delta(\\xi_3)\n\\Big[-i\\frac{(z_\\ga t^\\ga)}{\\rho_1+\\rho_4}\n\\delta(\\xi_2)-i(\\underline{z_\\ga y^\\ga}) \\delta(\\rho_1)\\Big] \\mathcal{E} \\go CCC\\qquad\\\\\\nn&&+\n\\frac{\\eta^2}{4}\n\\int d\\Gamma\\, \\delta(\\rho_3)\\Big[i\\delta(\\rho_4)-t^\\gga(p_{1\\gga}+p_{2\\gga})\\Big]\n\\Big\\{\\delta(\\T) \\widetilde{t}^\\ga y_\\ga\n + \\delta(\\xi_3)\n (z_\\ga \\widetilde{t}^\\ga+\\underline{z_\\ga y^\\ga})\\Big\\}\\mathcal{E} \\go CCC\\q\n\\eee\nwhere the terms in the second row of formula \\eq{D4} result from applying the $GT$-identity.\nRewriting the underlined part as the result of differentiation with respect to $\\T$ and\nperforming partial integration, one obtains Eq.~\\eqref{F3} plus the cohomology term $J_8$\n\\eqref{Result1}.\n\n\n\n\\subsection{ $(\\dr_x B^{\\eta\\eta}_3{}+\\go* B^{\\eta\\eta}_3)|_{\\gd(\\gr_1)}+W^\\eta_{1\\, \\go C}*B^{\\eta\\, loc}_2$}\n\n \\label{GTB3des}\n\nUniformization of the sum of the $\\gd(\\gr_1)$-proportional terms on the \\rhss of \\eq{RRdxB3modH+} and \\eq{RRwB3modH+}\nis done with the help of the $GT$ identity \\eqref{GTH+==0} as follows.\nDenoting \\be\n\\widetilde{P}= y+ p_1+p_2+{t} - \\gr_2 (p_3+p_2)\n\\ee\none can see that partial integration in $\\T$ yields\n\\begin{multline}\\label{D6}\n\\dr_x {B}^{\\eta\\eta}_3 \\bigg|_{\\delta(\\rho_1)}\\approx-\\frac{i\\eta^2}{4}\\int d\\Gamma\\,\n\\delta(\\rho_4)\\delta(\\rho_1)\\delta(\\xi_1) \\Big[i\\delta(\\T)-z_\\ga y^\\ga\\Big]\n\\exp\\Big\\{i\\T z_\\ga \\widetilde{P}^\\ga-i\\xi_2 \\widetilde{P}^\\ga y_\\ga \\\\\n+i(1-\\rho_2)(p_2 
{}^\\ga +p_3 {}^\\ga)y_\\ga+ip_{3\\ga} y^\\ga+it^\\beta p_{1\\beta} \\Big\\}\\go CCC,\n\\end{multline}\n\\begin{multline}\\label{D7}\n\\go\\ast {B}^{\\eta\\eta}_3\\bigg|_{\\delta(\\rho_1)}\\approx \\frac{i\\eta^2}{4}\\int d\\Gamma\\, \\delta(\\rho_4)\\delta(\\rho_1)\\delta(\\xi_3)\\Big[i\\delta(\\T)-z_\\ga(y^\\ga+t^\\ga)\\Big]\\exp\\Big\\{i\\T z_\\ga \\widetilde{P}^\\ga-i\\xi_2 \\widetilde{P}^\\ga y_\\ga \\\\\n+i\\xi_1 \\widetilde{P}^\\ga t_\\ga+i(1-\\rho_2)(p_2 {}^\\ga +p_3 {}^\\ga)y_\\ga+ip_{3\\ga} y^\\ga+it^\\beta p_{1\\beta} \\Big\\}\\go CCC\\,.\n\\end{multline}\n\n\n\n\n\n\n\n\n\nThe sum of \\eqref{D6} and \\eqref{D7} gives\n\\begin{multline}\n\\Big(\\dr_x {B}^{\\eta\\eta}_3+\\omega \\ast B_3^{\\eta\\eta}\\Big)\\bigg|_{\\gd(\\rho_1)} \\approx\n \\frac{i\\eta^2}{4}\\int d\\Gamma\\,\\gd(\\rho_4)\\gd(\\rho_1)\\Big[z_\\gga(-t^\\gga-y^\\ga)\\gd(\\xi_3)+z_\\gga y^\\gga\\gd(\\xi_1)+z_\\gga t^\\gga\\gd(\\xi_2)\\Big]\\times\\\\\n\\times \\exp\\Big\\{i\\T z_\\ga \\widetilde{P}^\\ga-i\\xi_2 \\widetilde{P}^\\ga y_\\ga\n+i\\xi_1 \\widetilde{P}^\\ga t_\\ga+i(1-\\rho_2)(p_2 {}^\\ga +p_3 {}^\\ga)y_\\ga+ip_{3\\ga} y^\\ga+it^\\gb p_{1\\gb} \\Big\\} \\go CCC \\\\\n-\\frac{i\\eta^2}{4}\\int d\\Gamma\\, \\gd(\\rho_4)\\gd(\\rho_1)\n(z_\\gga t^\\gga)\\gd(\\xi_2)\\exp\\Big\\{i\\T z_\\ga \\widetilde{P}^\\ga-i\\xi_2 \\widetilde{P}^\\ga y_\\ga\n+i\\xi_1 \\widetilde{P}^\\ga t_\\ga+i(1-\\rho_2)(p_2 {}^\\ga +p_3 {}^\\ga)y_\\ga \\\\\n+ip_{3\\ga} y^\\ga+it^\\gb p_{1\\gb}\\Big\\}\\go CCC \\,+ J_9+J_{10}\n \\label{ERRGT}\\end{multline}\n with $J_9$ \\eq{goB3modH1406gr1C} and $J_{10}$ \\eq{dxB3modH1406gr1C}.\nBy virtue of GT identity \\eqref{GTH+==0} the first term weakly equals $J_{11}$ \\eq{ERRGTC}.\n Finally, Eq.~\\eq{ERRGT} yields\n\\begin{multline} \\label{dB3+wB3}\n\\Big(\\dr_x {B}^{\\eta\\eta}_3+\\omega \\ast B_3^{\\eta\\eta}\\Big)\\bigg|_{\\gd(\\rho_1)} \\approx\n-\\frac{i\\eta^2}{4}\\int d\\Gamma\\, \\gd(\\rho_4)\\gd(\\rho_1) (z_\\gga t^\\gga)\\gd(\\xi_2)\\exp\\Big\\{i\\T z_\\ga 
\\widetilde{P}^\\ga-i\\xi_2 \\widetilde{P}^\\ga y_\\ga\n \\\\\n+i\\xi_1 \\widetilde{P}^\\ga t_\\ga+i(1-\\rho_2)(p_2 {}^\\ga +p_3 {}^\\ga)y_\\ga+ip_{3\\ga} y^\\ga+it^\\gb p_{1\\gb}\\Big\\}\n\\go CCC\\, + J_9+J_{10}+J_{11}.\n\\end{multline}\nConsider $W_{1 \\go C}^\\eta \\ast B_2^{\\eta\\, loc}$ \\eq{origW1B2}.\nIt is convenient to change integration variables,\npassing from integration over the simplex to integration over the square. As a result,\n \\begin{multline}\\label{W1B2mod1}\nW_{1 \\go C}^\\eta \\ast B_2^{\\eta\\, loc}\\approx \\frac{\\eta^2}{4}\\int_0^1 d\\T\\, \\T \\int d^2 \\tau_+\\, \\gd(1-\\tau_1-\\tau_2)\\int_0^1 d\\sigma_1 \\int_0^1 d\\sigma_2\\, (z_\\ga t^\\ga)\\times \\\\\n\\Big[z_\\ga y^\\ga+\\sigma_1 z_\\ga t^\\ga\\Big]\\exp\\Big\\{i\\T z_\\ga y^\\ga+i(1-\\sigma_2)\\sigma_1 t_\\ga p_1 {}^\\ga+i\\sigma_1\\sigma_2 t^\\ga p_{3\\ga}+i(1-\\sigma_1)t^\\ga p_{1\\ga} \\\\\n+i\\T z_\\ga \\Big((\\tau_1+\\tau_2 \\sigma_1)t^\\ga+\\tau_1 p_1 {}^\\ga-(\\tau_2-\\tau_1(1-\\sigma_2))p_2 {}^\\ga-(\\tau_2+\\sigma_2\\tau_1)p_3 {}^\\ga\\Big)+i\\sigma_1 y^\\ga t_\\ga \\\\\n-i(1-\\sigma_2)y^\\ga p_{2\\ga}+i\\sigma_2 y^\\ga p_{3\\ga} \\Big\\}\\go CCC.\n\\end{multline}\nPartial integration with respect to $\\T$\nyields\n\\begin{multline}\nW_{1 \\go C}^\\eta \\ast B_2^{\\eta\\, loc}\\approx-\\frac{\\eta^2}{4}\\int_0^1 d\\T \\int d^2 \\tau_+\\,\n\\gd(1-\\tau_1-\\tau_2)\\int_0^1 d\\sigma_1 \\int_0^1 d\\sigma_2\\, (z_\\ga t^\\ga)\\times \\\\\n\\Big[\\T z_\\ga \\Big(\\tau_1(p_1 {}^\\ga+p_2 {}^\\ga)-(\\tau_2+\\sigma_2 \\tau_1)(p_2 {}^\\ga +p_3 {}^\\ga)\\Big)\n-i\\T \\tau_1 (1-\\sigma_1) z_\\ga t^\\ga\\Big]\\,\\exp(\\KEE)\\,\\,\\go CCC\\q\n\\end{multline}\nwhere\n\\begin{multline}\\label{tildeEe}\n\\KEE=i\\T z_\\ga y^\\ga+it^\\gb p_{1\\gb}+i\\sigma_1\\Big(y^\\ga t_\\ga+(p_1 {}^\\ga+p_2 {}^\\ga)t_\\ga\n-\\sigma_2(p_2 {}^\\ga+p_3 {}^\\ga)t_\\ga\\Big)-i\\big(\\sigma_2 p_3 {}^\\ga-(1-\\sigma_2)p_2 {}^\\ga\\big)y_\\ga \\\\\n+i\\T z_\\ga \\Big(\\tau_1(p_1 {}^\\ga 
+p_2 {}^\\ga)-(\\tau_2+\\sigma_2\\tau_1)(p_2 {}^\\ga+p_3 {}^\\ga)\n+(\\sigma_1+\\tau_1(1-\\sigma_1))t^\\ga\\Big).\n\\end{multline}\nBy virtue of evident formulas\n\\bee\\nn&&\n\\tau_1 \\left(\\frac{\\p}{\\p \\tau_1}-\\frac{\\p}{\\p \\tau_2}\\right)\n\\KEE=i\\T z_\\ga \\Big(\\tau_1(p_1 +p_2 {} )+\\big[(\\tau_1+\\tau_2)-(\\tau_2+\\sigma_2\\tau_1)\\big]\n(p_2 {} +p_3 {} )+\\tau_1(1-\\sigma_1)t \\Big){}^\\ga \\q\n\\\\ \\nn&&\\frac{\\p}{\\p \\sigma_1}\\KEE=\ni\\T (1-\\tau_1)z_\\ga t^\\ga+i\\Big(y^\\ga+p_1 {}^\\ga+p_2 {}^\\ga-\\sigma_2 (p_2 {}^\\ga+p_3 {}^\\ga)\\Big)t_\\ga,\n\\eee\n Eq.~\\eqref{W1B2mod1} acquires the form\n\\begin{multline}\nW_{1 \\go C}^\\eta \\ast B_2^{\\eta\\, loc}\\approx\\frac{\\eta^2}{4}\\int_0^1 d\\T\\int d^2\n\\tau_+\\gd(1-\\tau_1-\\tau_2)\\int_0^1 d\\sigma_1 \\int_0^1 d\\sigma_2\\bigg[iz_\\ga t^\\ga \\tau_1\n\\left(\\frac{\\p}{\\p \\tau_1}-\\frac{\\p}{\\p \\tau_2}\\right) \\\\\n-\\frac{z_\\ga (p_2 {}^\\ga+ p_3 {}^\\ga)}{1-\\tau_1}\\left(i\\frac{\\p}{\\p \\sigma_1}\n+\\Big(y^\\ga+p_1 {}^\\ga+p_2 {}^\\ga-\\sigma_2(p_2 {}^\\ga+p_3 {}^\\ga)\\Big)t_\\ga\\right)+iz_\\ga t^\\ga\\bigg]\n\\exp(\\KEE )\\go CCC.\n\\end{multline}\nAfter partial integrations in $\\tau_1$,$\\tau_2$ and $\\sigma_1$ one obtains\n\\bee&&\\label{underlC10}\nW_{1 \\go C}^\\eta \\ast B_2^{\\eta\\, loc}\\approx\\frac{\\eta^2}{4}\\int_0^1\nd\\T\\int d^2 \\tau_+\\gd(1-\\tau_1-\\tau_2)\\int_0^1 d\\sigma_1 \\int_0^1 d\\sigma_2\n\\bigg[\\underline{iz_\\ga t^\\ga \\gd(\\tau_2)} \\\\\n&&\\nn+\\frac{z_\\ga (p_2 {}^\\ga+ p_3 {}^\\ga)}{1-\\tau_1}\\left(i\\big(\\gd(\\sigma_1)-\\gd(1-\\sigma_1)\\big)\n-\\Big(y^\\ga+p_1 {}^\\ga+p_2 {}^\\ga-\\sigma_2(p_2 {}^\\ga+p_3 {}^\\ga)\\Big)t_\\ga\\right)\\bigg]\n\\exp (\\KEE )\\go CCC\\,.\n\\eee\nAfter a simple change of integration variables the underlined term on the \\rhs of Eq.~\\eq{underlC10}\n cancels the \\rhs of Eq.~\\eqref{dB3+wB3}. 
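The boundary terms $\\gd(\\gt_2)$ and $\\gd(\\sigma_1)-\\gd(1-\\sigma_1)$ in \\eq{underlC10}, like all the $\\gd$-function terms generated by partial integrations in this paper, originate from the elementary identity (recalled here for the reader's convenience; $f$ and $g$ are any functions smooth on $[0,1]$)\n\\be\n\\int_0^1 d\\sigma\\, f(\\sigma)\\,\\frac{\\p}{\\p \\sigma}\\, e^{g(\\sigma)}=\\int_0^1 d\\sigma \\Big[\\big(\\gd(1-\\sigma)-\\gd(\\sigma)\\big)f(\\sigma)-\\frac{\\p f(\\sigma)}{\\p \\sigma}\\Big] e^{g(\\sigma)}\\q\n\\ee\nso that each derivative acting on the exponential is traded for $\\gd$-functions at the integration limits plus derivatives of the pre-exponential.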
Performing integration with respect to $\\gt_2$\nin the remaining part of \\eq{underlC10},\nafter the following change of the integration variables\n\\bee\\nn&&\n\\int_0^1 d\\sigma_1\\int_0^1 d\\gt_1 \\int_0^1 d\\sigma_2\\, f(\\sigma_1,1-\\sigma_1,\\gt_1,\\sigma_2)\\\\\\nn&&\n=\\int d^4 \\rho_+\\, \\delta\\left(1-\\sum_{j=1}^4 \\rho_j\\right)\\frac{1}{(\\gr_2+\\gr_3)(1-\\gr_2-\\gr_3)}\nf\\left(\\frac{\\gr_1}{1-\\gr_2-\\gr_3},\\frac{\\rho_4}{1-\\gr_2-\\gr_3},\\gr_2+\\gr_3,\\frac{\\gr_2}{\\gr_2+\\gr_3}\\right)\n\\,, \\eee\n$\\exp(\\KEE)$ \\eq{tildeEe} acquires the form $\\Ee$ \\eq{Ee}. As a result, the sum of Eq.~\\eq{underlC10} and Eq.~\\eqref{dB3+wB3}, by virtue of Eq.~\\eqref{EEgx14=},\nyields Eq.~\\eqref{F4}.\n\n\n\n\\addtocounter{appendix}{1}\n\\renewcommand{\\theequation}{\\Alph{appendix}.\\arabic{equation}}\n\\addtocounter{section}{1}\n\\setcounter{equation}{0}\n\\setcounter{subsection}{0}\n \\addcontentsline{toc}{section}{\\,\\,\\,\\,\\,\\,\\,Appendix C: Eliminating $\\gd(\\gr_j)$ and $\\gd(\\gx_j)$}\n \\renewcommand{\\thesubsection}{\\Alph{appendix}.\\arabic{subsection}}\n\n\\section*{Appendix C: Eliminating $\\gd(\\gr_j)$ and $\\gd(\\gx_j)$}\n\\label{AppD}\nTo eliminate $\\gd(\\gr_j)$ and $\\gd(\\gx_j)$ from the \\rhss of Eqs.~\\eqref{F1}, \\eqref{F2},\nit is convenient to group similar pre-exponential terms as in Sections \\ref{Eli1}--\\ref{Eli4}.\n \\subsection{Terms proportional to $( p_1{}+ p_2)^{\\ga} ( p_3{}+p_2)_{\\ga}$}\n\\label{Eli1}\n\nConsider\n$F_{1,1}+F_{2,1}$ of \\eqref{F1} and \\eqref{F2}, respectively.\nPartial integration with respect to $\\rho_1$ and $\\rho_4$ yields\n\\begin{multline}\\label{F11+F21b}\nF_{1,1}+F_{2,1}\\approx - \\frac{\\eta^2}{4}\\int d\\Gamma\\frac{\\delta(\\xi_3)\\rho_2}{(1-\\rho_1-\\rho_4)(1-\\rho_3)}(p_1 {}^\\ga+p_2 {}^\\ga)(p_{2\\ga}+p_{3\\ga}) \\times \\\\\n\\times (z_\\gga \\PP^\\gga)\\left(\\frac{\\p}{\\p \\rho_4}\n-\\frac{\\p}{\\p \\rho_1}\\right)\\Ee \\go CCC.\n\\end{multline}\nBy direct calculation, 
Eq.~\\eqref{F11+F21b} gives\n\\begin{multline}\nF_{1,1}+F_{2,1}\\approx -\\frac{\\eta^2}{4}\\int d\\Gamma\\frac{\\delta(\\xi_3)\\rho_2}{(1-\\rho_1-\\rho_4)(1-\\rho_3)}(p_1 {}^\\ga+p_2 {}^\\ga)(p_{2\\ga}+p_{3\\ga})\\times\\\\\n\\Bigg[\\Ez \\left(\\frac{\\p}{\\p \\rho_4}-\n\\frac{\\p}{\\p \\rho_1}\\right) (z_\\gga \\PP^\\gga) E+(z_\\gga \\PP^\\gga)\n\\T (z_\\ga t^\\ga)\\mathcal{E} \\Bigg]\\go CCC\\,.\n\\end{multline}\nBy virtue of the Schouten identity\n\\begin{equation}\n z_\\ga t^\\ga (p_1+p_2 )^\\gga ( p_3{} +p_2{} )_\\gga=\n t^\\ga(p_1+p_2 ){} _\\ga z^\\gga ( p_3{} +p_2{} )_\\gga+ t^\\ga ( p_3{} +p_2{} )_\\ga (p_1+p_2 )^\\gga z _\\gga\n\\end{equation}\nand its consequence\n\\begin{multline}\\label{SchCons}\n z_\\ga t^\\ga (p_1+p_2 )^\\gga ( p_3{} +p_2{} )_\\gga \\E\n=t^\\ga(p_{1 }+p_{2 })_\\ga\\left[i\\left(\\frac{\\overleftarrow{\\p}}{\\p \\rho_2}\n-\\frac{\\overleftarrow{\\p}}{\\p \\rho_3}\\right)\\Ez E+i\\Ez\n\\left(\\frac{\\p}{\\p \\rho_2}-\\frac{\\p}{\\p \\rho_3}\\right)E\\right] \\\\\n+t^\\ga (p_{2 }+p_{3 })_\\ga\\left[i\\left(\\frac{\\overleftarrow{\\p} }\n{\\p \\rho_2}-\\frac{\\overleftarrow{\\p}}{\\p \\rho_1}\\right)\\Ez E\n+i\\Ez \\left(\\frac{\\p}{\\p \\rho_2}-\\frac{\\p}{\\p \\rho_1}\\right) E\\right]\\,\n\\end{multline}\nEq.~\\eqref{F11+F21b} yields\n\\begin{multline}\\label{F11+F21}\nF_{1,1}+F_{2,1}\\approx + \\frac{\\eta^2}{4}\\int d\\Gamma\\, \\delta(\\xi_3) \\Bigg\\{\\ff{(z_{\\gga}\\PP ^{\\gga})\\gr_2}{(1- \\gr_1 -\\gr_4)(1-\\gr_3) }\\times\\\\\n\\times\\Bigg(( p_1{}+ p_2)^{\\ga} ( p_3{}+p_2)_{\\ga}\n \\Ez \\Bigg[\\frac{\\p}{\\p \\rho_1}-\\frac{\\p}{\\p \\rho_4}\\Bigg] E\n +t^\\ga (p_1+p_2 ){} _\\ga \\Bigg[ \\underline{ \\gd(\\gr_3)} \\Ee\n - \\Ez\\Bigg(\\frac{\\p}{\\p \\rho_2}-\\frac{\\p}{\\p \\rho_3}\\Bigg)E \\Bigg] \\\\\n+t^\\ga ( p_3{} +p_2{} )_\\ga \\Bigg[ \\underline{ \\gd(\\gr_1)}\\Ee\n - \\Ez \\Bigg(\\frac{\\p}{\\p \\rho_2}-\\frac{\\p}{\\p \\rho_1}\\Bigg)E \\Bigg]\n\\Bigg)+\\ff{\\gr_2}{(1- \\gr_1 -\\gr_4)(1-\\gr_3) }\n\\Big( t^\\ga z 
_\\ga ( p_3+p_2)^{\\gga}(p_1+p_2 ){}_{\\gga} \\Ee\n \\Big)\\\\\n+(z_{\\gga}\\PP ^{\\gga})\\Bigg(- \\ff{1-\\gr_3-\\gr_2}{(1- \\gr_1 -\\gr_4)(1-\\gr_3)^2 } t^\\ga (p_1+p_2 ){} _\\ga\n \\Ee- \\ff{1- \\gr_1 -\\gr_4-\\gr_2}{(1- \\gr_1 -\\gr_4)^2(1-\\gr_3) }t^\\ga ( p_3{} +p_2{} )_\\ga\n \\Ee \\Bigg)\\Bigg\\}\\go CCC\\,.\n\\end{multline}\nOne can see that $\\delta(\\rho_1) $- and\n$\\delta(\\rho_3) $-proportional terms on the \\rhs of \\eq{F11+F21} (the underlined ones)\ncancel terms $F_{2,4}$ \\eqref{F2} and $F_{3,3}$ \\eqref{F3}, respectively.\n\n\n\n\n\n\n\\subsection{Term proportional to $t^\\ga(p_{1\\ga}+p_{2\\ga})$}\nConsider term $F_{3,5}$ of $F_{3 }$ \\eqref{F3}. By virtue of the following identity\n\\begin{equation}\n\\frac{\\rho_2}{(\\rho_2+\\rho_3)(1-\\rho_3)}\\left(\\delta(\\rho_3)-\\delta(\\rho_2)\\right)=1\n\\end{equation}\n\\begin{multline}\nF_{3,5} \\approx -\\frac{\\eta^2}{4}\\int d\\Gamma\\, \\frac{\\delta(\\xi_3)\\rho_2}{(\\rho_2+\\rho_3)(1-\\rho_3)}\\Big(\\delta(\\rho_3)-\\delta(\\rho_2)\\Big)\n\\\\ \\Big[ ( p_2{}_\\ga+ p_1{}_\\ga) t^{\\ga}\n (z_{\\gga}t^\\gga)\\Big( (1-\\gr_4) -\\ff{\\gr_1}{(\\gr_1+\\gr_4)} \\Big)\\Ee \\Big]\\go CCC.\n\\end{multline}\nPartial integrations along with the Schouten identity\n\\begin{equation}\nt^\\ga(p_{1\\ga}+p_{2\\ga} ) ( p_3{}^\\gga +p_2{}^\\gga ) z_\\gga\n= - \\underline{t^\\ga z _\\ga} (p_1{}^\\gga+p_2{}^\\gga ) ( p_{2\\gga}+p_{3\\gga})\n+ t^\\ga ( p_{3\\ga} +p_{2\\ga} ) \\underline{ (p_1+p_2 )^\\gga z _\\gga}\n\\end{equation}\nand realization of the underlined terms as derivative of $\\Ez$\n along with further partial integration\nyields\n\\begin{multline}\\label{F35=}\nF_{3,5}\\approx -\\frac{\\eta^2}{4}\\int d\\Gamma\\, \\delta(\\xi_3)\\Bigg[\\ff{ \\gr_4}{ (1-\\gr_3)^2} \\Big( ( p_2{}_\\ga+ p_1{}_\\ga) t^{\\ga} z_{\\gga}t^\\gga \\Big)\\Ee \\\\\n+\\ff{\\gr_2\\gr_4}{(\\gr_1+\\gr_4)(1-\\gr_3)} \\Big( ( p_2{}_\\ga+ p_1{}_\\ga) t^{\\ga}z_{\\gga}t^\\gga \\Big)\\Ez\\left[\\frac{\\p}{\\p \\rho_2}-\\frac{\\p}{\\p 
\\rho_3}\\right]E+ \\ff{\\gr_2\\gr_4}{(\\gr_1+\\gr_4)(1-\\gr_3)} (z_{\\ga}t^\\ga) \\times\\\\\n\\times\\Bigg( - (p_1+p_2 )^\\gga ( p_3{} +p_2{} )_\\gga \\Ez \\Bigg[\\frac{\\p}{\\p \\rho_1}-\\frac{\\p}{\\p \\rho_4}\\Bigg]E\n- \\underline{\\gd(\\gr_1)} (p_1+p_2 )^\\gga ( p_3{} +p_2{} )_\\gga\\Ee\\\\\n - t^\\ga( ( p_3{} +p_2{} )_\\ga )\\Ez \\Bigg[\\frac{\\p}{\\p \\rho_1}-\\frac{\\p}{\\p \\rho_2}\\Bigg]E\n- \\underline{\\gd(\\gr_1)}t^\\ga( ( p_3{} +p_2{} )_\\ga )\\Ee\\Bigg)\\\\\n+(z_{\\ga}t^\\ga)\\Bigg(\n\\ff{ \\gr_2}{ (1-\\gr_3) (\\gr_1+\\gr_4)} (p_1+p_2 )^\\gga ( p_3{} +p_2{} )_\\gga\\Ee\n + \\ff{\\gr_4 }{(\\gr_1+\\gr_4)^2} t^\\ga( ( p_3{} +p_2{} )_\\ga )\\Ee\n\\Bigg) \\Bigg]\\go CCC.\n\\end{multline}\nOne can see that the sum of the underlined $\\delta(\\rho_1)$-proportional terms cancel $F_{2,2}+F_{2,3}$ of \\eqref{F2}.\n\n\n\n\n\n\n\n\\subsection{Sum of $( p_1{}+ p_2)^{\\ga} ( p_3{}+p_2)_{\\ga}$-proportional and\n $t^\\ga(p_{1\\ga}+p_{2\\ga})$--proportional terms}\n\nSumming up $F_{1,1}+F_{2,1} $ \\eqref{F11+F21},\n $F_{3,3}$ \\eqref{F3},\n$F_{3,5}$ \\eqref{F35=} and $F_{2,2}+F_{2,3}+F_{2,4}$ \\eqref{F2}, then performing partial integrations\nand using the following simple identities\n\\begin{equation}\n(1-\\gr_4) -\\ff{\\gr_1}{(\\gr_1+\\gr_4)}=\n \\ff{\\gr_4( \\gr_2+\\gr_3)}{(\\gr_1+\\gr_4)},\n\\end{equation}\n\\begin{equation}\n- \\ff{\\gr_4 }{(\\gr_1+\\gr_4)^2} + \\ff{\\gr_4}{( \\gr_1+\\gr_4)}\n\\ff{ \\gr_3}{(1- \\gr_1 -\\gr_4) (1-\\gr_3) }\n= \\ff{-\\gr_2\\gr_4 }{(\\gr_1+\\gr_4)^2(1- \\gr_1 -\\gr_4) (1-\\gr_3)}\\q\n\\end{equation}\none obtains by virtue of Eqs.~\\eq{tildet}-\\eq{PP=} \n\\be \\label{FRest1=}\n F_{1,1}+F_{2,1}+F_{2,4}+F_{3,3}+F_{3,5}+F_{2,2}+F_{2,3}=G_1 \\ee\n with $G_1$ \\eq{FRest1}.\n\n\n \\subsection{Terms proportional to $\\delta(\\xi_1)-\\delta(\\xi_2)$}\n\\label{Eli3}\n\n\n\nConsider a sum of\n$F_{1,4}$ \\eqref{F1} and $F_{2,8}$ \\eqref{F2}.\nPerforming partial integrations with respect to $\\rho_1$ and $\\rho_4$, then applying the 
Schouten identity\n one obtains\n\\begin{multline}\\label{F14+F28}\n F_{1,4}+F_{2,8}\\approx -\\frac{\\eta^2}{4}\\int d\\Gamma \\, \\delta(\\xi_3)\n \\Bigg[\\frac{\\p}{\\p \\rho_1}-\\frac{\\p}{\\p \\rho_4}\\Bigg]\\frac{i z_\\ga (p_1 {}^\\ga+p_2 {}^\\ga)}{1-\\rho_3}\n \\Big(\\delta(\\xi_2)-\\delta(\\xi_1)\\Big)\\Ee \\,\\go CCC=\\\\\n=-\\frac{\\eta^2}{4}\\int d\\Gamma \\,\n\\delta(\\xi_3)\\Big(\\delta(\\xi_2)-\\delta(\\xi_1)\\Big)\\Bigg\\{\\frac{i\\, z_\\gga t^\\gga}{(1-\\rho_3)}\n\\Bigg(\\Ez\\Bigg[\\frac{\\p}{\\p \\rho_1}-\\frac{\\p}{\\p \\rho_2}\\Bigg]E+\\Big(\\underline{\\delta(\\rho_1)}\n-\\underline{\\underline{\\delta(\\rho_2)}}\\Big)\\Ee\\Bigg) \\\\\n+\\frac{i\\, z_\\ga (p_1 {}^\\ga+p_2 {}^\\ga)}{(1-\\rho_3)}\\Ez\n\\Bigg[\\frac{\\p}{\\p \\rho_1}-\\frac{\\p}{\\p \\rho_4}\\Bigg]E\\Bigg\\}\\go CCC.\n\\end{multline}\n\nThe underlined $ \\gd(\\gr_1)$-proportional term compensates $F_{2,9}$ of \\eqref{F2}.\nThe double underlined $\\gd(\\gr_2)$-proportional term vanishes due to the factor of $(\\gd(\\gx_2)-\\gd(\\gx_1))$\nwhich after partial integrations in $\\xi_1$ and $\\xi_2$ produces an expression\nproportional to $\\rho_2$.\n\n\nSumming up $F_{1,4}+F_{2,8}$ \\eq{F14+F28} and $F_{2,9}$ \\eqref{F2},\nperforming partial integrations with respect to $\\gx$ and $\\T$ along with the Schouten identity\none obtains \\be \\label{FRest2G}\n F_{1,4}+F_{2,8}+F_{2,9}\\approx G_2\n\\ee\nwith $G_2$ \\eq{FRest2}.\n\n\n \\subsection{Terms proportional to $\\xi_1 \\delta(\\xi_2)$ }\n\\label{Eli4}\n\nConsider a sum of $F_{1,3}$ \\eqref{F1}, $F_{2,6}$ \\eqref{F2} and $F_{4,1}$ \\eqref{F4}.\n\\bee\nF_{1,3}+F_{2,6}+F_{4,1}\\approx \\frac{i\\eta^2}{4}\n\\int d\\Gamma\\,\\ff{ \\delta(\\xi_3)\\delta(\\xi_2)[\\delta(\\rho_1)-\\delta(\\rho_4)]}{(\\rho_2+\\rho_3)} z_\\ga\n\\bigg\\{\\frac{\\PP^\\ga\n}{(1-\\rho_3)}\n- \\frac{\\xi_1 \\,(p_2 {}^\\ga+p_3 {}^\\ga)}{\n(\\rho_1+\\rho_4)}\n\\bigg\\}\\Ee\\, \\go CCC.\\quad\n\\eee\nPartial integration 
yields\n\\begin{multline}\\label{FRest2_5-}\nF_{1,3}+F_{2,6}+F_{4,1}\\approx\\frac{i\\eta^2}{4}\\int d\\Gamma\\,\n\\gd(\\xi_3)\\gd(\\xi_2)\\xi_1 \\Bigg\\{z_\\ga t^\\ga\\Bigg[\\frac{1}{\\rho_1+\\rho_4}\n\\bigg(\\Ez\\bigg[\\frac{\\p}{\\p \\rho_2}-\\frac{\\p}{\\p \\rho_3}\\bigg]E+\\Big[\\underline{\\gd(\\rho_2)}\n-\\gd(\\rho_3)\\Big]\\Ee\\bigg)\n \\\\\n+\\frac{1}{1-\\rho_3}\\bigg(\\Ez \\bigg[\\frac{\\p}{\\p \\rho_1}-\\frac{\\p}{\\p \\rho_2}\\bigg]E\n+\\Big[\\gd(\\rho_1)-\\underline{\\gd(\\rho_2)}\\Big]\\Ee\\bigg)\\Bigg] \\\\\n+\\bigg[\\frac{z_\\ga(p_2 {}^\\ga+p_3 {}^\\ga)}{\\rho_1+\\rho_4}\n+\\frac{z_\\ga(p_1 {}^\\ga+p_2 {}^\\ga)}{1-\\rho_3}\\bigg]\\Ez\n\\bigg[\\frac{\\p}{\\p \\rho_1}-\\frac{\\p}{\\p \\rho_4}\\bigg]E\\Bigg\\}\\go CCC\n\\,.\\end{multline}\nOne can see that the underlined $\\gd({\\gr_2})$-proportional terms vanish\ndue to the factor of $\\gd(1-\\sum\\gr_i)$ \\eq{dGamma}, while $\\gd({\\gr_1})$-proportional term compensates\n $F_{2,7}$\n\\eqref{F2} and $\\gd({\\gr_3})$-proportional term\ncompensates $F_{3,2}$ \\eqref{F3}.\n\nSumming up $F_{2,7}$ \\eqref{F2}, $F_{3,2}$ \\eqref{F2}, $F_{4,2}$ and\n$F_{1,3}+F_{2,6}+F_{4,1}$ \\eqref{F4},\nand then\nperforming partial integration in $\\T$ one obtains by virtue of the Schouten\nidentity\n\\begin{multline}\\label{G3=}\nF_{1,3}+F_{2,6}+F_{4,1}+F_{2,7}+F_{3,2}+F_{4,2}\\approx G_3:=\\frac{\\eta^2}{4}\\int d\\Gamma\\, \\delta(\\xi_3)\\delta(\\xi_2)\\times\\\\\n\\times\\Bigg\\{\\frac{\\rho_2\\, (z_\\ga t^\\ga)(p_2 {}^\\gga+p_3 {}^\\gga)(y_\\gga\n+\\tilde{t}_\\gga)}{(1-\\rho_1-\\rho_4)^2 (1-\\rho_3)(\\rho_1+\\rho_4)}\n+ \\frac{\\rho_2\\, \\Big[(\\tilde{t}^\\gga+y^\\gga)(y_\\gga+\\Pz_\\gga) (z^\\ga t_\\ga)+i\\delta(\\T)\nt_\\gga(\\tilde{t}^\\gga-\\Pz^\\gga)\\Big]}{(1-\\rho_1-\\rho_4)^2 (1-\\rho_3)^2 (\\rho_1+\\rho_4)} \\\\\n+\\frac{\\rho_3\\, \\big[i\\delta(\\T)-z_\\gga (y^\\gga+\\tilde{t}^\\gga)\\big]\n(t^\\ga y_\\ga)}{(1-\\rho_1-\\rho_4)^2 (1-\\rho_3)^2}\n+\\frac{\\big[-i\\delta(\\T)+z_\\gga 
(y^\\gga+\\tilde{t}^\\gga)\\big]\n(p_1 {}^\\ga+p_2 {}^\\ga)t_\\ga}{(1-\\rho_1-\\rho_4)(1-\\rho_3)^2} \\Bigg\\}\\Ee\\go CCC\\,.\n\\end{multline}\nSince by the partial integration procedure $\n \\gx_1\\gd(\\gx_2)\\equiv {1}+ \\gx_1(\\p_{\\gx_1}-\\p_{\\gx_2})$,\n \\eq{G3=} yields $G_{ 3}$ \\eq{FRest3}.\n\n\n\n\n\\addtocounter{appendix}{1}\n\\renewcommand{\\theequation}{\\Alph{appendix}.\\arabic{equation}}\n\\addtocounter{section}{1}\n\\setcounter{equation}{0}\n \\addcontentsline{toc}{section}{\\,\\,\\,\\,\\,\\,\\,Appendix D: Details of the final step of the calculation}\n \\renewcommand{\\thesubsection}{\\Alph{appendix}.\\arabic{subsection}}\n\\setcounter{subsection}{0}\n \\renewcommand{\\thesection}{\\Alph{appendix}}\n\n\\section*{Appendix D: Details of the final step of the calculation}\n\\label{AppE}\n\nBy virtue of Eqs.~\\eq{EEgx14=}-\\eq{Egx=21e}, Eq.~\\eq{SUM3} yields \\bee&&\n\n\\label{SUM=} S_6\n = + i \\ff{ \\eta^2 }{4 }\\int d\\Gamma \\, \\delta(\\xi_3) \\\\ \\nn&&\n \\Big\\{\n \n+(y+ \\tilde{t}{} )^{\\gga}z_{\\gga}\\ff{\\gr_2}{(1- \\gr_1 -\\gr_4)(1-\\gr_3) }{t}{}^\\ga (p_1+p_2 ){} _\\ga\n\\gx_1 \\ff{1-\\gr_3-\\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3 )^2}\\,\\, \\Pz{}^\\ga y_{\\ga}\n\\\\ \\nn&&\n+(y+ \\tilde{t}{} )^{\\gga}z_{\\gga}\\ff{\\gr_2}{(1- \\gr_1 -\\gr_4)(1-\\gr_3) }{t}{}^\\ga (p_1+p_2 ){} _\\ga\n \\gx_1 \\ff{1-\\gr_3-\\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3 )^2}\\big(y + \\Pz{} \\big)^\\ga\\tilde{t}{}_{\\ga}\n\\\\ \\nn&&\n+(y+ \\tilde{t}{} )^{\\gga}z_{\\gga}\\ff{\\gr_2}{(1- \\gr_1 -\\gr_4)(1-\\gr_3) }{t}{}^\\ga (p_1+p_2 ){} _\\ga\n (-) \\gx_1 \\ff{\\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3 )}\\,(p_3{}^{\\ga}+p_2{}^{\\ga}) y_{\\ga}\n\\\\ \\nn&&\n-(y+ \\tilde{t}{} )^{\\gga}z_{\\gga}\\ff{\\gr_2}{(1- \\gr_1 -\\gr_4)(1-\\gr_3) }{t}{}^\\ga (p_1+p_2 ){} _\\ga\n \\gx_1 \\ff{ \\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3)}(p_3{} +p_2{})^{\\gb}\\tilde{t}{}_{\\gb}\n+ \n\\eee\n\\bee \\nn&&\n+(y+ \\tilde{t}{} )^{\\gga}z_{\\gga}\\ff{\\gr_2}{(1- \\gr_1 -\\gr_4)(1-\\gr_3) }{t}{}^\\ga (p_1+p_2 
){} _\\ga\n(-) \\ff{ \\gr_1+\\gr_4}{ (1-\\gr_3 )^2}\\,\\,( (p{}_1 +p_2) )^\\ga y_{\\ga}\n \\\\ \\nn&&\n+(y+ \\tilde{t}{} )^{\\gga}z_{\\gga}\\ff{\\gr_2}{(1- \\gr_1 -\\gr_4)(1-\\gr_3) }{t}{}^\\ga (p_1+p_2 ){} _\\ga\n (-) \\ff{ \\gr_4 }{ (1-\\gr_3 )^2}\\,\\,{t}{} ^\\ga y_{\\ga}\n\\\\ \\nn&&\n+(y+ \\tilde{t}{} )^{\\gga}z_{\\gga}\\ff{\\gr_2}{(1- \\gr_1 -\\gr_4)(1-\\gr_3) }{t}{}^\\ga (p_1+p_2 ){} _\\ga\n (-)\\ff{ \\gr_1 }{(1-\\gr_3)^2} (p_1 +p_2 )^{\\ga}{t}{}_{\\ga}\\\\ \\nn&&\n \\\\ \\nn&&\n \n + \\ff{ \\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3)^2}\n ( p_1{}+ p_2)^{\\gga} \\big(y + (1 -\\gr_4 ){t}{}\\big)_{\\gga} z_\\ga(y+\\tilde{t}{})^{\\ga}\n (-) \\gx_1 \\ff{\\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3 )}\\,\\, {t}{}^\\ga y_{\\ga}\n \\\\ \\nn&&\n + \\ff{ \\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3)^2}\n ( p_1{}+ p_2)^{\\gga} \\big(y + (1 -\\gr_4 ){t}{}\\big)_{\\gga} z_\\ga(y+\\tilde{t}{})^{\\ga}\n\\eee\\bee \\nn&&\\times (-) \\gx_1 \\ff{ \\gr_2 }{(1-\\gr_1-\\gr_4 )(1-\\gr_3)( \\gr_1+\\gr_4 ) }\\big(\n y^\\ga+ \\Pz{}^\\ga \\big) {t}{}_{\\ga}\n \\\\ \\nn&&\n + \\ff{ \\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3)^2}\n ( p_1{}+ p_2)^{\\gga} \\big(y + (1 -\\gr_4 ){t}{}\\big)_{\\gga} z_\\ga(y+\\tilde{t}{})^{\\ga}\n \\ff{1}{ (1-\\gr_3 )}\\,\\, {t}{}^\\ga y_{\\ga}\\\\ \\nn&&\n + \\ff{ \\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3)^2}\n ( p_1{}+ p_2)^{\\gga} \\big(y + (1 -\\gr_4 ){t}{}\\big)_{\\gga} z_\\ga(y+\\tilde{t}{})^{\\ga}\n (-) \\ff{ 1 }{(1-\\gr_3)} (p_1 +p_2 )^{\\ga}{t}{}_{\\ga}\n \n+ \\eee\\bee \\nn&&\n \n + \\ff{ \\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3)^2}\n {t}{}^\\gga \\big(y + (1-\\gr_1-\\gr_4 )( p_1{} +p_2{} ){}\\big)_\\gga z_\\ga(y+\\tilde{t}{})^{\\ga}\n \\gx_1 \\ff{\\gr_3}{(1-\\gr_1-\\gr_4 )^2 }\\,\\,( - ( p_3+p_2) )^\\ga y_{\\ga}\n \\\\\\nn &&\n + \\ff{ \\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3)^2}\n {t}{}^\\gga \\big(y + (1-\\gr_1-\\gr_4 )( p_1{} +p_2{} ){}\\big)_\\gga z_\\ga(y+\\tilde{t}{})^{\\ga}\n \\gx_1 \\ff{ \\gr_3}{(1-\\gr_1-\\gr_4 )^2 }\n\\big( ( - ( p_3+p_2) )^\\ga \\big)\\tilde{t}{}_{\\ga}\n\\\\\\nn 
&&\n + \\ff{ \\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3)^2}\n {t}{}^\\gga \\big(y + (1-\\gr_1-\\gr_4 )( p_1{} +p_2{} ){}\\big)_\\gga z_\\ga(y+\\tilde{t}{})^{\\ga}\n\\\\ \\nn&&\\times \\gx_1 \\ff{\\gr_3\\gr_4}{(1-\\gr_1-\\gr_4 ) (1-\\gr_3 )(\\gr_1+\\gr_4)}\\,\\, {t}{} ^\\ga y_{\\ga}\n \\\\\\nn &&\n + \\ff{ \\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3)^2}\n {t}{}^\\gga \\big(y + (1-\\gr_1-\\gr_4 )( p_1{} +p_2{} ){}\\big)_\\gga z_\\ga(y+\\tilde{t}{})^{\\ga}\n \\gx_1 \\ff{1}{ (1-\\gr_3 )}\\,(p_1 +p_2 )^{\\ga} y_{\\ga}\n \\\\\\nn &&\n + \\ff{ \\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3)^2}\n {t}{}^\\gga \\big(y + (1-\\gr_1-\\gr_4 )( p_1{} +p_2{} ){}\\big)_\\gga z_\\ga(y+\\tilde{t}{})^{\\ga}\n \\gx_1 \\ff{ 1}{ (1-\\gr_3)}(p_1 +p_2 )^{\\ga}\\tilde{t}{}_{\\ga}\n \\eee\\bee\\nn &&\n + \\ff{ \\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3)^2}\n {t}{}^\\gga \\big(y + (1-\\gr_1-\\gr_4 )( p_1{} +p_2{} ){}\\big)_\\gga z_\\ga(y+\\tilde{t}{})^{\\ga}\n\\\\ \\nn&&\\times (-) \\gx_1 \\ff{ \\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3)} \\ff{\\gr_4}{(\\gr_1+\\gr_4)^2}\n\\big(y^\\ga+ \\Pz{}^\\ga \\big){t}{}_{\\ga}\n \\\\\\nn &&\n + \\ff{ \\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3)^2}\n {t}{}^\\gga \\big(y + (1-\\gr_1-\\gr_4 )( p_1{} +p_2{} ){}\\big)_\\gga z_\\ga(y+\\tilde{t}{})^{\\ga}\n(-) \\ff{ 1}{ (1-\\gr_3 )}\\,(p_1 +p_2 )^{\\ga} y_{\\ga}\n \\\\\\nn &&\n + \\ff{ \\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3)^2}\n {t}{}^\\gga \\big(y + (1-\\gr_1-\\gr_4 )( p_1{} +p_2{} ){}\\big)_\\gga z_\\ga(y+\\tilde{t}{})^{\\ga}\n (-) \\ff{ 1 }{(1-\\gr_3)} (p_1 +p_2 )^{\\ga}{t}{}_{\\ga}\n + \\eee \\bee\\nn &&\n \n+ \\gx_1\\Big[\n \\ff{ \\gr_2\\gr_2}{(1- \\gr_1 -\\gr_4)^3(1-\\gr_3)^3( \\gr_1+\\gr_4 ) }\n\\big( y+ (1-\\gr_1-\\gr_4 )( p_1{} +p_2{} )+ (1 -\\gr_4 ){t}{} \\big)^\\gga\n\\big( y + \\tilde{t}{} \\big)_{\\gga} z_\\ga {t}{}^{\\ga}\n \\\\ \\nn &&\n-\\ff{ \\gr_3\\gr_2}{(1- \\gr_1 -\\gr_4)^3(1-\\gr_3)^3 }\n {\\big( y + \\tilde{t}{} \\big)^\\gga z_{\\gga} {t}{}^{\\ga} y_{\\ga}}\n \\\\ \\nn &&\n + \\ff{ \\gr_2}{(1- \\gr_1 -\\gr_4)^2(1-\\gr_3)^3 } \\big( y\n + 
\\tilde{t}{} \\big)^\\gga z_{\\gga} { (p_1 +p_2 )^{\\ga}{t}{}_{\\ga}}\n \\Big]\n \\big(y + \\Pz{} \\big)^\\gb\n (y+\\tilde{t}{})_{\\gb}\\Big\\} \\Ee\\go CCC\\,.\n \\eee\nTerms from the \\rhs of \\eq{SUM=} with $\\gx$-independent pre-exponentials are considered in Section \\ref{NEgx},\nwhile those with $\\gx_1$-proportional pre-exponentials are considered in Section \\ref{gx}.\n\n\\subsection{ $\\gx_1$-independent pre-exponentials }\n\\label{NEgx}\nHere we consider only the pre-exponentials, omitting for brevity the integrals, integral measures, {\\it etc.}, of \\eq{SUM=}.\nBy virtue of the Schouten identity, taking into account that $\\sum \\gr_i=1$,\nEq.~\\eq{SUM=} yields\n\\bee\\nn && Integrand(S_6)\\Big|_{\\mod \\gx}=(y+ \\tilde{t}{} )^{\\gn}z_{\\gn}\\Big\\{\n-\n\\ff{\\gr_2(\\gr_1+\\gr_4)}{(1- \\gr_1 -\\gr_4)(1-\\gr_3)^3 }{t}{}^\\ga (p_1+p_2 ){} _\\ga\n \\,\\,( (p{}_1 +p_2) )^\\ga y_{\\ga}\n\\qquad \\\\ \\nn&&\n- \\ff{\\gr_2 \\gr_4 }{(1- \\gr_1 -\\gr_4)(1-\\gr_3)^3 }{t}{}^\\ga (p_1+p_2 ){} _\\ga\n \\,{t}{} ^\\ga y_{\\ga}\n\\\\ \\nn&&\n- \\ff{\\gr_2\\gr_1 }{(1- \\gr_1 -\\gr_4)(1-\\gr_3)^3 }{t}{}^\\ga (p_1+p_2 ){} _\\ga\n (p_1 +p_2 )^{\\ga}{t}{}_{\\ga}\n \\\\ \\nn&&\n + \\ff{ \\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3)^3}\n ( p_1{}+ p_2)^{\\gga} \\big(y + (1 -\\gr_4 ){t}{}\\big)_{\\gga}\n \\,\\, {t}{}^\\ga y_{\\ga}\\eee\\bee \\nn&&\n - \\ff{ \\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3)^3}\n ( p_1{}+ p_2)^{\\gga} \\big(y + (1 -\\gr_4 ){t}{}\\big)_{\\gga}\n (p_1 +p_2 )^{\\ga}{t}{}_{\\ga}\n \\\\ \\nn&&\n - \\ff{ \\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3)^3}\n {t}{}^\\gga \\big(y + (1-\\gr_1-\\gr_4 )( p_1{} +p_2{} ){}\\big)_\\gga \\,(p_1 +p_2 )^{\\ga} y_{\\ga}\n \\\\\\nn &&\n - \\ff{ \\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3)^3}\n {t}{}^\\gga \\big(y + (1-\\gr_1-\\gr_4 )( p_1{} +p_2{} ){}\\big)_\\gga\n (p_1 +p_2 )^{\\ga}{t}{}_{\\ga} \\Big\\}\\Ee\\go CCC \n \n\\nn \\\\\n\\nn &&=(y+ \\tilde{t}{} )^{\\gn}z_{\\gn}\\ff{\\gr_2 }{(1- \\gr_1 -\\gr_4)(1-\\gr_3)^3 }\\Big\\{\n \\gr_1{t}{}^\\ga (p_1+p_2 ){} _\\ga\n 
(p_1 +p_2 )^{\\ga}{t}{}_{\\ga}\n +\n ( p_1{}+ p_2)^{\\gga} y _{\\gga}\n \\,\\, {t}{}^\\ga y_{\\ga}\n \\\\ \\nn&&\n -\n ( p_1{}+ p_2)^{\\gga} (1 -\\gr_4 ){t}{}_{\\gga}\n (p_1 +p_2 )^{\\ga}{t}{}_{\\ga}\n \\\\ \\nn&&\n -\n {t}{}^\\gga y _\\gga \\,(p_1 +p_2 )^{\\ga} y_{\\ga}\n - {t}{}^\\gga (1-\\gr_1-\\gr_4 )( p_1{} +p_2{} ) _\\gga\n (p_1 +p_2 )^{\\ga}{t}{}_{\\ga} \\Big\\}\\Ee\\go CCC\n \\\\ \\nn\n &&=(y+ \\tilde{t}{} )^{\\gn}z_{\\gn}\\ff{\\gr_2 }{(1- \\gr_1 -\\gr_4)(1-\\gr_3)^3 }\\Big\\{\n - \\gr_1{t}{}^\\ga (p_1+p_2 ){} _\\ga\n (p_1{}^{\\gb}+p_2{}^{\\gb}){t}{}_{\\gb}\n \\\\ \\nn&&\n - ( p_1{}+ p_2)^{\\gga} (1 -\\gr_4 ){t}{}_{\\gga}\n (p_1 +p_2 )^{\\ga}{t}{}_{\\ga}\n -\n {t}{}^\\gga (1-\\gr_1-\\gr_4 )( p_1{} +p_2{} ) _\\gga\n (p_1 +p_2 )^{\\ga}{t}{}_{\\ga} \\Big\\}\\Ee\\go CCC\\equiv 0.\\eee\n\\subsection{ $\\gx_1$-proportional pre-exponentials}\n\\label{gx}\n\\bee\\label{lostcohomo2}&&\n S_6\\, \\Big|_{\\gx_1 }\n = J_7 + i \\ff{ \\eta^2 }{4 }\\int d\\Gamma \\delta(\\xi_3) \\\\ \\nn&&\\Big\\{\n \n(y+ \\tilde{t}{} )^{\\gga}z_{\\gga}\\ff{\\gr_2}{(1- \\gr_1 -\\gr_4)(1-\\gr_3) }{t}{}^\\ga (p_1+p_2 ){} _\\ga\n\\gx_1 \\ff{1-\\gr_3-\\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3 )^2}\\,\\, \\Pz{}^\\ga y_{\\ga}\n\\\\ \\nn&&\n+(y+ \\tilde{t}{} )^{\\gga}z_{\\gga}\\ff{\\gr_2}{(1- \\gr_1 -\\gr_4)(1-\\gr_3) }{t}{}^\\ga (p_1+p_2 ){} _\\ga\n \\gx_1 \\ff{1-\\gr_3-\\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3 )^2}\\big(y + \\Pz{} \\big)^\\ga\\tilde{t}{}_{\\ga}\n\\\\ \\nn&&\n-(y+ \\tilde{t}{} )^{\\gga}z_{\\gga}\\ff{\\gr_2}{(1- \\gr_1 -\\gr_4)(1-\\gr_3) }{t}{}^\\ga (p_1+p_2 ){} _\\ga\n \\gx_1 \\ff{\\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3 )}\\,(p_3{}^{\\ga}+p_2{}^{\\ga}) y_{\\ga}\n\\\\ \\nn&&\n-(y+ \\tilde{t}{} )^{\\gga}z_{\\gga}\\ff{\\gr_2}{(1- \\gr_1 -\\gr_4)(1-\\gr_3) }{t}{}^\\ga (p_1+p_2 ){} _\\ga\n \\gx_1 \\ff{ \\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3)}(p_3{} +p_2{})^{\\gb}\\tilde{t}{}_{\\gb}\n \\\\ \\nn&&\n - \\ff{ \\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3)^2}\n ( p_1{}+ p_2)^{\\gga} \\big(y + (1 -\\gr_4 
){t}{}\\big)_{\\gga} z_\\ga(y+\\tilde{t}{})^{\\ga}\n \\gx_1 \\ff{\\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3 )}\\,\\, {t}{}^\\ga y_{\\ga}\n \\\\ \\nn&&\n - \\ff{ \\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3)^2}\n ( p_1{}+ p_2)^{\\gga} \\big(y + (1 -\\gr_4 ){t}{}\\big)_{\\gga} z_\\ga(y+\\tilde{t}{})^{\\ga}\n \\gx_1 \\ff{ \\gr_2 }{(1-\\gr_1-\\gr_4 )(1-\\gr_3)( \\gr_1+\\gr_4 ) }\n \\\\\\nn &&\\times\n \\big( y^\\ga+ \\Pz{}^\\ga \\big) {t}{}_{\\ga}\n \n \\\\ \\ls\\ls\\ls\\nn&&\n \n + \\ff{ \\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3)^2}\n {t}{}^\\gga \\big(y + (1-\\gr_1-\\gr_4 )( p_1{} +p_2{} ){}\\big)_\\gga z_\\ga(y+\\tilde{t}{})^{\\ga}\n \\gx_1 \\ff{\\gr_3}{(1-\\gr_1-\\gr_4 )^2 }\\,\\,( - ( p_3+p_2) )^\\ga y_{\\ga}\n \\\\\\nn &&\n + \\ff{ \\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3)^2}\n {t}{}^\\gga \\big(y + (1-\\gr_1-\\gr_4 )( p_1{} +p_2{} ){}\\big)_\\gga z_\\ga(y+\\tilde{t}{})^{\\ga}\n \\gx_1 \\ff{ \\gr_3}{(1-\\gr_1-\\gr_4 )^2 }\n\\big( ( - ( p_3+p_2) )^\\ga \\big)\\tilde{t}{}_{\\ga}\n\\\\\\nn &&\n + \\ff{ \\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3)^2}\n {t}{}^\\gga \\big(y + (1-\\gr_1-\\gr_4 )( p_1{} +p_2{} ){}\\big)_\\gga z_\\ga(y+\\tilde{t}{})^{\\ga}\n \\\\\\nn &&\\times\n \\gx_1 \\ff{\\gr_3\\gr_4}{(1-\\gr_1-\\gr_4 ) (1-\\gr_3 )(\\gr_1+\\gr_4)}\\,\\, {t}{} ^\\ga y_{\\ga}\n \\\\\\nn &&\n + \\ff{ \\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3)^2}\n {t}{}^\\gga \\big(y + (1-\\gr_1-\\gr_4 )( p_1{} +p_2{} ){}\\big)_\\gga z_\\ga(y+\\tilde{t}{})^{\\ga}\n \\gx_1 \\ff{1}{ (1-\\gr_3 )}\\,(p_1 +p_2 )^{\\ga} y_{\\ga}\n \\\\\\nn &&\n + \\ff{ \\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3)^2}\n {t}{}^\\gga \\big(y + (1-\\gr_1-\\gr_4 )( p_1{} +p_2{} ){}\\big)_\\gga z_\\ga(y+\\tilde{t}{})^{\\ga}\n \\gx_1 \\ff{ 1}{ (1-\\gr_3)}(p_1 +p_2 )^{\\ga}\\tilde{t}{}_{\\ga}\n \\\\\\nn &&\n - \\ff{ \\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3)^2}\n {t}{}^\\gga \\big(y + (1-\\gr_1-\\gr_4 )( p_1{} +p_2{} ){}\\big)_\\gga z_\\ga(y+\\tilde{t}{})^{\\ga}\n \\\\\\nn &&\\times\n \\gx_1 \\ff{ \\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3)} 
\\ff{\\gr_4}{(\\gr_1+\\gr_4)^2}\n\\big(y^\\ga+ \\Pz{}^\\ga \\big){t}{}_{\\ga}\n \n \\\\ \\nn &&\n \n+ \\gx_1\\Big[ \n \\ff{ \\gr_2\\gr_2}{(1- \\gr_1 -\\gr_4)^3(1-\\gr_3)^3( \\gr_1+\\gr_4 ) }\n\\big( y+ (1-\\gr_1-\\gr_4 )( p_1{} +p_2{} )+ (1 -\\gr_4 ){t}{} \\big)^\\gga\n\\big( y + \\tilde{t}{} \\big)_{\\gga}\\\\ \\nn &&\\times\\Big\\{\n{t}{}^{\\ga}\\big(y + \\Pz{} \\big)_\\ga z^\\gs (y+\\tilde{t}{})_{\\gs}\n\\Big\\}\n-\\ff{ \\gr_3\\gr_2}{(1- \\gr_1 -\\gr_4)^3(1-\\gr_3)^3 }\n {\\big( y + \\tilde{t}{} \\big)^\\gga z_{\\gga} {t}{}^{\\ga} y_{\\ga}}\n \\big(y + \\Pz{} \\big)^\\gs\n (y+\\tilde{t}{})_{\\gs} \\\\ \\nn &&\n + \\ff{ \\gr_2}{(1- \\gr_1 -\\gr_4)^2(1-\\gr_3)^3 } \\big( y\n + \\tilde{t}{} \\big)^\\gga z_{\\gga} \\big(y + \\Pz{} \\big)^\\gs\n (y+\\tilde{t}{})_{\\gs} { (p_1 +p_2 )^{\\ga}{t}{}_{\\ga}}\n \\Big]\n\\Big\\}\\Ee\\go CCC\\q\n \\eee\nwhere $J_7$ is the cohomology term \\eq{Result6}\\,.\nThis yields\n\\bee\\label{Endofgx}&&\n S_6\\, \\Big|_{\\gx_1 } \\approx J_7 + i \\ff{ \\eta^2 }{4 }\\int d\\Gamma \\delta(\\xi_3)\n \\ff{\\gr_2 }{(1- \\gr_1 -\\gr_4)^2(1-\\gr_3)^2 }\n \\\\ \\nn&&\\gx_1(y+ \\tilde{t}{} )^{\\gga}z_{\\gga} \\Big\\{\n \\ff{ (1-\\gr_3-\\gr_2)}{ (1-\\gr_3) }{t}{}^\\ga (p_1+p_2 ){} _\\ga\n {\\big(y + \\Pz{} \\big)^\\gb(y+ \\tilde{t}{} )_{\\gb}}\n \\\\ \\nn&&\n- \\gr _2 {t}{}^\\ga (p_1+p_2 ){} _\\ga\n (p_3{}^{\\ga}+p_2{}^{\\gb})(y+ \\tilde{t}{} )_{\\gb}\n - \\ff{\\gr _2}{ (1-\\gr_3) }\n ( p_1{}+ p_2)^{\\gga} \\big(y + (1 -\\gr_4 ){t}{}\\big)_{\\gga}\n \\,\\, {t}{}^\\ga y_{\\ga}\n \\\\ \\nn&&\n - \\ff{\\gr _2}{ (1-\\gr_3) ( \\gr_1+\\gr_4 )}\n ( p_1{}+ p_2)^{\\gga} \\big(y + (1 -\\gr_4 ){t}{}\\big)_{\\gga}\n \\big(y+ \\Pz{} \\big)^\\ga {t}{}_{\\ga}\n \\\\ \\nn&&\n - \\ff{ \\gr_3}{(1-\\gr_1-\\gr_4 )^2 }\n {t}{}^\\gga \\big(y + (1-\\gr_1-\\gr_4 )( p_1{} +p_2{} ){}\\big)_\\gga\n ( p_3+p_2)^\\ga (y+ \\tilde{t}{} )_{\\ga}\n\\\\\\nn &&\n + \\ff{ \\gr_3\\gr_4}{ (1-\\gr_3) (\\gr_1+\\gr_4)}\n {t}{}^\\gga \\big(y + (1-\\gr_1-\\gr_4 )( p_1{} +p_2{} 
){}\\big)_\\gga\n \\, {t}{} ^\\ga y_{\\ga}\n \\\\\\nn &&\n + \\ff{ (1-\\gr_1-\\gr_4 )}{ (1-\\gr_3) }\n {t}{}^\\gga \\big(y + (1-\\gr_1-\\gr_4 )( p_1{} +p_2{} ){}\\big)_\\gga\n (p_1 +p_2 )^{\\ga}(y+ \\tilde{t}{} )_{\\ga}\n \\\\\\nn &&\n - \\ff{ \\gr _2\\gr_4}{ (1-\\gr_3) (\\gr_1+\\gr_4)^2}\n {t}{}^\\gga \\big(y + (1-\\gr_1-\\gr_4 )( p_1{} +p_2{} ){}\\big)_\\gga\n \\big(y^\\ga+ \\Pz{}^\\ga \\big){t}{}_{\\ga}\n \\\\\\nn &&\n +\\ff{ \\gr_2}{(1- \\gr_1 -\\gr_4) (1-\\gr_3) ( \\gr_1+\\gr_4 ) }\n\\big( y+ (1 -\\gr_4 ){t}{} \\big)^\\gga\n\\big( y + \\tilde{t}{} \\big)_{\\gga}\n{t}{}_{\\ga}\\big(y + \\Pz{} \\big)^\\ga\n\\\\ \\nn &&\n +\\ff{ \\gr_2 }{ (1-\\gr_3) ( \\gr_1+\\gr_4 ) }\n ( p_1{} +p_2{} )^\\gga\n\\big( y + \\tilde{t}{} \\big)_{\\gga}\n {t}{}_{\\ga}\\big(y + \\Pz{} \\big)^\\ga\n \\\\ \\nn &&\n-\\ff{ \\gr_3 }{(1- \\gr_1 -\\gr_4) (1-\\gr_3) }\n {t}{}^{\\ga} y_{\\ga}\n \\big(y + \\Pz{} \\big)^\\gs\n (y+\\tilde{t}{})_{\\gs} \\\\ \\nn &&\n + \\ff{ 1}{ (1-\\gr_3) } \\big(y + \\Pz{} \\big)^\\gs\n (y+\\tilde{t}{})_{\\gs} { (p_1 +p_2 )^{\\ga}{t}{}_{\\ga}}\n\\Big\\}\\Ee\\go CCC\\equiv J_7\n \\eee\nsince, using the Schouten identity, one can see that\nthe pre-exponential of the integrand on the \\rhs of \\eq{Endofgx} equals zero.\n\\addtocounter{appendix}{1}\n\\renewcommand{\\theequation}{\\Alph{appendix}.\\arabic{equation}}\n\\addtocounter{section}{1} \\setcounter{equation}{0}\n \\addcontentsline{toc}{section}{\\,\\,\\,\\,\\,\\,\\,Appendix E: Useful formulas}\n \\renewcommand{\\thesubsection}{\\Alph{appendix}.\\arabic{subsection}}\n\n\n\\section*{Appendix E: Useful formulas}\n\\label{AppG}\n\nFrom \\eq{Egx=}\n one has\n \\bee\\label{EEgx14=}&&\\left(\\frac{\\p}{\\p \\rho_1}-\\frac{\\p}{\\p \\rho_4}\\right) E= i\\Big\\{\n \\gx_1 \\ff{\\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3 )}\\,\\, {t}{}^\\ga y_{\\ga}\n\\\\ \\nn &&+ \\gx_1 \\ff{ \\gr_2 }{(1-\\gr_1-\\gr_4 )(1-\\gr_3)( \\gr_1+\\gr_4 ) }\\big( y + \\Pz{}\\big)^\\ga {t}{}_{\\ga}\n \\\\ \\nn &&\n + \\ff{ 1 }{(1-\\gr_3)} 
(y+p_1{}^{\\ga}+p_2{}^{\\ga}){t}{}_{\\ga}\n \\Big\\}E \\eee\n \\bee\\label{EEgx23}&&\\left(\\frac{\\p}{\\p \\rho_2}-\\frac{\\p}{\\p \\rho_3}\\right) E= i \\Big\\{\n \\gx_1 \\ff{1-\\gr_3-\\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3 )^2}\n \\big( y + \\Pz{}\\big)^\\ga(y+\\tilde{t}{})_{\\ga}\n\\\\ \\nn&& - \\gx_1 \\ff{\\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3 )}\\,(p_3{}^{\\ga}+p_2{}^{\\ga}) (y+\\tilde{t}{})_{\\ga}\n\n - \\ff{ \\gr_1+\\gr_4}{ (1-\\gr_3 )^2}\\,\\,( (p{}_1 +p_2) )^\\ga y_{\\ga}\n\\\\ \\nn && - \\ff{ \\gr_4 }{ (1-\\gr_3 )^2}\\,\\,{t}{} ^\\ga y_{\\ga}\n\n - \\ff{ \\gr_1 }{(1-\\gr_3)^2} (p_1 +p_2 )^{\\ga}{t}{}_{\\ga}\n \\Big\\}\n E \\,, \\eee\n\\bee\\label{Egx=21e}&&\\left(\\frac{\\p}{\\p \\rho_2}-\\frac{\\p}{\\p \\rho_1}\\right) E = i \\Big\\{\n \\gx_1 \\ff{ -\\gr_3}{(1-\\gr_1-\\gr_4 )^2 }\n ( p_3+p_2 )^\\ga (y+\\tilde{t}{})_{\\ga}\n\\\\ \\nn&&\n + \\gx_1 \\ff{\\gr_3\\gr_4}{(1-\\gr_1-\\gr_4 ) (1-\\gr_3 )(\\gr_1+\\gr_4)}\\,\\, {t}{} ^\\ga y_{\\ga}\n + \\gx_1 \\ff{ 1}{ (1-\\gr_3)}(p_1 +p_2 )^{\\ga}(y+\\tilde{t}{})_{\\ga}\n\\\\ \\nn &&- \\gx_1 \\ff{ \\gr_2}{(1-\\gr_1-\\gr_4 )(1-\\gr_3)} \\ff{\\gr_4}{(\\gr_1+\\gr_4)^2}\n\\big( y + \\Pz{}\\big)^\\ga{t}{}_{\\ga}\n \\\\ \\nn &&\n\\\\ \\nn&&\n- \\ff{ 1}{ (1-\\gr_3 )}\\,(p_1 +p_2 )^{\\ga} y_{\\ga}\n- \\ff{ 1 }{(1-\\gr_3)} (p_1 +p_2 )^{\\ga}{t}{}_{\\ga}\n \\Big\\}\n E \\, . \\eee\n\n \\addcontentsline{toc}{section}{\\,\\,\\,\\,\\,\\,\\,References}\n\n\n\\section*{}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:introduction}\n\nNeurons are morphological structures: they have dendritic branches on which most inputs are received and an axonal tree through which the output signal is communicated with other neurons. In this light, neuronal computations can be seen as the integration of synaptic inputs along the dendrites up to the axon initial segment where an output signal is generated. 
Hence a key role in neuronal computation is taken by the exact shape and composition of dendrites. Indeed, it is known that the neuronal response is shaped by the precise location and activation pattern of synapses \\citep{Branco2010, Torben-Nielsen2010, Gidon2012} and by the expression and distribution of (voltage-gated) ion-channels \\citep{Migliore2002, Magee1999, Torben-Nielsen2010,Spruston2008}. \n\nDespite this proven importance, dendritic processing is usually ignored in network simulations \\citep{Gewaltig2007,Brette2007,Richert2011}, but see \\citep{Markram2006} for an exception. One reason is the computational cost associated with multi-compartmental simulations: a cost that, at the level of the model neuron, scales with the morphological complexity of the dendritic arborization. A related conceptual cost is associated with building detailed single-neuron models \\citep{Hay2013}, with their spatial distribution of conductances across the membrane and localized non-linearities. The key is to capture the somatic voltage in response to synaptic inputs on the dendrites. Is there an alternative to multi-compartmental models to simulate the effects of dendrites on synaptic potentials, without large computational overhead?\n\nTo this end, two strategies are commonly adopted in the literature. The first consists of performing a morphological reduction by reducing the number of dendritic segments while attempting to capture crucial characteristics of dendritic processing \\citep{Traub2005,Kellems2010}. A second strategy is to by-pass multiple (dendritic) compartments altogether by using point-neurons and fitting voltage-kernels that match the dendritic signal transformation shaping the voltage waveform caused by a synaptic input at the soma \\citep{Jolivet2004, Gutig2006}. The fitted \\citep{Jolivet2004} or learned \\citep{Gutig2006} kernel is then simply added to the somatic membrane potential. 
While this strategy is computationally efficient and captures some temporal effects of dendritic processing, it is a rather crude approximation of dendritic integration: elementary features of dendritic processing, such as local interactions between inputs, are impossible to achieve. \n\nIn this work we present a true alternative based on applying the Green's function formalism to cable theory. This way we can exactly compute the effect of synaptic inputs located in the dendrites on the somatic membrane potential \\citep{Koch1985}. By design we thus compute the linear transfer function between the site of the synaptic inputs and the soma. The main advantage of this approach is that the effect of synaptic inputs along a dendrite on the somatic membrane potential can be calculated analytically. Consequently, simulations in our model are independent of the morphological complexity and a full reduction to a point-neuron can be used, as the entire effect of the morphology is captured in a transfer function. This property sets our approach apart from existing methods to model dendrites implicitly: the approach based on the equivalent cable works only with geometrically tightly constrained morphologies \\citep{Ohme1998}, while, as in \\citep{VanPelt1992}, all branch points of a dendritic tree have to be modeled explicitly. Because we capture arbitrary dendritic morphologies by means of transfer functions, our synapse model is able to use dendrite-specific mechanisms of computation, such as delay lines (as in \\citep{Gutig2006}) but also local non-linearities due to membrane saturation. Hence, we can capture fundamental features of dendritic integration by directly deriving the Green's function from dendritic cable theory. \n\nWe implemented our synapse model in the Python programming language as a proof of principle, and validated it by evaluating its correctness and execution times on two tasks. 
First, we show that a morphology-less point-neuron equipped with the proposed synapse model can exploit differential dendritic processing to perform an input-order detection task \\citep{Agmon-Snir1998}. We show that both for passive models and models with active currents in the soma, the agreement with a reference \\textsc{neuron} simulation \\citep{Carnevale2006} is seamless. Second, we show that the proposed neuron model is capable of accurate temporal integration of multiple synaptic inputs, a result for which knowledge of the precise neuronal morphology in relation to the synaptic locations is imperative. To this end, we construct a point-neuron model mimicking the dendritic processing in the dendrites of a Layer 5 pyramidal cell. Again, we demonstrate that the agreement with a reference \\textsc{neuron} simulation is seamless. By providing this example, we demonstrate that our proposed approach is highly suitable for the common scenarios to investigate dendritic processing. In such scenarios, the somatic response to a limited number of synapses located in the dendrites is measured while changing the dendritic properties.\n\n\\section{Synapse model based on the Green's function formalism}\\label{sec:methods}\n\nThe core rationale of this work is the simplification of a passive neuron model by analytically computing the transfer function between synapses and the soma. Solving the cable equation for dendrites is not new, and several ways are documented \\citep{Koch1985, Butz1974, Norman1972}. The application of the cable equation to simplify arbitrarily morphologically extended multi-compartmental models to a point-neuron is, however, new.\n\nBy solving the cable equation, we thus substitute the effects of an electrical waveform traveling down a dendrite by a so-called pulse-response kernel. 
Conceptually, we think of the neural response to a spike input as being characterized by three functions: the conductance profile of the synapse, the pulse-response kernel at the synapse and the pulse-response transfer kernel between the input location and the soma to mimic the actual dendritic propagation. The first function is chosen by the modeller: common examples are the alpha function, the double exponential or the single decaying exponential \\citep{Rotter1999,Giugliano2000,Carnevale2006}. The second function captures the decay of the voltage at the synapse given a pulse input, and thus allows for a computation of the synaptic driving force, whereas the third function allows for the computation of the response at the soma, given the synaptic profile, driving force, and dendritic profile.\n\nMore formally, we write $g(t)$ for the synaptic conductance profile, $G_{\\text{syn}}(t)$ for the pulse response kernel at the synapse and $G_{\\text{som}}(t)$ for the pulse response kernel between synapse and soma. Then, given a presynaptic spiketrain $\\{ t_s \\}$ and a synaptic reversal potential $E_r$, the somatic response of the neuron is characterized by:\n\\begin{eqnarray}\\label{eq:intro}\n\\begin{aligned}\ng(t) & = F(\\mathbf{a}(t)), \\hspace{4mm} \\frac{\\mathrm{d}\\mathbf{a}}{\\mathrm{d}t}(t) = H(\\mathbf{a}(t),\\{t_s\\})\\\\\nV_\\text{syn}(t) & = \\int_{-\\infty}^{t} \\mathrm{d}k \\ G_{\\text{syn}}(t-k) \\ g(k) \\ (V_\\text{syn}(k)-E_r) \\\\\nV_\\text{som}(t) & = \\int_{-\\infty}^{t} \\mathrm{d}k \\ G_{\\text{som}}(t-k) \\ g(k) \\ (V_\\text{syn}(k)-E_r),\n\\end{aligned}\n\\end{eqnarray}\nwhere $E_r$ is the synaptic reversal potential, $F(.)$ and $H(.)$ depend on the type of synapse chosen and $\\mathbf{a}$ denotes the set of synaptic parameters required to generate the conductance profile $g(t)$. Our task is to compute $G_{\\text{syn}}(t)$ and $G_{\\text{som}}(t)$. 
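As a concrete illustration, the system \\eqref{eq:intro} can be integrated with a simple explicit time-stepping rule once the kernels are known. The sketch below is illustrative only and not the implementation described later: the kernel shapes, the conductance profile and all parameter values are assumptions, and the sign of the kernels is chosen such that an excitatory input depolarizes the membrane.

```python
import numpy as np

def simulate_somatic_response(g, G_syn, G_som, E_r, dt):
    """Explicit time-stepping of the two Volterra equations.

    g     : sampled conductance profile g(t_k)         (length T)
    G_syn : pulse-response kernel at the synapse       (length T)
    G_som : transfer kernel from synapse to soma       (length T)
    All signals share the grid t_k = k*dt; V is the deviation from rest.
    """
    T = len(g)
    V_syn, V_som = np.zeros(T), np.zeros(T)
    for n in range(1, T):
        # driving-force history g(k)*(V_syn(k) - E_r) up to step n-1;
        # using only past samples resolves the implicit dependence of
        # V_syn on itself (accurate for small dt)
        drive = g[:n] * (V_syn[:n] - E_r)
        V_syn[n] = dt * np.dot(G_syn[1:n + 1][::-1], drive)
        V_som[n] = dt * np.dot(G_som[1:n + 1][::-1], drive)
    return V_syn, V_som
```

With an alpha-shaped conductance and exponentially decaying kernels this reproduces the qualitative shape of a postsynaptic potential at the soma.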
We will show that these functions follow from the Green's function formalism.\n\n\\subsection{The neuron model in time and frequency domains}\n\n\\subsubsection{Time domain}\n\nHere, we assume a morphological neuron model with passive dendritic segments. Each segment, labeled $d = 1,\\hdots,N$, is modeled as a passive cylinder of constant radius $a_d$ and length $L_d$. It is assumed that all segments have an equal membrane conductance $g_m$, reversal potential $E$, intracellular axial resistance $r_a$ and membrane capacitance $c_m$. By convention we label the locations along a dendrite by $x$, with $x=0$ and $x=L_d$ denoting the proximal and distal end of the dendrite, respectively. Then, in accordance with cable theory, the voltage in a segment $d$ follows from solving the partial differential equation \\citep{Tuckwell1988Introduction}:\n\\begin{eqnarray}\\label{eq:cable}\n\\begin{aligned}\n\\frac{\\pi a_d^2}{r_a}\\frac{\\mathrm{\\partial}^2 V_d}{\\mathrm{\\partial}x^2}(x,t) \\ - \\ 2\\pi a_d g_m V_d(x,t) \\ - 2\\pi a_d c_m \\frac{\\mathrm{\\partial} V_d}{\\mathrm{\\partial}t}(x,t) \\ = \\ I_d(x,t),\n\\end{aligned}\n\\end{eqnarray}\nwhere $I_d(x,t)$ represents the input current in branch $d$, at time $t$ and at location $x$. We assume that the dendritic segments are linked together by boundary conditions that follow from the requirement that the membrane potential is continuous and the longitudinal currents (denoted by $I_{ld}$) conserved:\n\\begin{eqnarray}\n\\begin{aligned}\nV_{d}(L_{d}, t) & = V_{i}(0,t), \\hspace{4mm} i \\in \\mathcal{C}(d) \\\\\nI_{ld}(L_{d}, t) & = \\sum_{i \\in \\mathcal{C}(d)} I_{li}(0,t)\n\\end{aligned}\n\\end{eqnarray}\nwhere $\\mathcal{C}(d)$ denotes the set of all child segments of segment $d$. The longitudinal currents are given by:\n\\begin{equation}\nI_{ld}(x,t) = \\frac{\\pi a_d^2}{r_a}\\frac{\\mathrm{\\partial} V_d}{\\mathrm{\\partial}x}(x,t). 
\n\\end{equation}\nDifferent dendritic branches originating at the soma are joined together by the lumped-soma boundary condition, which implies for the somatic voltage $V_{\\text{som}}(t)$:\n\\begin{equation} \\label{eq:lsb1}\nV_{\\text{som}}(t) = V_d(0,t) \\hspace{3mm} \\forall d \\in \\mathcal{C}(\\text{soma})\n\\end{equation}\nand\n\\begin{equation}\\label{eq:lsb2}\n\\sum_{d \\in \\mathcal{C}(\\text{soma})} I_{ld}(0,t) = I_{\\text{som}}(V_{\\text{som}}(t)) + C_{\\text{som}} \\frac{\\mathrm{\\partial} V_{\\text{som}}}{\\mathrm{\\partial}t}(t),\n\\end{equation}\nwith $I_{\\text{som}}$ denoting the transmembrane currents in the soma, which can be either passive or active. Note that, for all further calculations, we will treat $I_{\\text{som}}(V_{\\text{som}}(t))$ as an external input current, and apply the Green's function formalism only on a soma with a capacitive current.\nFor segments that have no children (i.e., the leaves of the tree structure), the sealed-end boundary condition is used at the distal end:\n\\begin{equation}\nI_{ld}(L_d,t) = 0 \\hspace{3mm} \\forall d.\n\\end{equation}\n\n\\subsubsection{Frequency domain}\n\nFourier-transforming this system of equations allows for the time-derivatives to be written as complex multiplications, for which analytic \\citep{Butz1974} or semi-analytic \\citep{Koch1985} solutions can be computed. Doing so transforms equation \\eqref{eq:cable} into:\n\\begin{equation}\\label{eq:freqcable}\n\\frac{\\mathrm{\\partial}^2 V_d}{\\mathrm{\\partial}x^2}(x,\\omega) - \\gamma _d(\\omega)^2 V_d(x,\\omega) = I_d(x,\\omega)\n\\end{equation}\nwhere $\\omega$ is now a complex number and $\\gamma_d(\\omega)$ is the frequency-dependent space constant, given by\n\\begin{equation}\n\\gamma_d(\\omega) = \\sqrt{\\frac{z_{ad}}{z_{md}(\\omega)}}\n\\end{equation}\nwith $z_{ad} = \\frac{r_a}{\\pi a_d^2}$ the dendritic axial impedance and $z_{md} = \\frac{1}{2 \\pi a_d (i c_m \\omega + g_m)}$ the membrane impedance in branch $d$. 
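These closed-form impedances are cheap to evaluate over a whole frequency grid at once. A minimal sketch follows; the parameter values are illustrative and assumed to be expressed in mutually consistent units (unlike the mixed laboratory units of Table~\\ref{table:parameters}):

```python
import numpy as np

def gamma_d(omega, a_d, r_a, g_m, c_m):
    """Frequency-dependent space constant gamma_d(omega) of one branch.

    a_d : branch radius, r_a : axial resistivity,
    g_m : membrane conductance, c_m : membrane capacitance
    (all in mutually consistent units; omega may be an array).
    """
    z_a = r_a / (np.pi * a_d**2)                                 # axial impedance per length
    z_m = 1.0 / (2.0 * np.pi * a_d * (1j * c_m * omega + g_m))   # membrane impedance
    return np.sqrt(z_a / z_m)

# at omega = 0 this reduces to the classical DC space constant:
# gamma_d(0) = sqrt(2 * r_a * g_m / a_d)
```

At high frequencies the magnitude of $\\gamma_d$ grows, reflecting the stronger attenuation of fast voltage transients along the cable.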
The lumped soma boundary conditions \\eqref{eq:lsb1} and \\eqref{eq:lsb2} become\n\\begin{equation} \nV_{\\text{som}}(\\omega) = V_d(0,\\omega) \\hspace{3mm} \\forall d \\in \\mathcal{C}(\\text{soma})\n\\end{equation}\nand\n\\begin{equation}\n\\sum_{d \\in \\mathcal{C}(\\text{soma})} I_{ld}(0,\\omega) = \\sum_{d \\in \\mathcal{C}(\\text{soma})} \\frac{1}{z_{ad}}\\frac{\\mathrm{\\partial} V_d}{\\mathrm{\\partial}x}(0,\\omega) = \\frac{1}{Z_{\\text{som}}(\\omega)}V_{\\text{som}}(\\omega),\n\\end{equation}\nwhere\n\\begin{equation}\nZ_{\\text{som}}(\\omega) = \\frac{1}{i C_{\\text{som}} \\omega}\n\\end{equation}\nis the somatic impedance. The sealed-end boundary conditions are:\n\\begin{equation}\\label{eq:bcfreq}\nI_{ld}(L_d,\\omega) = \\frac{1}{Z_L} V_d(L_d,\\omega) = 0\n\\end{equation} \nwith sealed-end impedance $Z_L = \\infty$.\n\n\\subsection{Morphological simplification by applying Green's function}\nHere we will describe the Green's function formalism in the time domain to explain the main principles. In the next paragraph we will then turn back to the frequency domain to compute the actual solution. For the argument we consider a general current input $I_d(x,t)$. 
In the case of dynamic synapses, such a current input is obtained from the synaptic conductances by the Ohmic relation:\n\\begin{equation}\\label{eq:current}\nI_{d}(x,t) = g(t)(E_r-V_d(x,t))\n\\end{equation}\nor, in the case of active channels, from the ion channel dynamics.\nThe cable equation \\eqref{eq:cable} can be written formally as: \n\\begin{equation}\\label{eq:operator}\n\\hat{L}_d V_d(x,t) = I_d(x,t)\n\\end{equation}\nwhere $\\hat{L}_d = \\frac{\\pi a_d^2}{r_a}\\frac{\\mathrm{\\partial}^2 }{\\mathrm{\\partial}x^2} - 2\\pi a_d g_m - 2\\pi a_d c_m \\frac{\\mathrm{\\partial}}{\\mathrm{\\partial}t}$ is a linear operator.\\footnote{Note that formally, the operator $\\hat{L}_d$ depends on $x$ explicitly in a discontinuous way, since the radius changes across segment boundaries.} The solution for $x > x_i$ follows from interchanging $x$ and $x_i$ in \\eqref{eq:gf}.\nTo compute the effect of a synaptic input on the driving force in other branches (denoted by $d'$), we first use equation \\eqref{eq:gf} (corresponding to rule III of \\citep{Koch1985}) to obtain the pulse-voltage response in the frequency domain at the soma. 
Then, to compute the pulse voltage response in the branch where the driving force needs to be known, we use the following identity:\n\\begin{equation}\nG_{dd'}(x,x',\\omega) = \\frac{G_{d'd'}(x,0,\\omega) G_{dd}(0,x',\\omega)}{G(\\text{soma}, \\text{soma}, \\omega)},\n\\end{equation}\ncorresponding to rule IV of \\citep{Koch1985}.\n\n\n\\subsubsection{Transforming the Green's function to the time domain}\n\nGiven the conventions we assumed when transforming the original equation, the inverse Fourier transform has the following form:\n\\begin{equation}\\label{eq:transint}\nG(x,x_i,t) = \\frac{1}{2\\pi} \\int_{-\\infty}^{\\infty}\\mathrm{d}\\omega \\ G(x,x_i,\\omega) \\ e^{i \\omega t}.\n\\end{equation}\nIf the Green's function in the time-domain rises continuously from zero, which is generally the case if $x \\neq x_i$, it can be approximated with negligible error by the standard technique for evaluating Fourier integrals with the fast-Fourier transform (FFT) algorithm \\citep{Press2007Numerical}: we choose a sufficiently large interval $[-\\omega_m,\\omega_m]$ (where $G(x,x_i,\\pm \\omega_m)$ is practically 0), divide it into $M=2^n$ pieces of width $\\Delta \\omega = \\frac{2\\omega_m}{M}$ and approximate the integral by a discrete sum:\n\\begin{equation}\\label{eq:tranform}\nG(x,x_i,t) = \\frac{\\Delta \\omega}{2\\pi} \\sum_{j=0}^{M-1}G(x,x_i,\\omega_j)e^{i\\omega_j t},\n\\end{equation}\nwhere $\\omega_j = -\\omega_m + j \\Delta \\omega$. The choice of discretization step then fixes the timestep $\\Delta t = \\frac{2\\pi}{M \\Delta \\omega}$. 
Upon evaluating the Green's function in the time-domain at $t_l = l \\Delta t, \\ l=0,\\hdots,\\frac{M}{2}-1$, expression \\eqref{eq:tranform} can be written in a form that is suitable for the fast Fourier transform algorithm:\n\\begin{equation}\nG(x,x_i,t_l) = \\frac{\\Delta \\omega}{2\\pi} e^{-i \\omega_m t_l} \\sum_{j=0}^{M-1}G(x,x_i,\\omega_j)e^{i\\frac{2\\pi}{M}jl},\n\\end{equation}\nand hence:\n\\begin{equation}\nG(x,x_i,t_l) = \\frac{M \\Delta \\omega}{2\\pi} e^{-i \\omega_m t_l} \\text{FFT}(G(x,x_i,\\omega_j))_l\n\\end{equation}\nwhere the discrete Fourier transform is taken with the $\\frac{1}{M}$ normalization convention. The situation is different if we consider the Green's function at the input location ($x = x_i$). There, the function rises discontinuously from zero at $t=0$, which causes the spectrum in the frequency-domain to have non-vanishing values at arbitrarily high frequencies. Hence, the effect of integrating over a finite interval $[-\\omega_m,\\omega_m]$ will be non-negligible. Formally, this truncation can be interpreted as multiplying the original function with a window function $H(\\omega)$ that is 1 in the interval $[-\\omega_m,\\omega_m]$ and 0 elsewhere, resulting in a time-domain function that is a convolution of the real function and the transform of the window:\n\\begin{equation}\n\\begin{aligned}\n& \\tilde{G}(\\omega) = G(x_i,x_i,\\omega)H(\\omega) \\hspace{4mm} \\\\\n& \\hspace{4mm} \\Longrightarrow \\hspace{4mm} \\tilde{G}(t) = \\int_{-\\infty}^{\\infty} \\mathrm{d}\\tau \\ G(x_i,x_i,\\tau)H(t-\\tau).\n\\end{aligned}\n\\end{equation}\nFor the rectangular window, the transform $H(t)$ has significant amplitude components for $t\\neq 0$, an unwanted property that will cause the Green's function to have spurious oscillations, a phenomenon that is known as spectral leakage \\citep{Blackman1958}. This problem can be solved by choosing a different window function, which is 1 at the center of the spectrum and drops continuously to zero at $-\\omega_m$ and $\\omega_m$. 
For this work we found that the Hanning window,\n\\begin{equation}\nH(\\omega) = \\frac{1}{2}\\left(1+\\cos \\left( \\frac{\\pi \\omega}{\\omega_m} \\right) \\right),\n\\end{equation} \ngave accurate results for $t\\neq 0$. For $t=0$, the amplitude is slightly underestimated as a consequence of the truncation of the spectrum, whereas for $t$ very close to, but larger than $0$, the amplitude is slightly overestimated. However, these errors only cause discrepancy in a very small window ($<\\unit[0.1]{ms}$) and thus have negligible effect on the neural dynamics.\n\n\\section{Model implementation \\& Validation}\n\n\\subsection{Synapse model implementation}\n\nWe implemented a prototype of the synapse model discussed above in two stages. First, after specifying the morphology and the synapse locations, the Green's function is evaluated at the locations needed to solve the system, thus yielding a set of pulse response kernels. As modern high-level languages can handle vectorization very efficiently, these functions can be evaluated for a large set of frequencies $\\omega$ quickly, thus allowing for great accuracy. Second, we implemented a model neuron that uses these Green's functions, sampled at the desired temporal accuracy. 
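As a self-contained check of this kernel-evaluation step, the sketch below inverts an analytic frequency-domain kernel with the windowed-FFT recipe above. The kernel $G(\\omega) = 1/(1 + i\\omega)$ is an illustrative stand-in whose exact inverse transform, $e^{-t}$ for $t \\geq 0$, is known; it is not one of the dendritic kernels derived here:

```python
import numpy as np

# illustrative analytic pair: G(w) = 1/(1 + i*w)  <->  G(t) = exp(-t), t >= 0
M, w_m = 2**14, 200.0
dw = 2.0 * w_m / M                        # frequency step Delta omega
w = -w_m + dw * np.arange(M)              # w_j = -w_m + j * Delta omega

Gw = 1.0 / (1.0 + 1j * w)
Gw *= 0.5 * (1.0 + np.cos(np.pi * w / w_m))   # Hanning window against spectral leakage

dt = 2.0 * np.pi / (M * dw)               # timestep fixed by the discretization
t = dt * np.arange(M // 2)
# prefactor M*dw/(2*pi) together with numpy's 1/M-normalized inverse FFT
Gt = (M * dw / (2.0 * np.pi)) * np.exp(-1j * w_m * t) * np.fft.ifft(Gw)[: M // 2]
Gt = Gt.real
```

Away from $t=0$ the recovered kernel matches $e^{-t}$ to well below a percent; near $t=0$ the windowing smooths the jump, consistent with the small-window errors discussed above.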
Then, given a set of synaptic parameters, the somatic membrane potential is computed by integrating the Volterra-equations \\eqref{eq:greenssynapse} and \\eqref{eq:greenssoma} \\citep{Press2007Numerical}.\n\n\n\\begin{table}[htb]\n\\centering\n\\begin{tabular}{|l|l|l|l|l|}\n\\hline \n\\multicolumn{5}{|c|}{Physiology} \\\\\n\\hline \n$C_m$ & \\multicolumn{4}{c|}{\\unit[1]{$\\mu F\/cm^2$}} \\\\\n$g_m$ & \\multicolumn{4}{c|}{\\unit[0.02]{$mS\/cm^2$}} \\\\\n$r_a$ & \\multicolumn{4}{c|}{\\unit[100]{$\\Omega cm$}} \\\\\n$E_l$ & \\multicolumn{4}{c|}{\\unit[-65]{mV}} \\\\\n\\hline\n\\multicolumn{5}{|c|}{Morphology} \\\\\n\\hline\nSoma length & \\multicolumn{4}{c|}{\\unit[25]{$\\mu m$}} \\\\\nSoma diam & \\multicolumn{4}{c|}{\\unit[25]{$\\mu m$}} \\\\\n\\hline\n& \\multicolumn{2}{|c|}{Fig~\\ref{fig:input_order}B} & \\multicolumn{2}{c|}{Fig~\\ref{fig:input_order}C} \\\\\n\\hline\n& dend 1 & dend 2 & dend 1 & dend 2 \\\\ \\cline{2-5}\n$L_d$ & \\unit[950]{$\\mu$m} & \\unit[450]{$\\mu$m} & \\unit[900]{$\\mu$m} & \\unit[500]{$\\mu$m} \\\\\n$a_d$ & \\unit[0.25]{$\\mu$m} & \\unit[0.5]{$\\mu$m} & \\unit[0.5]{$\\mu$m} & \\unit[1]{$\\mu$m}\\\\\n\\hline\n\\multicolumn{5}{|c|}{Synapses} \\\\\n\\hline\n& syn 1 & syn 2 & syn 1 & syn 2 \\\\ \\cline{2-5}\n$E_r$ & \\unit[0]{mV} & \\unit[0]{mV} & \\unit[0]{mV} & \\unit[0]{mV} \\\\\n$\\tau$ & \\unit[1.5]{ms} & \\unit[1.5]{ms} & \\unit[1.5]{ms} & \\unit[1.5]{ms} \\\\\n$\\overline{g}$ & \\unit[5]{nS} & \\unit[2]{nS} & \\unit[20]{nS} & \\unit[9]{nS} \\\\\n\\hline\n\\end{tabular}\n\\caption{Model neuron parameters. 
The multi-compartmental model explicitly simulates the dendritic structure, while the point-neuron is equipped with our model synapse based on Green's functions and implicitly simulates the dendritic structure.}\n\\label{table:parameters}\n\\end{table}\n\n\n\\subsection{Multi-compartmental and point-neuron model}\n\nTo compare the performance of a multi-compartmental model and a point-neuron model using the proposed synapse model, we created two comparable neuron models. In the multi-compartmental model, the dendrites are modeled explicitly using \\textsc{neuron} \\citep{Carnevale2006}, while in the point-neuron model the dendrites are omitted and dendritic processing is carried out implicitly by the new synapse model. The properties of both model neurons are listed in Table~\\ref{table:parameters}. Evidently, the implicit model has no real morphology and the parameters related to the geometry are used to instantiate the synapse model.\n\n\\subsection{Input-order detection with differential dendritic filtering}\\label{sec:iodetect}\n\n\\begin{figure*}[htb!]\n \\centering\n \\includegraphics[width=0.9\\textwidth]{fig1_.pdf} \n \\caption{Comparison between a reference multi-compartmental model and a point-neuron model equipped with the new synapse model implicitly simulating dendritic processing. A: Both model neurons performed the input-order detection task: The neuron has to respond as strongly as possible to the temporal activation 1 $\\rightarrow$ 2 and as weakly as possible to the reverse temporal order. B: The input-order detection task for a completely passive neuron. Left and right panels contain the somatic membrane potential when the synapses were activated in the preferred ($1 \\rightarrow 2$) and null ($2 \\rightarrow 1$) temporal order, respectively. Colored lines represent the voltage in the point-neuron model and the black dashed line depicts the \\textsc{neuron} trace for comparison. 
As a reference the waveform when only the first synapse is activated is also shown (left: 1 and right: 2). Vertical dashed-dotted lines denote the spikes arriving at synapse 1 and 2 (left) or 2 and 1 (right). (C) Same as (B), but now the soma contained active HH-currents. \n }\n \\label{fig:input_order}\n\\end{figure*}\n\nTo show the applicability of the new type of model synapse, we use it to perform input-order detection: Suppose a neuron with two dendrites and one synapse (or one group of synapses) on each dendrite (shown in figure~\\ref{fig:input_order}A). In the input-order task, the neuron has to generate a strong response to the temporal activation of the synapses $1 \\rightarrow 2$, while generating a weak response to the reversed temporal activation $2 \\rightarrow 1$. This behavior relies on differential dendritic filtering and can thus not be achieved in a straightforward way by a single-compartmental model. \n\nWe compared the implicit point-neuron model equipped with the new synapse model to the explicit multi-compartmental model in the input-order detection task. The results are illustrated in figure~\\ref{fig:input_order}B. Somatic membrane voltages are shown for the point-neuron model and the multi-compartmental model, after synapse activation in the preferred (left) and null temporal order (right). Because the traces are nearly identical, this result validates our approach and the implementation of the synapse model based on the Green's function solution to cable theory. \n\n\\subsection{Voltage-gated active currents}\n\nThe most prominent non-linear neuronal response is the action potential. Since it is possible in our synapse model to include any non-linear conductance mechanism, as long as it is spatially restricted to a point-like location, we built a prototype containing the $\\text{Na}^+$ and $\\text{K}^+$ conductances required to generate action potentials. 
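As an illustration of such a point-like spike-generating mechanism, the sketch below integrates standard Hodgkin--Huxley $\\text{Na}^+$ and $\\text{K}^+$ currents with textbook rate functions (original shifted-voltage convention, rest at $\\unit[0]{mV}$); these are not necessarily the exact conductance models or parameters used in our prototype:

```python
import numpy as np

# standard HH rate functions (V in mV, shifted so that rest = 0; textbook values)
def a_m(V):
    x = (25.0 - V) / 10.0
    return x / np.expm1(x) if x != 0.0 else 1.0
def b_m(V): return 4.0 * np.exp(-V / 18.0)
def a_h(V): return 0.07 * np.exp(-V / 20.0)
def b_h(V): return 1.0 / (np.exp((30.0 - V) / 10.0) + 1.0)
def a_n(V):
    x = (10.0 - V) / 10.0
    return 0.1 * x / np.expm1(x) if x != 0.0 else 0.1
def b_n(V): return 0.125 * np.exp(-V / 80.0)

def hh_soma(I_inj, dt=0.01):
    """Forward-Euler integration of a single HH compartment.
    I_inj: injected current (uA/cm^2) per timestep; returns the V trace (mV)."""
    C, gNa, gK, gL, ENa, EK, EL = 1.0, 120.0, 36.0, 0.3, 115.0, -12.0, 10.6
    V = 0.0
    m = a_m(V) / (a_m(V) + b_m(V))        # gates start at their steady state
    h = a_h(V) / (a_h(V) + b_h(V))
    n = a_n(V) / (a_n(V) + b_n(V))
    trace = np.empty(len(I_inj))
    for k, I in enumerate(I_inj):
        I_ion = gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK) + gL * (V - EL)
        V += dt * (I - I_ion) / C
        m += dt * (a_m(V) * (1.0 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1.0 - h) - b_h(V) * h)
        n += dt * (a_n(V) * (1.0 - n) - b_n(V) * n)
        trace[k] = V
    return trace
```

Injecting a suprathreshold step current elicits the hallmark all-or-none spike; within the synapse model such a current simply enters the lumped-soma balance \\eqref{eq:lsb2} as $I_{\\text{som}}$.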
By computing the kernels needed to run the upgraded point-neuron model in the input-order detection task and by adjusting the synaptic weights, we obtained a point-neuron model able to generate a spike in response to the preferred activation pattern, while remaining silent in response to the reversed temporal activation. Note that the active somatic currents shorten the timescale of the neuron's response compared to the passive model. The time axis was scaled accordingly. In order to validate these outcomes, we again built an equivalent multi-compartmental model in \\textsc{neuron} in which we inserted the same $\\text{Na}^+$ and $\\text{K}^+$ conductances into the soma. The multi-compartmental model generated identical results, as shown in Figure~\\ref{fig:input_order}C. Thus, in principle we can include conductance descriptions to obtain hallmark neuronal non-linearities.\n\n\\subsection{Multiple synapse interactions}\n\n\\begin{figure*}[htb!]\n \\centering\n \\includegraphics[width=0.9\\textwidth]{fig2_.pdf}\n \\caption{Comparison between the ``implicit'' (red lines) and ``explicit'' (black lines) model neurons of a pyramidal cell stimulated by Poisson spiketrains. A: The neuron morphology together with the synapse locations. B: The membrane potential traces at the soma, for the input locations shown in panel A (red dots). C: Comparison of the runtime versus the number of input locations. For few input locations, our prototype Python code outperforms the \\textsc{neuron} code.}\n \\label{fig:spiketrain}\n\\end{figure*}\n\nWe then checked the correctness of the integrative properties of our implicit point-neuron model by stimulating it with realistic spiketrains at multiple synapses. To that end we added five synapses to a model of a Layer 5 pyramidal neuron equipped with an experimentally reconstructed morphology. The morphology was retrieved from the NeuroMorpho.org repository \\citep{Ascoli2007} and originally published in \\citep{Wang2002}. 
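A stimulation protocol of this kind can be sketched as follows; the exponential-ISI generator and the single-exponential conductance shape are illustrative assumptions (the decay time matches the $\\tau$ of Table~\\ref{table:parameters}):

```python
import numpy as np

def poisson_train(rate_hz, t_max_ms, rng):
    """Homogeneous Poisson spike times (ms) from exponential inter-spike intervals."""
    n_draw = int(5 * rate_hz * t_max_ms / 1000.0) + 10   # generous oversampling
    spikes = np.cumsum(rng.exponential(1000.0 / rate_hz, size=n_draw))
    return spikes[spikes < t_max_ms]

def conductance(spikes, t, g_max=1.0, tau=1.5):
    """Summed single-exponential conductance profile on the time grid t (ms)."""
    g = np.zeros_like(t)
    for s in spikes:
        m = t >= s
        g[m] += g_max * np.exp(-(t[m] - s) / tau)
    return g
```

One such train per synapse yields the conductance profiles that drive the five Volterra equations; the somatic trace then follows from the corresponding transfer kernels.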
We stimulated each synapse with Poisson spike trains of rate \\unit[10]{Hz}. The result is shown in Figure~\\ref{fig:spiketrain}. Again, we compared the implicit model's membrane potential traces to the traces obtained from a multi-compartmental model. The agreement is excellent, as can be seen in Figure~\\ref{fig:spiketrain}B, which also validates our approach when processing inputs from multiple, interacting synapses. \n\n\\subsection{Runtime}\nWe established that the ``implicit'' model neuron equipped with our new synapse model generates voltage traces near-identical to those of a reference multi-compartmental model. Next we compared the runtime of our implementation to the gold standard in multi-compartmental modeling, the \\textsc{neuron} software \\citep{Carnevale2006}. To this end we simulated a detailed multi-compartmental model (Figure~\\ref{fig:spiketrain}) in \\textsc{neuron} as well as with our approach, for increasing numbers of input locations. For each of those numbers we ran three simulations of 1 second of simulated time at an integration step of 0.1 ms (10 kHz). Because the execution time of our approach is independent of morphological complexity and instead scales with the number of input locations, our model is expected to be much faster when the number of input locations is low. As shown in Figure~\\ref{fig:spiketrain}C, for two input locations our approach runs 20 times faster than \\textsc{neuron}, while at 13 input locations the execution times are equal. Keeping in mind \\textit{i}) that our implementation is done in Python, and \\textit{ii}) that synapses can often be grouped together \\citep{Pissadaki2010}, we consider this a good outcome.\n\n\\section{Discussion}\n\\label{sec:discussion}\n\nWe presented a bridge between single-compartment and multi-compartmental neuron models by creating a synapse model that analytically computes the dendritic processing between the synaptic input locations and the soma. 
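The computational core of this approach can be sketched in a few lines of Python. This is a minimal illustration, not our actual implementation: the double-exponential kernels, spike probabilities, and weights are all assumptions chosen for demonstration. Each input location contributes one precomputed somatic kernel, and the passive somatic response is a sum of convolutions, so the cost scales with the number of input locations rather than with the number of compartments:

```python
import numpy as np

dt, T = 0.1, 200.0                # time step and duration in ms (0.1 ms as in the text)
t = np.arange(0.0, T, dt)

def kernel(tau_r, tau_d):
    """Illustrative double-exponential somatic kernel for one input site (peak 1)."""
    k = np.exp(-t / tau_d) - np.exp(-t / tau_r)
    return k / k.max()

# One (assumed) kernel per input location; more locations -> more convolutions.
kernels = [kernel(0.5, 8.0), kernel(1.0, 20.0)]

# Sparse random spike trains, one per input location.
rng = np.random.default_rng(0)
trains = [(rng.random(t.size) < 0.01).astype(float) for _ in kernels]

# Somatic deviation from rest: weighted sum over locations of (kernel * spike train).
weights = [1.0, 0.6]              # hypothetical synaptic weights
v_soma = sum(w * np.convolve(s, k)[: t.size]
             for w, s, k in zip(weights, trains, kernels))
```

In the full model the synaptic conductance interacts with the local voltage, which is what leads to the Volterra equations mentioned above; this passive sketch only illustrates why the runtime is independent of morphological detail.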
We then demonstrated that point-neuron models equipped with this new synapse model could flawlessly perform input-order detection, a neuronal computation exploiting differential dendritic processing \\citep{Agmon-Snir1998}. Thus, the new synapse model can be used to introduce computations to point-neurons that previously belonged exclusively to the realm of multi-compartmental neuron models, at a computational cost that does not depend on the morphological complexity. \n\nThe question then arises when it is advisable to use our synapse model over the standard tools. Although a quantitative comparison should be treated with care due to the different implementation languages, we found that our Python prototype was much faster than the optimized, C++-based \\textsc{neuron} simulation when the number of input locations was low. This, together with the fact that the computational cost of our model does not depend on morphological complexity, defines the use case for our model. In scenarios where the number of input locations is low, as is the case in some (invertebrate) cells \\citep{Bullock1965} and in many \\emph{in-silico} scenarios, only few Volterra equations have to be integrated. There, our model offers a considerable computational advantage. This argument also holds when more complex neuron types are considered: while cortical neurons often receive as many as 10000 synapses, many of those can be grouped together. To a good approximation, small dendritic branches act as single units, both in terms of short-term input integration \\citep{Poirazi2003, London2005} and in terms of long-term plasticity-related processes \\citep{Govindarajan2011}. Thus, one could group all synapses in a small branch together and then compute the Green's function for that group of synapses as a whole. 
Such a grouping would drastically reduce the number of Volterra equations to be integrated and hence enhance performance accordingly.\n\n\nWe assumed that the PSP waveform is transformed only in a passive manner on its way to the soma. This might seem like a drastic simplification, as non-linearity is often cited as a hallmark of neuronal computation, not least in the generation of output spikes. How can we evaluate our synapse model in the light of non-linear computations?\n\nNon-linearities in the neural response can arise in two ways. First, at the synapse level a non-linear response can be generated, principally through the recruitment of NMDA receptors during repetitive synaptic activation \\citep{Branco2010}. As we assume the evolution in time of the synaptic conductance to be of a known shape, we could, in principle, also mimic a non-linear synaptic conductance by using a more specific description of the synaptic conductance evolution.\n\nSecond, non-linearities can arise from voltage-gated conductances in neuronal membranes, which are often distributed non-uniformly along the dendrite \\citep{Larkum1999, Angelo2007, Mathews2010}. The distributed nature of voltage-gated conductances leads to the view that dendritic processing is non-linear, and shaped by these conductances and their spatial distributions. Recent work challenges this view, however, showing that in some behavioral regimes dendrites act linearly \\citep{Ulrich2002, Schoen2012}. Since our Green's function approach relies only on the assumption of linearity, it is not intrinsically restricted to passive dendrites. Ion channels distributed along a dendrite can be linearized \\citep{Mauro1970}, and thus yield a quasi-active cable \\citep{Koch1998}. 
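The linearization step can be illustrated with a minimal numerical sketch (our own illustration; the channel model and all parameter values are hypothetical). A voltage-gated current $I(V)=\bar g\,m_\infty(V)(V-E)$, expanded to first order around a holding potential, contributes an effective slope conductance $\mathrm{d}I/\mathrm{d}V$, which is what enters a quasi-active cable description:

```python
import numpy as np

# Hypothetical non-inactivating (persistent) channel: I(V) = gbar * m_inf(V) * (V - E).
gbar, E, vh, k = 2.0, 55.0, -40.0, 6.0
m_inf = lambda V: 1.0 / (1.0 + np.exp(-(V - vh) / k))
I     = lambda V: gbar * m_inf(V) * (V - E)

V0, dV = -65.0, 1e-6
# Slope conductance dI/dV at V0 (central difference); this effective conductance,
# which can be negative for regenerative currents, enters the quasi-active cable.
g_eff = (I(V0 + dV) - I(V0 - dV)) / (2.0 * dV)

# First-order (quasi-active) approximation of the channel current around V0.
I_lin = lambda V: I(V0) + g_eff * (V - V0)
```

Near $V_0$ the linear approximation tracks the full current closely; capturing such linearized channel contributions in the Green's function is what the quasi-active extension amounts to.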
We anticipate that such a linearization procedure can be plugged into our synapse model, so that the linear (but active) properties of the membrane are captured in the Green's function, yielding accurate and efficient simulations of dendrites that reside in their linear regime. Also, in some cases the actual distribution of voltage-gated conductances along the dendrite seems to have little effect: as long as the conductance activates more slowly than the voltage spreads, its precise location is irrelevant \\citep{Angelo2007}. Thus, in those cases where the spread of voltage is faster than the activation of the conductance, dendrites can be treated as passive, as long as the appropriate non-linearity is introduced at one or a few point-like locations. Such point-like non-linearities are easily introduced in our synapse model (see Figure~\\ref{fig:input_order}C, with the soma as point-like location with active currents).\n\nWhen dealing with neuronal non-linearities, the focus is often on supra-linear responses to inputs, despite the fact that sub-linear responses are also intrinsically non-linear. Moreover, it has recently been shown in both theory and experiment that sub-linear responses are used by neurons \\citep{Vervaeke2012,Abrahamsson2012}. Even in passive dendrites, sub-linear responses can be generated when the dendrite locally saturates: due to a high input resistance, the local voltage response to an input can reach the synaptic reversal potential. At that moment the driving force vanishes and further inputs evoke a sub-linear response. This sort of sub-linear response can be generated in conductance-based models with realistic morphologies. 
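The saturation argument can be made concrete with a minimal steady-state calculation (illustrative values only; the leak conductance and reversal potentials below are hypothetical). On a thin branch with high input resistance, the depolarisation evoked by a synaptic conductance grows sub-linearly because the driving force $V-E_{\mathrm{syn}}$ collapses as $V$ approaches $E_{\mathrm{syn}}$:

```python
# Steady state of an isopotential membrane patch with a leak and one synaptic input:
#   0 = -g_L * (V - E_L) - g_syn * (V - E_syn)
#   =>  V = (g_L * E_L + g_syn * E_syn) / (g_L + g_syn)
def v_steady(g_syn, g_L=1.0, E_L=-70.0, E_syn=0.0):
    return (g_L * E_L + g_syn * E_syn) / (g_L + g_syn)

def depol(g_syn):
    """Depolarisation relative to rest (E_L)."""
    return v_steady(g_syn) - v_steady(0.0)

# Doubling the synaptic conductance yields less than double the response,
# and the depolarisation saturates at E_syn - E_L for very large inputs.
r1, r2 = depol(2.0), depol(4.0)
```

Because our synapse model solves for the local voltage at the input location, this conductance-driven saturation is reproduced without any explicit dendritic compartments.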
Because we implicitly model the dendritic morphology, our synapse model is capable of generating these sub-linear responses.\n\nIn conclusion, we presented a new synapse model that computes the PSP waveforms as if they were subject to dendritic processing, without the need to explicitly simulate the dendrites themselves. With this synapse model comes the ability to simulate dendritic processing at a low computational cost, which allows its incorporation in large-scale models of neural networks. We thus made a first step towards bridging single- and multi-compartmental modeling.\n\n\\subsubsection*{Acknowledgements}\nWe thank Marc-Oliver Gewaltig for comments on the manuscript and Moritz Deger for helpful discussions. This work was supported by the BrainScaleS EU FET-proactive FP7 grant.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \nA key step in characterising the behaviour of a system is the identification of the relevant degrees of freedom. This is exemplified by Landau's theory of the Fermi liquid \\cite{landau1957,landau1959}, which offers a general description of metallic states in terms of weakly interacting fermions, degrees of freedom obeying the canonical anti-commutation relations\n\\begin{equation}\\label{can_f}\n\\{{\\bm c}_{\\sigma},{\\bm c}^\\dagger_{\\sigma'}\\}=\\delta_{\\sigma\\s'}.\n\\end{equation}\nThese account not just for the long-wavelength phenomenology, but also for the electronic band structure, and underlie powerful techniques such as density functional theory which provide a detailed description of a wide variety of materials \\cite{gross2013density}. \n\n\nSome of the most interesting materials have however resisted a description within this framework.\nChief among these are the cuprates, whose puzzling behaviour has provided the central challenge in the field of condensed matter for three decades \\cite{BednorzMuller86,ANDERSON_1987,Keimer_rev}. 
\nBeyond having some of the highest known superconducting transition temperatures, they exhibit a Mott transition, a pseudogap regime displaying a landscape of intertwined orders \\cite{Keimer_rev,Fradkin_2015}, and a strange metal regime which appears to defy a quasi-particle description \\cite{marginalFL}. Other notable examples include iron pnictides and chalcogenides \\cite{Si_2016}, heavy-fermion compounds \\cite{Gegenwart_2008}, and organic charge-transfer salts \\cite{Powell_2011}.\n\n An important question is whether canonical degrees of freedom, bosons and fermions, are sufficient to account for such behaviour \\cite{ANDERSON_1987}.\nA quantum degree of freedom is specified by the algebra it obeys, which for bosons and fermions has a schematic form $[{\\bm a},{\\bm a}]\\sim1$. \n Here we argue that strongly correlated electrons are instead governed by degrees of freedom which obey a non-canonical Lie algebra, i.e.~an algebra of the form $[{\\bm a},{\\bm a}]\\sim{\\bm a}$. The bracket again reduces the order of operators, but by one, as opposed to two in the canonical case. \n The challenge then is to control the growth of correlations generated by the Hamiltonian through $[{\\bm H},{\\bm a}]$. \n \n \n \nIn one dimension it is well understood how algebraic structures govern the behaviour of correlated electrons, through the formalism of algebraic Bethe ansatz \\cite{Faddeev_2016,EKS,Hbook,HS1}. \nThis is specialised to one dimension however, owing to enhanced symmetries resulting from the constrained geometry \\cite{ZAMOLODCHIKOV1979253}. 
\nNumerous efforts have been made to exploit Lie algebraic structures in higher dimensions \\cite{Wiegmann_1988,Forster_1989,Chaichian_1991,Kochetov_1996,Coleman_2002,Anderson08,Avella_2011,Ramires}, most notably through the formalism of Hubbard operators \\cite{Hubbard4,Vedyaev_1984,RuckensteinSR,Izyumov_1990,Ovchinnikov_2004,Izyumov_2005,PhysRevB.70.205112}, but a controlled theoretical framework has so far remained elusive. \n A significant advance has however recently been made by Shastry \\cite{Shastry_2011,Shastry_2013}, who has developed a perturbative scheme for gaining control over certain non-canonical degrees of freedom, assuming there exists a suitable expansion parameter.\n \n\nIn this work we readdress the question of how to characterise the behaviour of interacting electrons. As the electron has an inherent fermionic nature, we argue that graded Lie algebras provide the natural language for the task. We consider the two such algebras relevant for the electronic degree of freedom: $\\alg{su}(1|1)\\otimes\\alg{su}(1|1)$ and ${\\su(2|2)}$.\nThe first is the algebra of canonical fermions, Eq.~\\eqref{can_f}, which underlies the Fermi liquid description of interacting electrons. The second is closely related to the algebra of Hubbard operators, and we will exploit it to obtain a distinct controlled description of interacting electrons. In particular, we will consider an exceptional central extension of ${\\su(2|2)}$, introduced by Beisert \\cite{Beisert07,Beisert08}, which naturally provides a parameter for the use of Shastry's perturbative scheme. \n\n\n\n\nWe focus on the simplest setting where the novel features of this new controlled description can clearly be seen. \nWe will not attempt to explicitly model any given system, but instead frame our discussion around two overarching themes: the Luttinger sum rule and the Mott metal-insulator transition. 
\n\nThe Luttinger sum rule states that the volume of the region enclosed by the Fermi surface is directly proportional to the electron density, and independent of interactions.\nIt is proven to be valid for a Fermi liquid in the sense of Landau \\cite{Luttinger_1960}, but there is strong evidence that it is violated in certain strongly correlated systems, such as the cuprates in the pseudogap regime \\cite{Doiron_Leyraud_2007,Badoux_2016}. We explicitly demonstrate that $\\alg{su}(2|2)$ degrees of freedom account for a violation of the Luttinger sum rule, and thus characterise an electronic state of matter which is not a Fermi liquid. \n\nA Mott metal-insulator transition occurs when electronic correlations induce the opening of a gap within an electronic band, signifying a failure of band theory. \nThis phenomenon has played a pivotal role in the study of strongly correlated electrons, but remains incompletely understood \\cite{Imada_1998,LNWrev}. \nIt directly conflicts with the Luttinger sum rule, which implies that a partially filled band has a non-trivial Fermi surface and so is metallic. \nA controlled description consistent with Fermi liquid behaviour is however provided by dynamical mean-field theory \\cite{Metzner_1989,DMFT}, which is exact in the limit of infinite dimensions. \nHere the localisation of electronic quasi-particles is driven by the divergence of their effective mass, as previously described by Brinkman--Rice \\cite{PhysRevB.2.4302}. In contrast, we demonstrate that $\\alg{su}(2|2)$ degrees of freedom result in a splitting in two of the electronic band, each carrying a fraction of the electron's spectral weight. These bands violate the Luttinger sum rule, and a Mott transition naturally occurs when the two bands separate. In the language of the seminal review \\cite{Imada_1998}, this can be understood as a carrier-number-vanishing transition as opposed to a mass-diverging transition. 
\n We thus offer a controlled framework for characterising Mott transitions in materials, such as the cuprates, where\nthe carrier number vanishes as the transition is approached \\cite{Ando1,Ando2}.\n\n\nThe paper is structured as follows. In Sec.~\\ref{sec:dof} we consider a general lattice model of interacting electrons, and demonstrate that it can be expressed through the generators of either $\\alg{su}(1|1)\\otimes\\alg{su}(1|1)$ or ${\\su(2|2)}$. We interpret these as two ways to characterise the electronic degree of freedom. In Sec.~\\ref{sec:GF} we derive a controlled framework for organising the growth of correlations in the ${\\su(2|2)}$ regime. That is, we obtain a series of successive approximations for the electronic Green's function, which mirrors the self-energy expansion for the canonical regime. In Sec.~\\ref{sec:approx} we examine the leading approximation and find that it captures a splitting of the electronic band. We demonstrate that the Luttinger sum rule is violated, and we observe a Mott transition of carrier-number-vanishing type. Section~\\ref{sec:disc} is a discussion, where we provide further context to our results and offer some perspectives. We conclude in Sec.~\\ref{sec:conc}.\n\n\nThere are five appendices: \\ref{app:su22} reviews the graded Lie algebra ${\\su(2|2)}$, \\ref{app:params} provides explicit expressions for constants and parameters, \\ref{app:canGF} reviews the Green's function analysis for the case of a canonical fermion, \\ref{app:sch} presents a schematic overview of the Green's function analysis for non-canonical ${\\su(2|2)}$, and \\ref{app:2nd} contains the second order contributions to the ${\\su(2|2)}$ self-energy and adaptive spectral weight. \n\n\n\n\n\\section{Electronic degrees of freedom}\\label{sec:dof}\n\nWe wish to address the question of how to characterise behaviour resulting from electronic correlations. 
Let us consider a lattice with four states per site\n\\begin{equation}\\label{4states}\n\\ket{{\\mathlarger{\\mathlarger{\\circ}}}}=\\ket{0},~~~\\ket{\\downarrow}={\\bm c}^\\dagger_{{\\mathsmaller{\\downarrow}}}\\ket{0},~~~\\ket{\\uparrow}={\\bm c}^\\dagger_{{\\mathsmaller{\\uparrow}}}\\ket{0},~~~\\ket{{\\mathlarger{\\mathlarger{\\bullet}}}}={\\bm c}^\\dagger_{{\\mathsmaller{\\downarrow}}} {\\bm c}^\\dagger_{{\\mathsmaller{\\uparrow}}}\\ket{0},\n\\end{equation} \nwhich provides the Hilbert space for a single-orbital tight-binding model. We disregard disorder and lattice vibrations, focusing solely on electronic interactions. The simpler case of just the two states $\\{\\ket{\\downarrow},\\ket{\\uparrow}\\}$ at each site is relatively well understood in terms of the spin degree of freedom, governed by the Lie algebra $\\alg{su}(2)$ \\cite{Holstein_1940,Dyson_1956}. The complication in the present case is the fermionic nature of the electron, which induces a graded structure between $\\{\\ket{\\downarrow},\\ket{\\uparrow}\\}$ and $\\{\\ket{{\\mathlarger{\\mathlarger{\\circ}}}},\\ket{{\\mathlarger{\\mathlarger{\\bullet}}}}\\}$.\n\n\n\n\nFor concreteness we focus on a Hamiltonian which encompasses both the Hubbard and {$t$-$J$} models,\n\\begin{equation}\\label{eq:ham}\n{\\bm H}=\\sum_{\\braket{i,j}}{\\bm T}_{ij} + J \\sum_{\\braket{i,j}} \\vec{{\\bm s}}_i\\cdot \\vec{{\\bm s}}_j+U \\sum_i {\\bm V}^H_i -2\\mu\\sum_i \\mathlarger{\\bm \\eta}_i^z,\n\\end{equation}\non a $d$-dimensional hypercubic lattice.\nThe Heisenberg spin interaction is expressed through the local spin operators \n\\begin{equation}\\label{eq:spin}\n{\\bm s}^z=\\frac{1}{2}({\\bm n}_{{\\mathsmaller{\\uparrow}}}-{\\bm n}_{{\\mathsmaller{\\downarrow}}}),~~{\\bm s}^+={\\bm c}^\\dagger_{{\\mathsmaller{\\uparrow}}} {\\bm c}_{{\\mathsmaller{\\downarrow}}},~~{\\bm s}^-={\\bm c}^\\dagger_{{\\mathsmaller{\\downarrow}}} {\\bm c}_{{\\mathsmaller{\\uparrow}}},\n\\end{equation}\nwhich obey $ [{\\bm s}^z,{\\bm 
s}^\\pm]=\\pm{\\bm s}^\\pm$ and $[{\\bm s}^+,{\\bm s}^-]=2{\\bm s}^z$, and generate $\\alg{su}(2)$ rotations between the local spin doublet $\\{\\ket{\\downarrow},\\ket{\\uparrow}\\}$. In addition it is useful to introduce the corresponding local charge operators\n\\begin{equation}\\label{eq:charge}\n \\mathlarger{\\bm \\eta}^z=\\frac{1}{2}({\\bm n}_{{\\mathsmaller{\\uparrow}}}+{\\bm n}_{{\\mathsmaller{\\downarrow}}}-1), ~~\\mathlarger{\\bm \\eta}^+ ={\\bm c}^\\dagger_{{\\mathsmaller{\\downarrow}}} {\\bm c}^\\dagger_{{\\mathsmaller{\\uparrow}}},~~ \\mathlarger{\\bm \\eta}^- = {\\bm c}_{{\\mathsmaller{\\uparrow}}} {\\bm c}_{{\\mathsmaller{\\downarrow}}},\n\\end{equation}\nwhich obey $[\\mathlarger{\\bm \\eta}^z,\\mathlarger{\\bm \\eta}^\\pm]=\\pm\\mathlarger{\\bm \\eta}^\\pm$ and $[\\mathlarger{\\bm \\eta}^+,\\mathlarger{\\bm \\eta}^-]=2\\mathlarger{\\bm \\eta}^z$, and generate $\\alg{su}(2)$ rotations between the local charge doublet $\\{\\ket{{\\mathlarger{\\mathlarger{\\circ}}}},\\ket{{\\mathlarger{\\mathlarger{\\bullet}}}}\\}$.\nWe choose the Hubbard interaction \n\\begin{equation}\n{\\bm V}^H=({\\bm n}_{{\\mathsmaller{\\uparrow}}}-1\/2)({\\bm n}_{{\\mathsmaller{\\downarrow}}}-1\/2),\n\\end{equation}\n to be of a particle-hole symmetric form, and the chemical potential $\\mu$ couples to the charge density.\n\n\nWe take the kinetic term to be of a general correlated form \n\\begin{equation}\\label{eq:CH}\n{\\bm T}_{ij} =t(1-\\lambda) {\\bm T}^{\\circ}_{ij}+t(1+\\lambda){\\bm T}^{\\bullet}_{ij}+t_\\pm ({\\bm T}^+_{ij}+{\\bm T}^-_{ij}),\n\\end{equation}\nwhere the three parameters $t$, $\\lambda$, $t_\\pm$ decouple the terms \n\\begin{equation}\n\\begin{split}\n{\\bm T}^{\\circ}_{ij} &=- \\sum_{\\sigma={\\mathsmaller{\\downarrow}},{\\mathsmaller{\\uparrow}}} \\big({\\bm c}^\\dagger_{i\\sigma} {\\bm c}_{j\\sigma} + {\\bm c}^\\dagger_{j\\sigma} {\\bm c}_{i\\sigma}\\big)\\bar{\\bm n}_{i{\\bar{\\sigma}}}\\bar{\\bm n}_{j{\\bar{\\sigma}}},\\\\\n{\\bm T}^{\\bullet}_{ij} 
&=- \\sum_{\\sigma={\\mathsmaller{\\downarrow}},{\\mathsmaller{\\uparrow}}} \\big({\\bm c}^\\dagger_{i\\sigma} {\\bm c}_{j\\sigma} + {\\bm c}^\\dagger_{j\\sigma} {\\bm c}_{i\\sigma}\\big){\\bm n}_{i{\\bar{\\sigma}}}{\\bm n}_{j{\\bar{\\sigma}}},\\\\\n{\\bm T}^+_{ij}&=- \\sum_{\\sigma={\\mathsmaller{\\downarrow}},{\\mathsmaller{\\uparrow}}} \\big({\\bm c}^\\dagger_{i\\sigma} {\\bm c}_{j\\sigma}{\\bm n}_{i{\\bar{\\sigma}}}\\bar{\\bm n}_{j{\\bar{\\sigma}}} + {\\bm c}^\\dagger_{j\\sigma} {\\bm c}_{i\\sigma}\\bar{\\bm n}_{i{\\bar{\\sigma}}}{\\bm n}_{j{\\bar{\\sigma}}}\\big),\\\\\n{\\bm T}^-_{ij}&=- \\sum_{\\sigma={\\mathsmaller{\\downarrow}},{\\mathsmaller{\\uparrow}}} \\big({\\bm c}^\\dagger_{i\\sigma} {\\bm c}_{j\\sigma}\\bar{\\bm n}_{i{\\bar{\\sigma}}}{\\bm n}_{j{\\bar{\\sigma}}} + {\\bm c}^\\dagger_{j\\sigma} {\\bm c}_{i\\sigma}{\\bm n}_{i{\\bar{\\sigma}}}\\bar{\\bm n}_{j{\\bar{\\sigma}}}\\big),\n\\end{split}\n\\end{equation}\nwith ${\\bar{\\sigma}}=-\\sigma$ and $\\bar{\\bm n}_{\\sigma}=1-{\\bm n}_{\\sigma}$. This allows for distinct hopping amplitudes depending on the occupancy of the two sites involved by electrons of the opposite spin, see Fig.~\\ref{fig_chop}.\n\\begin{figure}[tb]\n\\centering\n\\includegraphics[width=0.55\\columnwidth]{fig_chop.pdf}\n\\caption{\\label{fig_chop}\nCorrelated hopping is when the hopping amplitude depends on how the two sites are occupied by electrons of the opposite spin. Here we illustrate the four possibilities for a hopping spin-up electron (the final two of which are hermitian conjugate). 
We argue that decoupling these amplitudes from the uncorrelated limit may induce a splitting of the electron.\n}\n\\end{figure}\nCorrelated hopping is an important interaction in, for example, charge-transfer insulators \\cite{Fujimori_1984,Zaanen_1985}, a family of materials which includes the cuprates, when described by an effective single-orbital lattice model that eliminates the low-lying ligand $p$ orbital degree of freedom \\cite{Zhang_1988,Micnas89,MarsiglioHirsch,Sim_n_1993}.\nIn addition, it has recently been shown that correlated hopping can be induced as an effective interaction of ultracold atoms in periodically driven optical lattice setups \\cite{Rapp12,Liberto2014}.\nThe {$t$-$J$} model corresponds to an extreme form of correlated hopping $\\lambda=-1$, $t_\\pm=0$, which disallows hopping processes involving doubly occupied sites. While the Hubbard and {$t$-$J$} models are often regarded as good minimal models for characterising strong correlation effects, we will see that a rich and useful structure arises by considering this more general model which encompasses them both. \n\n\nConventional band theory is founded upon having a kinetic term that is bilinear in $\\Ocd_{\\s}$, a feature that is lost when there is correlated hopping. 
We can however re-express the kinetic term through the generators of a different algebra as follows\n\\begin{equation}\\label{eq:CHQ}\n{\\bm T}_{ij}=-\\sum_{\\sigma={\\mathsmaller{\\downarrow}},{\\mathsmaller{\\uparrow}}}\\sum_{\\nu={\\circ},{\\bullet}} t_\\nu\n\t\\big({\\bm q}^\\dagger_{i\\sigma\\nu} {\\bm q}_{j\\sigma\\nu} + {\\bm q}^\\dagger_{j\\sigma\\nu} {\\bm q}_{i\\sigma\\nu}\\big),\n\\end{equation}\nwhich is now bilinear in \n\\begin{equation}\\label{Q0s}\n\\begin{split}\n{\\bm q}^\\dagger_{\\sigma{\\circ}} &=\\frac{1+\\kappa}{2}{\\bm c}_{{\\bar{\\sigma}}} - \\kappa {\\bm n}_{\\sigma}{\\bm c}_{{\\bar{\\sigma}}},\\\\\n{\\bm q}^\\dagger_{\\sigma{\\bullet}} &={\\bar{\\sigma}}\\Big(\\frac{1-\\kappa}{2}{\\bm c}^\\dagger_{\\sigma} +\\kappa {\\bm n}_{{\\bar{\\sigma}}}{\\bm c}^\\dagger_{\\sigma}\\Big) ,\n\\end{split}\n\\end{equation}\nwith hopping parameters given by\n\\begin{equation}\\label{eq:CHparam}\nt_\\nu =\n\\Big(\\frac{2\\nu }{1+\\kappa^2}+\\frac{\\lambda}{\\kappa}\\Big)t,\n\\quad \\kappa=\\sqrt{\\frac{t-t_\\pm}{t+t_\\pm}},\n\\end{equation}\nwhere $\\sigma$ takes values $-1,1$ for $\\sigma=\\downarrow,\\uparrow$, and $\\nu$ takes values $-1,1$ for $\\nu={\\mathlarger{\\mathlarger{\\circ}}},{\\mathlarger{\\mathlarger{\\bullet}}}$ respectively. The $\\OQd_{\\s\\nu}$ are the fermionic generators of the graded Lie algebra ${\\su(2|2)}$ \\cite{Beisert07,Beisert08,HS1}, summarised in Appendix \\ref{app:su22}. 
\nTheir anti-commutation relations are \n\\begin{equation}\\label{eq:su22}\n\\begin{split}\n& \\{ {\\bm q}_{\\sigma\\nu}, {\\bm q}^\\dagger_{\\sigma\\nu}\\}= \\frac{1+\\kappa^2}{4}+\\kappa (\\nu \\mathlarger{\\bm \\eta}^z - \\sigma {\\bm s}^z),\\\\\n& \\{{\\bm q}_{{\\mathsmaller{\\downarrow}}\\nu}, {\\bm q}^\\dagger_{{\\mathsmaller{\\uparrow}}\\nu}\\}=\\kappa{ {\\bm s}}^+, ~~~~~~~ \n\t\\{ {\\bm q}_{\\sigma{\\circ}},{\\bm q}^\\dagger_{\\sigma{\\bullet}}\\}= \\kappa{ \\mathlarger{\\bm \\eta}}^+,\\\\\n& \\{ {\\bm q}_{{\\mathsmaller{\\uparrow}}\\nu}, {\\bm q}^\\dagger_{{\\mathsmaller{\\downarrow}}\\nu}\\}= \\kappa{ {\\bm s}}^-, ~~~~~~~\n\t\\{{\\bm q}_{\\sigma{\\bullet}}, {\\bm q}^\\dagger_{\\sigma{\\circ}}\\}= \\kappa{\\mathlarger{\\bm \\eta}}^-,\\\\\n& \\{ {\\bm q}_{\\sigma\\nu}, {\\bm q}_{\\sigma'\\nu'}\\}=\\{ {\\bm q}^\\dagger_{\\sigma\\nu}, {\\bm q}^\\dagger_{\\sigma'\\nu'}\\}=\\frac{1-\\kappa^2}{4}\\epsilon_{\\sigma' \\sigma}\t\t\\epsilon_{\\nu \\nu'} ,\n\\end{split}\n\\end{equation}\nwith $\\epsilon_{{\\mathsmaller{\\downarrow}}{\\mathsmaller{\\uparrow}}}=-\\epsilon_{{\\mathsmaller{\\uparrow}}{\\mathsmaller{\\downarrow}}}=\\epsilon_{{\\circ}{\\bullet}}=-\\epsilon_{{\\bullet}{\\circ}}=1$. They provide a non-canonical symmetry of the electronic degree of freedom, one that interplays with spin and charge.\nThe inversion of Eqs.~\\eqref{Q0s} takes a linear form\n\\begin{equation}\\label{inv_rels}\n{\\bm c}^\\dagger_{{\\mathsmaller{\\downarrow}}} ={\\bm q}_{{\\mathsmaller{\\uparrow}}{\\circ}}+{\\bm q}^\\dagger_{{\\mathsmaller{\\downarrow}}{\\bullet}},\\quad \n{\\bm c}^\\dagger_{{\\mathsmaller{\\uparrow}}} ={\\bm q}_{{\\mathsmaller{\\downarrow}}{\\circ}}-{\\bm q}^\\dagger_{{\\mathsmaller{\\uparrow}}{\\bullet}},\n\\end{equation}\nand we refer to this as a splitting of the electron, as opposed to `fractionalisation' which takes a product form.\n\n\nWhile graded Lie algebras are not commonly referred to by name in the physics literature, they are frequently used. 
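As a concrete check, the splitting of Eq.~\eqref{inv_rels} and the anti-commutators of Eq.~\eqref{eq:su22} can be verified numerically in the four-dimensional local Fock space. The sketch below is our own illustrative check, not part of the formalism; the Jordan--Wigner matrix representation and the value of $\kappa$ are arbitrary choices:

```python
import numpy as np

# Local four-dimensional Fock space; Jordan-Wigner ordering (down, up).
I2 = np.eye(2)
a  = np.array([[0.0, 1.0], [0.0, 0.0]])   # single-mode annihilator
Z  = np.diag([1.0, -1.0])                  # fermionic sign string

c_dn, c_up = np.kron(a, I2), np.kron(Z, a)
n_dn, n_up = c_dn.T @ c_dn, c_up.T @ c_up
sz   = 0.5 * (n_up - n_dn)                 # local spin s^z
etaz = 0.5 * (n_up + n_dn - np.eye(4))     # local charge eta^z

kappa = 0.7                                # arbitrary central-extension parameter

# q-dagger operators for (sigma, nu) = (up, circle) and (down, bullet):
q_dag_up_o = 0.5 * (1 + kappa) * c_dn - kappa * n_up @ c_dn
q_dag_dn_b = 0.5 * (1 - kappa) * c_dn.T + kappa * n_up @ c_dn.T   # sigma-bar = +1

# Splitting of the electron: c^dagger_down = q_{up,circle} + q^dagger_{down,bullet}.
split_ok = np.allclose(q_dag_up_o.T + q_dag_dn_b, c_dn.T)

# Non-canonical anti-commutator for (sigma, nu) = (up, circle):
# {q, q^dagger} = (1 + kappa^2)/4 + kappa * (nu * eta^z - sigma * s^z),  nu = -1, sigma = +1.
acomm = q_dag_up_o.T @ q_dag_up_o + q_dag_up_o @ q_dag_up_o.T
rhs   = 0.25 * (1 + kappa**2) * np.eye(4) + kappa * (-etaz - sz)
acomm_ok = np.allclose(acomm, rhs)
```

Both checks pass for any value of $\kappa$, reflecting that the splitting and the anti-commutation relations hold as operator identities, including in the canonical limit $\kappa\to0$.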
Indeed, the canonical fermion algebra $\\{{\\bm c},{\\bm c}^\\dagger\\}=1$ is the graded Lie algebra $\\alg{su}(1|1)$. This is extended to $\\alg{u}(1|1)$ by adding ${\\bm n} = {\\bm c}^\\dagger {\\bm c}$, obeying $[{\\bm n} ,{\\bm c}^\\dagger]={\\bm c}^\\dagger$, $[{\\bm n} ,{\\bm c}]=-{\\bm c}$. The canonical algebra of Eq.~\\eqref{can_f} is $\\alg{su}(1|1)\\otimes\\alg{su}(1|1)$. This offers one way to characterise the electronic degree of freedom, which can be viewed as grouping the four electronic states as\n\\begin{equation}\n\\{ \\ket{{\\mathlarger{\\mathlarger{\\circ}}}};\\ket{\\downarrow}\\}\\otimes \\{ \\ket{{\\mathlarger{\\mathlarger{\\circ}}}};\\ket{\\uparrow}\\}.\n\\end{equation}\nThis canonical algebra underlies the Fermi liquid description of correlated matter.\n\n\nThe graded Lie algebra ${\\su(2|2)}$ offers an alternative way to characterise the electronic degree of freedom. Here it is useful to view the four states grouped as\n\\begin{equation}\n\\{ \\ket{\\downarrow},\\ket{\\uparrow}; \\ket{{\\mathlarger{\\mathlarger{\\circ}}}},\\ket{{\\mathlarger{\\mathlarger{\\bullet}}}}\\}.\n\\end{equation}\nThe algebra contains $\\alg{su}(2)$ spin generators $\\vec{{\\bm S}}$ acting on the first pair, $\\alg{su}(2)$ charge generators $\\vec{\\mathlarger{{\\bm \\eta}}}$ acting on the second pair, and fermionic generators $\\OQd_{\\s\\nu}$ which act between the two pairs. 
The anti-commutation relations of the $\\OQd_{\\s\\nu}$ are not canonical, but instead yield the generators $\\vec{{\\bm S}}$ and $\\vec{\\mathlarger{{\\bm \\eta}}}$ through Eqs.~\\eqref{eq:su22}.\nThe algebra can be extended to ${\\alg{u}(2|2)}$ by adding \n${\\bm \\theta}=\\kappa {\\bm V}^H=\\frac{\\kappa}{3}(\\vec{\\mathlarger{\\bm \\eta}}\\cdot\\vec{\\mathlarger{\\bm \\eta}}-\\vec{{\\bm s}}\\cdot\\vec{{\\bm s}})$, \nwhich obeys\n\\begin{equation}\\label{VHQ0}\n\\begin{split}\n\\lbrack {\\bm \\theta}, {\\bm q}^\\dagger_{\\sigma\\nu} \\rbrack &= \\frac{1+\\kappa^2}{4 } {\\bm q}^\\dagger_{\\sigma\\nu} +\\frac{1-\\kappa^2}{4 } \\epsilon_{\\sigma\\s'}\\epsilon_{\\nu\\nu'} {\\bm q}_{\\sigma'\\nu'},\\\\\n\\lbrack {\\bm \\theta}, {\\bm q}_{\\sigma\\nu} \\rbrack &= -\\frac{1+\\kappa^2}{4 } {\\bm q}_{\\sigma\\nu} -\\frac{1-\\kappa^2}{4 } \\epsilon_{\\sigma\\s'}\\epsilon_{\\nu\\nu'} {\\bm q}^\\dagger_{\\sigma'\\nu'},\n\\end{split}\n\\end{equation}\nand commutes with the spin and charge generators. This linear action of ${\\bm \\theta}$ has the consequence that the parameter $U$ plays a role akin to an additional chemical potential for the $\\OQd_{\\s\\nu}$ degrees of freedom, controlling their splitting. \nFor $\\kappa=1$, the algebra ${\\alg{u}(2|2)}$ is closely related to the Hubbard algebra \\cite{Hubbard4}, see Appendix~\\ref{app:su22}. The appearance of $\\kappa$ in the algebra formally corresponds to an exceptional central extension \\cite{Beisert07,Beisert08}. It has the role of suppressing the spin and charge generators in the anti-commutation relations Eqs.~\\eqref{eq:su22} for small $\\kappa$. We will exploit this to gain perturbative control over the growth of correlations. 
As $\\kappa\\to0$ the $\\OQd_{\\s\\nu}$ collapse pairwise onto the $\\Ocd_{\\s}$, the anti-commutation relations reduce to canonical relations of Eq.~\\eqref{can_f}, the kinetic term becomes uncorrelated, and ${\\bm \\theta}$ vanishes.\n\n\nWe thus see there are two possibilities for characterising the electronic degree of freedom: $\\alg{su}(1|1)\\otimes\\alg{su}(1|1)$ and ${\\su(2|2)}$. Both are graded algebras, which inherently take into account the grading of the four states of Eq.~\\eqref{4states}. The graded Lie algebras have been classified \\cite{kac1977lie}, and there do not appear to be other independent possibilities relevant for the single-orbital electronic problem.\n\n\nLet us emphasise that we will not consider to what extent these algebras provide explicit symmetries of a system. Instead we will examine how they govern the underlying degrees of freedom, \ni.e.~how they organise correlations. There is no fine tuning in this approach. \n\nThe canonical degree of freedom governs the Fermi liquid description of electronic matter. In the next two sections we will show \nthat ${\\su(2|2)}$ degrees of freedom underlie a controlled description of an alternative strongly correlated regime.\n\n\n\n\n\n\n\\section{Green's function analysis}\\label{sec:GF}\n\n\n\nIn the previous section we have identified two ways to characterise the electronic degree of freedom. We now demonstrate that they each offer a means to systematically organise the electronic correlations of an interacting system.\n\n\nWe focus our effort on obtaining the electronic Green's function. 
\nLet us first review how the imaginary-time formalism provides access to the retarded and advanced Green's functions\n\\begin{equation}\\label{GRA}\n\\begin{split}\nG^{\\mathrm{ret}}_{ij\\sigma}(t) &= - i \\Theta(t) \\braket{ \\{{\\bm c}_{i\\sigma}(t), {\\bm c}^\\dagger_{j\\sigma}(0)\\} } ,\\\\\nG^{\\mathrm{adv}}_{ij\\sigma}(t) &= i \\Theta(-t) \\braket{ \\{{\\bm c}_{i\\sigma}(t), {\\bm c}^\\dagger_{j\\sigma}(0)\\} },\n\\end{split}\n\\end{equation}\nwith $ \\Theta$ the Heaviside function.\nWe start with the imaginary-time thermal Green's function \n\\begin{equation}\\label{thGFcan}\n\\begin{split}\n{\\mathcal G_{ij\\sigma}}(\\tau)= &- \\braket{{\\bm c}_{i\\sigma}(\\tau) {\\bm c}^\\dagger_{j\\sigma}(0)}\\\\\n=&-\\frac{1}{\\mathcal Z}\\Tr \\Big(e^{-\\beta {\\bm H}}\\mathcal T\\big[{\\bm c}_{i\\sigma}(\\tau) {\\bm c}^\\dagger_{j\\sigma}(0)\\big]\\Big),\n\\end{split}\n\\end{equation}\nwhere $\\mathcal Z=\\Tr e^{-\\beta {\\bm H}}$, $\\beta$ is inverse temperature, ${\\bm a}(\\tau)= e^{\\tau{\\bm H}}{\\bm a} e^{-\\tau{\\bm H}}$, and $\\mathcal T$ is the $\\tau$-ordering operator which is antisymmetric under interchange of fermionic operators\n\\begin{equation}\n\\mathcal T\\big[{\\bm c}_{i\\sigma}(\\tau) {\\bm c}^\\dagger_{j\\sigma}(0)\\big] = \\Theta(\\tau) {\\bm c}_{i\\sigma}(\\tau) {\\bm c}^\\dagger_{j\\sigma}(0) - \\Theta(-\\tau) {\\bm c}^\\dagger_{j\\sigma}(0) {\\bm c}_{i\\sigma}(\\tau).\n\\end{equation}\nTaking the $\\tau$-derivative yields the equation of motion\n\\begin{equation}\\label{elEoM}\n\\partial_{\\tau} \\mathcal G_{ij\\sigma}(\\tau) = - \\delta(\\tau)\\delta_{ij}-\\braket{ [{\\bm H},{\\bm c}_{i\\sigma}(\\tau)] {\\bm c}^\\dagger_{j\\sigma}(0)}.\n\\end{equation}\nThe advantage over the real time equation of motion is the anti-periodic boundary condition \n$\\mathcal G_{ij\\sigma}(\\beta) = - \\mathcal G_{ij\\sigma}(0)$, which follows from the cyclicity of the trace and antisymmetry of $\\mathcal T$. 
The Fourier transform\n\\begin{equation}\\label{FT}\n\\mathcal G_{p\\sigma}(i\\omega_n) = \\frac{1}{\\mathcal V}\\sum_{i,j}\\int_0^{\\beta} d\\tau e^{\\mathrm i \\omega_n\\tau-\\mathrm i p(i-j)} \\mathcal G_{ij\\sigma}(\\tau),\n\\end{equation}\nis then defined at the Matsubara frequencies $\\omega_n=(2n+1)\\frac{\\pi}{\\beta}$, with $n\\in\\mathbb Z$, and $\\mathcal V$ is the total number of lattice sites. We define $G_{p\\sigma}(\\omega)$ by analytically continuing $\\mathcal G_{p\\sigma}(\\omega)$ to all non-real $\\omega$, provided it satisfies the causality condition that it has no singularities in this region. The retarded and advanced Green's functions are then obtained as\n\\begin{equation}\nG^{\\mathrm{ret}}_{p\\sigma}(\\omega)=G_{p\\sigma}(\\omega+\\mathrm i 0^+),\\quad G^{\\mathrm{adv}}_{p\\sigma}(\\omega)=G_{p\\sigma}(\\omega-\\mathrm i 0^+).\n\\end{equation}\n\n\nIt appears that the challenge of computing the Green's function revolves around solving the equation of motion, Eq.~\\eqref{elEoM}. For example if ${\\bm H}$ is bilinear in $\\Ocd_{\\s}$, say \n ${\\bm H}=- \\sum_{i,j,\\sigma} t_{ij} {\\bm c}^\\dagger_{i\\sigma} {\\bm c}_{j\\sigma}- \\mu\\sum_{i,\\sigma} {\\bm n}_{i\\sigma}$, then the equation of motion takes the form\n\\begin{equation}\\label{GF0can}\n\\begin{split}\n\\sum_k \\Big[ \\delta_{ik} \\big( &-\\partial_\\tau + \\mu\\big)+ t_{ik} \\Big] \\mathcal G_{kj\\sigma}(\\tau) = \\delta(\\tau)\\delta_{ij},\n\\end{split}\n\\end{equation}\nwhich upon Fourier transformation becomes \n\\begin{equation}\n(i\\omega_n+\\mu-\\varepsilon_p)\\mathcal G_{p\\sigma}(i\\omega_n) = 1,\n\\end{equation}\n with dispersion relation $\\varepsilon_p = -\\frac{1}{\\mathcal V}\\sum_{i,j} t_{ij} e^{\\mathrm i p(i-j)}$. 
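The Matsubara-frequency content of this algebraic equation can be checked numerically: summing its solution $(\mathrm i\omega_n + \mu - \varepsilon_p)^{-1}$ over the fermionic frequencies must reproduce the Fermi-Dirac occupation of the mode. A minimal sketch, with hypothetical parameter values; the slowly decaying $1/(\mathrm i\omega_n)$ tail, whose own frequency sum is exactly $1/2$, is subtracted so that the remainder converges absolutely:

```python
import numpy as np

beta, eps_p, mu = 2.0, 0.4, 0.1   # hypothetical inverse temperature, band energy, chemical potential
xi = eps_p - mu
n = np.arange(-200_000, 200_000)
wn = (2 * n + 1) * np.pi / beta   # fermionic Matsubara frequencies

# G_p(i w_n) = 1 / (i w_n - xi). Subtract the 1/(i w_n) tail, which resums
# to 1/2, so the remaining sum converges absolutely (terms ~ 1/w_n^2).
occ = 0.5 + np.sum(1.0 / (1j * wn - xi) - 1.0 / (1j * wn)).real / beta

print(np.isclose(occ, 1.0 / (np.exp(beta * xi) + 1.0), atol=1e-5))   # True
```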
Inverting, and analytically continuing $\\mathcal G_{p\\sigma}(\\omega)$ to all non-real $\\omega$, results in the non-interacting Green's function\n\\begin{equation}\nG_{p\\sigma}(\\omega) = \\frac{1}{\\omega+\\mu-\\varepsilon_p}.\n\\end{equation}\nThe Hamiltonian of Eq.~\\eqref{eq:ham} is not bilinear in $\\Ocd_{\\s}$ however. It contains both biquadratic and bicubic terms, and these induce correlations in the system. \n\n\nOne way to proceed is to investigate how the growth of correlations is controlled by Eq.~\\eqref{elEoM}, with a perturbative treatment of the interactions. This leads to the canonical description of correlated electrons which underlies the Fermi liquid \\cite{Abrikosov,kadanoff1962quantum}. We review this in Appendix~\\ref{app:canGF} for the case of spinless fermions. Our subsequent analysis parallels the discussion there, and the reader may find it useful to contrast the two. \n\n\nWe now however take an alternative route, and consider the Green's functions of the ${\\su(2|2)}$ degrees of freedom, e.g.~$\\braket{{\\bm q}_{i\\sigma\\nu}(\\tau) {\\bm q}^\\dagger_{j\\sigma'\\nu'}(0)}$. We will use their equation of motion to \ngain control of correlations, employing the Green's function factorisation technique recently pioneered by Shastry \\cite{Shastry_2011,Shastry_2013}. As the splitting of Eqs.~\\eqref{inv_rels} is linear, the electronic Green's functions $\\mathcal G_{ij\\sigma}(\\tau)$ are immediately reobtained through linear combinations of the ${\\su(2|2)}$ Green's functions. In this way we gain access to a regime of strongly correlated behaviour. \n\n\nWe will continue our analysis in an explicit manner. While this obscures the presentation to a certain extent, it has the benefit of avoiding ambiguity. We complement this with Appendix~\\ref{app:sch} which contains a schematic summary of the derivation.\n\n\n\nIt is useful to introduce some simplifying notations. 
We collect the fermionic generators as\n\\begin{equation}\n{\\bm \\psi}_i^\\alpha = \\left(\\begin{array}{cccccccc}\n{\\bm q}^\\dagger_{i{\\mathsmaller{\\uparrow}}{\\circ}}&{\\bm q}_{i{\\mathsmaller{\\downarrow}}{\\bullet}}& {\\bm q}^\\dagger_{i{\\mathsmaller{\\downarrow}}{\\circ}}&{\\bm q}_{i{\\mathsmaller{\\uparrow}}{\\bullet}}&\n {\\bm q}_{i{\\mathsmaller{\\uparrow}}{\\circ}}&{\\bm q}^\\dagger_{i{\\mathsmaller{\\downarrow}}{\\bullet}}&{\\bm q}_{i{\\mathsmaller{\\downarrow}}{\\circ}}&{\\bm q}^\\dagger_{i{\\mathsmaller{\\uparrow}}{\\bullet}}\n \\end{array}\\right),\n \\end{equation}\nwith greek indices, and the bosonic generators as\n\\begin{equation}\n {\\bm \\phi}_i^a = \\left(\\begin{array}{cccccc}\n {\\bm s}_i^z&{\\bm s}_i^-&{\\bm s}_i^+ &\\mathlarger{\\bm \\eta}_i^z&\\mathlarger{\\bm \\eta}_i^-&\\mathlarger{\\bm \\eta}_i^+ \n\\end{array}\\right),\n\\end{equation} \nwith latin indices. The ${\\su(2|2)}$ algebra is then compactly expressed as \n\\begin{equation}\\label{su22alg}\n\\begin{split}\n\\{ {\\bm \\psi}_i^\\alpha,{\\bm \\psi}_j^\\beta\\} & = \\delta_{ij} \\big( f^{\\alpha\\beta}{}_I + f^{\\alpha\\beta}{}_a {\\bm \\phi}_i^a\\big),\\\\\n [ {\\bm \\phi}_i^a,{\\bm \\psi}_j^\\beta] &= \\delta_{ij}f^{a \\beta}{}_\\gamma {\\bm \\psi}_i^\\gamma,\\quad\\\\\n [ {\\bm \\phi}_i^a,{\\bm \\phi}_j^b] &= \\delta_{ij}f^{ab}{}_c {\\bm \\phi}_i^c,\n \\end{split}\n\\end{equation}\n and the extension to ${\\alg{u}(2|2)}$ is given by\n\\begin{equation}\n [ {\\bm \\theta}_i,{\\bm \\psi}_j^\\alpha] = \\delta_{ij}f^{\\Theta \\alpha}{}_\\beta {\\bm \\psi}_i^\\beta,\\quad [ {\\bm \\theta}_i,{\\bm \\phi}_j^a] =0.\n\\end{equation}\nSummation over repeated algebraic indices is implied, and we collect the structure constants $f$ in Appendix~\\ref{app:params}. 
\n\n\nWe now consider the Hamiltonian \n\\begin{equation}\\label{hamTTR}\n\\begin{split}\n{\\bm H} =& -\\frac{1}{2}\\sum_{i,j} t_{ij,\\alpha\\beta} {\\bm \\psi}_i^\\alpha {\\bm \\psi}_{j}^\\beta \n\t+\\frac{1}{2}\\sum_{i,j} V_{ij,ab} {\\bm \\phi}_i^a {\\bm \\phi}_j^b\\\\\n\t & \\qquad - \\mu_{a} \\sum_i {\\bm \\phi}_i^a+\\tilde{U}\\sum_i {\\bm \\theta}_i,\n\\end{split}\n\\end{equation}\nwith hopping and interaction parameters obeying $t_{ii,\\alpha\\beta}=0$, $t_{ji,\\alpha\\beta}=t_{ij,\\alpha\\beta} $, $t_{ij,\\beta\\alpha}=-t_{ij,\\alpha\\beta} $ and $V_{ii,ab}=0$, $V_{ji,ab}=V_{ij,ab}$, $V_{ij,ba}=V_{ij,ab}$, chemical potentials $\\mu_a=( h~0~0~2\\mu~0~0)$, and $\\tilde{U}=U\/\\kappa$. This model is extremely general, as the sixteen generators $\\{8\\times {\\bm \\psi},6\\times {\\bm \\phi}, 1,{\\bm \\theta}\\}$ provide a complete basis for the local operators at each site.\nThis reflects the wide range of applicability of our approach, though we remind that it is important for the model to have correlated hopping. We include the specific hopping and interaction parameters corresponding to the Hamiltonian of Eq.~\\eqref{eq:ham} in Appendix~\\ref{app:params}. \n\nTo introduce the Green's function of the ${\\bm q}$ it is useful to first set a matrix structure via \n\\begin{equation}\\label{eq:metric}\n{\\bm \\psi}_{i\\alpha} = {\\bm \\psi}_i^\\beta K_{\\beta\\alpha}= \\big({\\bm \\psi}_i^\\alpha\\big)^\\dagger,\n\\end{equation}\ndefining a metric $K$, presented explicitly in Appendix~\\ref{app:params}. 
\nOur object of study is then the matrix Green's function \n\\begin{equation}\n\\mathcal G_{ij}{}^\\alpha_\\beta (\\tau,\\tau')= -{\\braket{{\\bm \\psi}_{i}^\\alpha(\\tau) {\\bm \\psi}_{j\\beta}(\\tau')}}.\n\\end{equation}\nAs highlighted above, the electronic Green's function is directly obtained from linear combinations of these, via Eqs.~\\eqref{inv_rels},\n\\begin{equation}\\label{GeGQ}\n\\begin{split}\n\\mathcal G_{ij{\\mathsmaller{\\downarrow}}}(\\tau) &= {\\mathcal G_{ij}}^1_1(\\tau)+{\\mathcal G_{ij}}^1_2(\\tau)+{\\mathcal G_{ij}}^2_1(\\tau)+{\\mathcal G_{ij}}^2_2(\\tau),\\\\\n\\mathcal G_{ij{\\mathsmaller{\\uparrow}}}(\\tau) &= {\\mathcal G_{ij}}^3_3(\\tau)-{\\mathcal G_{ij}}^3_4(\\tau)-{\\mathcal G_{ij}}^4_3(\\tau)+{\\mathcal G_{ij}}^4_4(\\tau),\n\\end{split}\n\\end{equation}\nwith $\\mathcal G_{ij}{}^\\alpha_\\beta (\\tau)=\\mathcal G_{ij}{}^\\alpha_\\beta (\\tau,0)$.\nIn addition, as the bosonic generators $\\vec{{\\bm S}}$ and $\\vec{\\mathlarger{{\\bm \\eta}}}$ are quadratic in ${\\bm c}$, see Eqs.~\\eqref{eq:spin} and \\eqref{eq:charge}, we can also use Eqs.~\\eqref{inv_rels} to obtain\n\\begin{equation}\\label{GtoT}\n\\begin{split}\n\\braket{{\\bm \\phi}_i^a(\\tau)} &= \\varphi^a{}^\\alpha_\\beta \\mathcal G_{ii}{}^\\beta_\\alpha (\\tau,\\tau^+),\n\\end{split}\n\\end{equation}\nwith coefficients $\\varphi^a{}^\\alpha_\\beta$ which are independent of $\\kappa$, presented explicitly in Appendix~\\ref{app:params}.\n\n\nAlthough the Hamiltonian is at most bilinear in the generators of ${\\su(2|2)}$, correlations are nevertheless induced as a result of the non-canonical nature of the algebra. 
\nTo handle these we incorporate sources for the ${\\bm \\phi}$ into the imaginary-time thermal expectation value as follows\n\\begin{equation}\\label{ev_source}\n \\braket{ \\mathcal O(\\tau_1,\\tau_2,\\ldots)} = \\frac{\\Tr \\Big( e^{-\\beta {\\bm H}} \\mathcal T \\big[e^{\\int_0^\\beta d\\tau \\mathcal S(\\tau)} \\mathcal O(\\tau_1,\\tau_2,\\ldots) \\big] \\Big)}{\\Tr \\big( e^{-\\beta H} \\mathcal T [e^{\\int_0^\\beta d\\tau \\mathcal S(\\tau)}] \\big)},\n\\end{equation}\nwith $\\mathcal S(\\tau) = \\sum_i J_{ia}(\\tau) {\\bm \\phi}^a_i(\\tau)$, and we consider all $\\tau$ to take values on the interval $(0,\\beta)$.\nThe source term breaks translational invariance in both time and space, providing a means of organising correlations by trading bosonic correlations for their variations through \n\\begin{equation}\n\\begin{split}\n\\nabla_i^a(\\tau)\\braket{\\mathcal O(\\tau_1,\\tau_2,\\ldots,\\tau_n)}= &\\braket{{\\bm \\phi}^a_i(\\tau)\\mathcal O(\\tau_1,\\tau_2,\\ldots,\\tau_n)}\\\\\n\t& -\\braket{{\\bm \\phi}^a_i(\\tau)}\\braket{\\mathcal O(\\tau_1,\\tau_2,\\ldots,\\tau_n)},\n\\end{split}\n\\end{equation}\nwhere $\\nabla_i^a(\\tau) = \\frac{{\\delta} }{ {\\delta} J_{ia}(\\tau^+)}$ denotes the functional derivative, and $\\tau^+=\\tau+0^+$ incorporates an infinitesimal regulator which ensures a consistent ordering when $\\tau$ is one of the $\\tau_1,\\tau_2,\\ldots,\\tau_n$. At the end of the computation the sources will be set to zero without difficulty, restoring translational invariance. 
\n\n\n\nAs for the electronic Green's function, there is again the anti-periodic boundary condition \n\\begin{equation}\n\\mathcal G_{ij}{}^\\alpha_\\beta(\\beta,\\tau) = - \\mathcal G_{ij}{}^\\alpha_\\beta(0,\\tau).\n\\end{equation}\nThe equation of motion \n\\begin{equation}\n\\begin{split}\n\\partial_{\\tau} \\mathcal G_{ij}{}^\\alpha_\\beta(\\tau,\\tau') = -\\delta(\\tau-\\tau')\\braket{\\{{\\bm \\psi}_{i}^\\alpha(\\tau), {\\bm \\psi}_{j\\beta}(\\tau)\\}}&\\\\\n\t + \\braket{[\\mathcal S(\\tau), {\\bm \\psi}^\\alpha_i(\\tau)]{\\bm \\psi}_{j\\beta}(\\tau')}&\\\\\n\t - \\braket{ [{\\bm H},{\\bm \\psi}^\\alpha_i(\\tau)]{\\bm \\psi}_{j\\beta}(\\tau') }&,\n\\end{split}\n\\end{equation}\npicks up an additional contribution from the source term, a consequence of the $\\tau$-ordering operator. \nThe first two terms are straightforwardly evaluated from Eqs.~\\eqref{su22alg}\n\\begin{equation}\n\\begin{split}\n\\braket{\\{{\\bm \\psi}_{i}^\\alpha(\\tau), {\\bm \\psi}_{j\\beta}(\\tau)\\}} &=\\delta_{ij}\\big( f^{\\alpha\\gamma}{}_I + f^{\\alpha\\gamma}{}_a \\braket{{\\bm \\phi}_i^a(\\tau)}\n\t\\big)K_{\\gamma\\beta},\\\\\n\\braket{[\\mathcal S(\\tau), {\\bm \\psi}^\\alpha_i(\\tau)]{\\bm \\psi}_{j\\beta}(\\tau')} &= - f^{a\\alpha}{}_\\gamma J_{ia}(\\tau) \\mathcal G_{ij}{}^\\gamma_\\beta(\\tau,\\tau').\n\\end{split}\n\\end{equation}\n\\begin{widetext}\\noindent\nThe commutator in the final term is\n\\begin{equation}\n\\begin{split}\n\\lbrack{\\bm H},{\\bm \\psi}^\\alpha_i\\rbrack =\n\t\\sum_{l}\\big[ f^{\\alpha\\delta}{}_I t_{il,\\delta\\gamma} {\\bm \\psi}_l^\\gamma\n\t+ f^{\\alpha\\delta}{}_a t_{il,\\delta\\gamma} {\\bm \\phi}_i^a {\\bm \\psi}_l^\\gamma\\big]\n\t+ \\sum_{l}f^{a\\alpha}{}_\\gamma V_{il,ab}{\\bm \\phi}_l^b {\\bm \\psi}_i^\\gamma \n\t- \\mu_a f^{a\\alpha}{}_\\gamma {\\bm \\psi}_i^\\gamma + \\tilde{U} f^{\\Theta\\alpha}{}_\\gamma {\\bm \\psi}_i^\\gamma,\n\\end{split}\n\\end{equation}\nand, recasting the bosonic correlations as variations of the sources, we 
obtain\n\\begin{equation}\n\\begin{split}\n\\braket{\\lbrack{\\bm H},{\\bm \\psi}^\\alpha_i(\\tau)\\rbrack {\\bm \\psi}_{j\\beta}(\\tau')}=& (\\mu_a f^{a\\alpha}{}_\\gamma - \\tilde{U} f^{\\Theta\\alpha}{}_\\gamma) \\mathcal G_{ij}{}^\\gamma_\\beta(\\tau,\\tau') \n\t -\\sum_l f^{a\\alpha}{}_\\gamma V_{il,ab} \\big(\\braket{{\\bm \\phi}_l^b(\\tau)}\n\t\t+ \\nabla_l^b(\\tau) \\big) \\mathcal G_{ij}{}^\\gamma_\\beta(\\tau,\\tau')\\\\\n\t&- \\sum_l f^{\\alpha\\delta}{}_I t_{il,\\delta\\gamma} \\mathcal G_{lj}{}^\\gamma_\\beta(\\tau,\\tau')\n\t- \\sum_l f^{\\alpha\\delta}{}_a t_{il,\\delta\\gamma} \\big(\n\t\t\\braket{{\\bm \\phi}_i^a(\\tau)}+\\nabla_i^a(\\tau)\\big) \\mathcal G_{lj}{}^\\gamma_\\beta(\\tau,\\tau').\n\\end{split}\n\\end{equation} \nCollecting these expressions, the equation of motion takes the form\n\\begin{equation}\\label{GFeqn}\n\\begin{split}\n\\sum_k \\Big[ \n\\delta_{ik}\\Big(-\\delta^\\alpha_\\gamma \\partial_{\\tau} -f^{a\\alpha}{}_\\gamma J_{ia}(\\tau) \n - \\mu_a f^{a\\alpha}{}_\\gamma + \\tilde{U} f^{\\Theta\\alpha}{}_\\gamma + \\sum_l f^{a\\alpha}{}_\\gamma V_{il,ab} \\big(\\braket{{\\bm \\phi}_l^b(\\tau)}\n\t\t + \\nabla_l^b(\\tau) \\big)\\Big) ~~~~~~~~~~~~~~~ &\\\\\n+ f^{\\alpha\\delta}{}_I t_{ik,\\delta\\gamma}\n\t+f^{\\alpha\\delta}{}_a t_{ik,\\delta\\gamma} \\big( \\braket{{\\bm \\phi}_i^a(\\tau)} \n\t + \\nabla_i^a(\\tau) \\big)\\Big] \\mathcal G_{kj}{}^\\gamma_\\beta(\\tau,\\tau')&\\\\\n= \\delta(\\tau-\\tau')\\delta_{ij}\\big(f^{\\alpha\\gamma}{}_I+ f^{\\alpha\\gamma}{}_a\\braket{{\\bm \\phi}^a_i(\\tau)}\\big) K_{\\gamma\\beta}&.\n\\end{split}\n\\end{equation}\n\nWe want to obtain solutions to this equation. Its analogue in the canonical case is Eq.~\\eqref{canGFeqn}, to which it has a very similar structure. The primary complication of the non-canonical degree of freedom is the appearance of $\\braket{{\\bm \\phi}}$ on the right-hand side, which indicates that the spectral weight of the Green's function is dressed by correlations. 
Here it depends explicitly on $\\mathcal G$ through Eq.~\\eqref{GtoT}. A technique for overcoming this difficulty has been pioneered by Shastry \\cite{Shastry_2011,Shastry_2013}: the trick is to factorise $\\mathcal G$ into its numerator and denominator, and obtain a coupled controlled description of both \\cite{Shastry11_Anatomy}. In practice we write \\footnote{The asymmetry in this factorisation $\\mathcal G=\\mathpzc g\\mathpzc w$ results from considering the equation of motion $\\partial_\\tau \\mathcal G(\\tau,\\tau')$. Alternatively we could consider $\\partial_{\\tau'} \\mathcal G(\\tau,\\tau')$, and then factorise the Green's function as $\\mathcal G=\\mathpzc w\\mathpzc g$.}\n\\begin{equation}\\label{Shansatz}\n\\mathcal G_{ij}{}^\\alpha_\\beta(\\tau,\\tau') = \\sum_l \\int_0^\\beta d\\tau'' \\mathpzc g_{il}{}^\\alpha_\\gamma(\\tau,\\tau'') \\mathpzc w_{lj}{}^\\gamma_\\beta(\\tau'',\\tau').\n\\end{equation}\nThe functional derivative in Eq.~\\eqref{GFeqn} then gives two contributions\n\\begin{equation}\n\\nabla_l(\\tau'') \\mathcal G_{ij}{}^\\alpha_\\beta(\\tau,\\tau') = \\sum_k \\int_0^\\beta d\\tau''' \\Big[ \\Big(\\nabla_l(\\tau'')\\mathpzc g_{ik}{}^\\alpha_\\gamma(\\tau,\\tau''')\\Big) \\mathpzc w_{kj}{}^\\gamma_\\beta(\\tau''',\\tau')+ \\mathpzc g_{ik}{}^\\alpha_\\gamma(\\tau,\\tau''') \\Big(\\nabla_l(\\tau'')\\mathpzc w_{kj}{}^\\gamma_\\beta(\\tau''',\\tau')\\Big)\\Big].\n\\end{equation}\nSubstituting these into Eq.~\\eqref{GFeqn}, and bringing the terms with $\\nabla\\mathpzc w$ to the right-hand side, \npermits a factorisation of the equation of motion. 
\nSetting\n\\begin{equation}\\label{eq:cW}\n\\begin{split}\n \\mathpzc w_{ij}{}^\\alpha_\\beta(\\tau,\\tau') = \\delta(\\tau-\\tau')\\delta_{ij}\\big(f^{\\alpha\\gamma}{}_I+ f^{\\alpha\\gamma}{}_a\\braket{{\\bm \\phi}^a_i(\\tau)}\\big)K_{\\gamma\\beta} -\\sum_{k,l}\\int_0^\\beta d\\tau'' \\Big( \n &f^{\\alpha\\delta}{}_a t_{il,\\delta\\epsilon} \\mathpzc g_{lk}{}^\\epsilon_\\gamma(\\tau,\\tau'') \\nabla^a_i(\\tau)\\mathpzc w_{kj}{}^\\gamma_\\beta(\\tau'',\\tau') \\\\\n &+f^{a\\alpha}{}_\\delta V_{il,ab} \\mathpzc g_{ik}{}^\\delta_\\gamma(\\tau,\\tau'') \\nabla^b_l(\\tau)\\mathpzc w_{kj}{}^{\\gamma}_\\beta(\\tau'',\\tau') \\Big),\n\\end{split}\n \\end{equation}\n fixes the ratio between the two factors in Eq.~\\eqref{Shansatz},\nwith the remainder satisfying \n\\begin{equation}\\label{eq:gT}\n\\begin{split}\n\\sum_k \\Big[ \n\\delta_{ik}\\Big(-\\delta^\\alpha_\\gamma \\partial_{\\tau} -f^{a\\alpha}{}_\\gamma J_{ia}(\\tau) \n - \\mu_a f^{a\\alpha}{}_\\gamma + \\tilde{U} f^{\\Theta\\alpha}{}_\\gamma + \\sum_l f^{a\\alpha}{}_\\gamma V_{il,ab} \\big(\\braket{{\\bm \\phi}_l^b(\\tau)}\n\t\t + \\nabla_l^b(\\tau) \\big)\\Big)& \\\\\n ~~~~~~~~~~~~~~~~~~~~~~~~~~~ + f^{\\alpha\\delta}{}_I t_{ik,\\delta\\gamma}\n\t+ f^{\\alpha\\delta}{}_a t_{ik,\\delta\\gamma} \\big( \\braket{{\\bm \\phi}_i^a(\\tau)} \n\t + \\nabla_i^a(\\tau) \\big)&\\Big] \\mathpzc g_{kj}{}^\\gamma_\\beta(\\tau,\\tau')\t= \\delta(\\tau-\\tau')\\delta_{ij}\\delta^\\alpha_\\beta.\n\\end{split}\n\\end{equation}\nThese two coupled equations are an exact rewriting of the equation of motion Eq.~\\eqref{GFeqn}.\nWe call $\\mathpzc g$ the canonised Green's function and $\\mathpzc w$ the spectral weight.\n\n\n\nWe proceed by introducing two functionals $\\Sigma[\\mathpzc g,\\mathpzc w]$ and $ \\cW[\\mathpzc g,\\mathpzc w]$ of the full $\\mathpzc g$ and $\\mathpzc w$ as follows \\footnote{Refs.~\\cite{Shastry_2011,Shastry_2013} treats these as functionals of $\\mathpzc g$ only, corresponding to a perturbative expansion 
of $\\mathpzc w$.}. We \ndefine the self-energy $\\Sigma$ through\n\\begin{equation}\\label{gTI}\n\\mathpzc g^{-1}_{ij}{}^\\alpha_\\beta(\\tau,\\tau') = \\mathpzc g_{0,ij}^{-1}{}^\\alpha_\\beta(\\tau,\\tau') - \\Sigma_{ij}{}^\\alpha_\\beta(\\tau,\\tau'),\n\\end{equation}\nwhere $\\mathpzc g_0$ satisfies\n\\begin{equation}\n \\Big[ \n\\delta_{ik}\\big(-\\delta^\\alpha_\\gamma \\partial_{\\tau} -f^{a\\alpha}{}_\\gamma J_{ia}(\\tau) \n - \\mu_a f^{a\\alpha}{}_\\gamma + \\tilde{U} f^{\\Theta\\alpha}{}_\\gamma\\big) + f^{\\alpha\\delta}{}_I t_{ik,\\delta\\gamma}\n \t\t\\Big] \\mathpzc g_{0,kj}{}^\\gamma_\\beta(\\tau,\\tau')\\\\\n\t= \\delta(\\tau-\\tau')\\delta_{ij}\\delta^\\alpha_\\beta,\n\\end{equation}\nand the adaptive spectral weight $\\cW$ through\n\\begin{equation}\\label{wT}\n\\mathpzc w_{ij}{}^\\alpha_\\beta(\\tau,\\tau') = \\mathpzc w_{0,ij}{}^\\alpha_\\beta(\\tau,\\tau') + \\cW_{ij}{}^\\alpha_\\beta(\\tau,\\tau'),\n\\end{equation}\nwith\n\\begin{equation}\n\\mathpzc w_{0,ij}{}^\\alpha_\\beta(\\tau,\\tau')= \\delta(\\tau-\\tau')\\delta_{ij}f^{\\alpha\\gamma}{}_I K_{\\gamma\\beta}.\n\\end{equation}\nWe obtain a closed equation for $\\Sigma$ by convolving Eq.~\\eqref{eq:gT} on the right with $\\mathpzc g^{-1}$, which gives\n\\begin{equation}\\label{Sg0}\n\\begin{split}\n\\Sigma_{ij}{}^\\alpha_\\beta(\\tau,\\tau') =& - \\delta(\\tau-\\tau')\\Big(\n\tf^{\\alpha\\gamma}{}_a t_{ij,\\gamma\\beta} \\braket{{\\bm \\phi}_i^a(\\tau)} +\n\t\\delta_{ij} \\sum_l f^{a\\alpha}{}_\\beta V_{il,ab} \\braket{{\\bm \\phi}_l^b(\\tau)}\n\t\\Big)\\\\\n&- \\delta(\\tau-\\tau')\\Big(\n\t\\delta_{ij}\\sum_l f^{\\alpha\\delta}{}_a t_{il,\\delta\\epsilon} \\mathpzc g_{li}{}^\\epsilon_\\gamma(\\tau,\\tau^+)f^{a\\gamma}{}_\\beta\n \t+f^{a\\alpha}{}_\\delta V_{ij,ab} \\mathpzc g_{ij}{}^\\delta_\\gamma(\\tau,\\tau^+)f^{b\\gamma}{}_\\beta \n\t\\Big)\\\\\n&-\\sum_{k,l}\\int_0^\\beta d\\tau'' \\Big(\n\tf^{\\alpha\\delta}{}_a t_{il,\\delta\\epsilon} \\mathpzc 
g_{lk}{}^\\epsilon_\\gamma(\\tau,\\tau'') \\nabla_i^a(\\tau)\\Sigma_{kj}{}^{\\gamma}_\\beta(\\tau'',\\tau')\n\t+f^{a\\alpha}{}_\\delta V_{il,ab} \\mathpzc g_{ik}{}^\\delta_\\gamma(\\tau,\\tau'') \\nabla_l^b(\\tau)\\Sigma_{kj}{}^{\\gamma}_\\beta(\\tau'',\\tau')\n\t\\Big),\n\\end{split}\n\\end{equation}\nupon using $(\\nabla \\mathpzc g)\\mathpzc g^{-1}=-\\mathpzc g\\nabla\\mathpzc g^{-1}= -\\mathpzc g\\nabla\\mathpzc g_0^{-1} + \\mathpzc g\\nabla\\Sigma$, with\n\\begin{equation}\n\\nabla_l^a(\\tau'')\\mathpzc g_{0,ij}^{-1}{}^\\alpha_\\beta(\\tau,\\tau')=-\\delta(\\tau-\\tau')\\delta(\\tau-\\tau''-0^+)\\delta_{ij}\\delta_{il}f^{a\\alpha}{}_\\beta.\n\\end{equation}\nA closed equation for $\\cW$ follows directly from Eq.~\\eqref{eq:cW},\n\\begin{equation}\\label{SW0}\n\\begin{split}\n \\cW_{ij}{}^\\alpha_\\beta(\\tau,\\tau') = \\delta(\\tau-\\tau')\\delta_{ij} f^{\\alpha\\gamma}{}_aK_{\\gamma\\beta} \\braket{{\\bm \\phi}^a_i(\\tau)} -\\sum_{k,l}\\int_0^\\beta d\\tau'' \\Big( \n &f^{\\alpha\\delta}{}_a t_{il,\\delta\\epsilon} \\mathpzc g_{lk}{}^\\epsilon_\\gamma(\\tau,\\tau'') \\nabla^a_i(\\tau)\\cW_{kj}{}^\\gamma_\\beta(\\tau'',\\tau') \\\\\n &+f^{a\\alpha}{}_\\delta V_{il,ab} \\mathpzc g_{ik}{}^\\delta_\\gamma(\\tau,\\tau'') \\nabla^b_l(\\tau)\\cW_{kj}{}^\\gamma_\\beta(\\tau'',\\tau')\\Big).\n\\end{split}\n \\end{equation}\n\nEquations \\eqref{Sg0} and \\eqref{SW0} are exact. We now obtain successive approximate solutions with a perturbative expansion in $\\kappa$. We introduce rescaled parameters $\n\\tilde{f}^{\\alpha\\beta}{}_a= f^{\\alpha\\beta}{}_a\/\\kappa$, $\\tilde{V}_{ij,ab} = V_{ij,ab}\/\\kappa$,\nso that $t_{ij,\\alpha\\beta}$, $\\tilde{V}_{ij,ab}$, $f^{a\\alpha}{}_\\beta$ and $\\tilde{f}^{\\alpha\\beta}{}_a$ are all independent of $\\kappa$, and write $\\Sigma = \\sum_{s=0}^\\infty \\kappa^s [\\Sigma]_s$ and $\\cW = \\sum_{s=0}^\\infty \\kappa^s [\\cW]_s$. 
The\nleading contributions are\n\\begin{equation}\\label{SW1}\n\\begin{split}\n\\lbrack \\Sigma_{ij}{}^\\alpha_\\beta(\\tau,\\tau') \\rbrack_1= &-\\delta(\\tau-\\tau')\\sum_{k}\\int_0^\\beta d\\tau'' \\Big(\n\t\\tilde{f}^{\\alpha\\gamma}{}_a t_{ij,\\gamma\\beta} \\varphi^a{}^\\rho_\\sigma \\mathpzc g_{ik}{}^\\sigma_\\lambda(\\tau,\\tau'') \\mathpzc w_{ki}{}^\\lambda_\\rho(\\tau'',\\tau^+)\\\\\n\t&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +\\delta_{ij} \\sum_l f^{a\\alpha}{}_\\beta \\tilde{V}_{il,ab} \n\t\t\\varphi^b{}^\\rho_\\sigma \\mathpzc g_{lk}{}^\\sigma_\\lambda(\\tau,\\tau'') \\mathpzc w_{kl}{}^\\lambda_\\rho(\\tau'',\\tau^+)\n\t\\Big),\\\\\n&- \\delta(\\tau-\\tau')\\Big(\n\t\\delta_{ij}\\sum_l \\tilde{f}^{\\alpha\\delta}{}_a t_{il,\\delta\\epsilon} \\mathpzc g_{li}{}^\\epsilon_\\gamma(\\tau,\\tau^+)f^{a\\gamma}{}_\\beta\n \t+f^{a\\alpha}{}_\\delta \\tilde{V}_{ij,ab} \\mathpzc g_{ij}{}^\\delta_\\gamma(\\tau,\\tau^+)f^{b\\gamma}{}_\\beta \n\t\\Big)\\\\\n[ \\cW_{ij}{}^\\alpha_\\beta(\\tau,\\tau') ]_1=& \\delta(\\tau-\\tau')\\delta_{ij}\\sum_{k}\\int_0^\\beta d\\tau'' \n\t \\tilde{f}^{\\alpha\\gamma}{}_a K_{\\gamma\\beta} \\varphi^a{}^\\rho_\\sigma \\mathpzc g_{ik}{}^\\sigma_\\lambda(\\tau,\\tau'') \\mathpzc w_{ki}{}^\\lambda_\\rho(\\tau'',\\tau^+).\n\\end{split}\n\\end{equation}\nHigher order terms are then obtained recursively through\n\\begin{equation}\\label{SgWrec}\n\\begin{split}\n\\lbrack\\Sigma_{ij}{}^\\alpha_\\beta(\\tau,\\tau')\\rbrack_{s+1} & = -\\sum_{k,l}\\int_0^\\beta d\\tau'' \\Big(\n\t\\tilde{f}^{\\alpha\\delta}{}_a t_{il,\\delta\\epsilon} \\mathpzc g_{lk}{}^\\epsilon_\\gamma(\\tau,\\tau'') \\nabla_i^a(\\tau)[\\Sigma_{kj}{}^{\\gamma}_\\beta(\\tau'',\\tau')]_s\n\t+f^{a\\alpha}{}_\\delta \\tilde{V}_{il,ab} \\mathpzc g_{ik}{}^\\delta_\\gamma(\\tau,\\tau'') \\nabla_l^b(\\tau)[\\Sigma_{kj}{}^{\\gamma}_\\beta(\\tau'',\\tau')]_s\n\t\\Big),\\\\\n[\\cW_{ij}{}^\\alpha_\\beta(\\tau,\\tau')]_{s+1} &= -\\sum_{k,l}\\int_0^\\beta d\\tau'' \\Big( 
\n\t\\tilde{f}^{\\alpha\\delta}{}_a t_{il,\\delta\\epsilon} \\mathpzc g_{lk}{}^\\epsilon_\\gamma(\\tau,\\tau'') \\nabla^a_i(\\tau)[\\cW_{kj}{}^\\gamma_\\beta(\\tau'',\\tau')]_s \t\n\t+f^{a\\alpha}{}_\\delta \\tilde{V}_{il,ab} \\mathpzc g_{ik}{}^\\delta_\\gamma(\\tau,\\tau'') \\nabla^b_l(\\tau)[\\cW_{kj}{}^\\gamma_\\beta(\\tau'',\\tau')]_s\n\t\\Big).\n\\end{split}\n\\end{equation}\nThese depend on the sources only through $\\mathpzc g$ and $\\mathpzc w$, and at each order we need use only the leading contributions from\n\\begin{equation}\n\\nabla_l^a(\\tau'') \\mathpzc g_{ij}{}^\\alpha_\\beta(\\tau,\\tau') = \\mathpzc g_{il}{}^\\alpha_\\gamma(\\tau,\\tau'') f^{a\\gamma}{}_\\delta \\mathpzc g_{lj}{}^\\delta_\\beta(\\tau'',\\tau')+\\mathcal O(\\kappa),\\quad \\nabla_l^a(\\tau'') \\mathpzc w_{ij}{}^\\alpha_\\beta(\\tau,\\tau') = 0+\\mathcal O(\\kappa),\n\\end{equation}\n\n\\newpage\n\\end{widetext} \\noindent\nwhere here we have suppressed the infinitesimal regulator. In this way we can systematically construct the functionals $\\Sigma[\\mathpzc g,\\mathpzc w]$ and $ \\cW[\\mathpzc g,\\mathpzc w]$ to any desired order. We provide the second order contributions explicitly in Appendix~\\ref{app:2nd}. \n\n\n\n\n\nWe have thus succeeded in our goal. We have obtained a series of successive approximations for the Green's function, mirroring the self-energy expansion of the canonical case. Let us summarise. Upon expanding $\\Sigma[\\mathpzc g,\\mathpzc w]$ and $\\cW[\\mathpzc g,\\mathpzc w]$ to some desired order, the zero source limit is straightforwardly taken as $J$ enters only through $\\mathpzc g_0$. Equations~\\eqref{gTI} and \\eqref{wT} then provide a set of coupled self-consistent equations for $\\mathpzc g$ and $\\mathpzc w$. The solutions can be combined to give $\\mathcal G$, and the electronic Green's function is in turn obtained from Eqs.~\\eqref{GeGQ}.\n\nThe simplest approximation is to take $\\mathcal G=\\mathpzc g_0 \\mathpzc w_0$. 
We will examine this in the following section, and find that it captures an essential feature of ${\su(2|2)}$ degrees of freedom: a splitting of the electronic dispersion. The next approximation is to take just the first order contributions to the self-energy and adaptive spectral weight from Eqs.~\eqref{SW1}. This is the analogue of the Hartree-Fock approximation for the canonical case, see Eq.~\eqref{HF}, and likewise captures static correlations. The effects of collisions can be examined by including the second order contributions of Eqs.~\eqref{SW2}. \n\n\n\n\n \n\n\n\n\section{A controlled approximation} \label{sec:approx}\n\n\nIn the previous section we have derived a systematic framework for characterising interacting electrons with ${\su(2|2)}$ degrees of freedom. We now take the simplest approximation, $\mathcal G=\mathpzc g_0 \mathpzc w_0$, and investigate the resulting electronic Green's function. The unexpanded $\mathpzc g_0$ and $\mathpzc w_0$ contain explicit dependence on $\kappa$ through the structure constants $f^{\alpha\beta}{}_I$ and $f^{\Theta\alpha}{}_\beta$, expressed in Appendix~\ref{app:params}. That is, we are not setting $\kappa=0$, but rather are truncating the expansions of $\Sigma$ and $\cW$ at the zeroth order.\nThe full dependence on the Hubbard interaction, as well as some of the hopping correlations, enter already here. The effect of the approximation is to suppress all spin and charge correlations. In particular, the Heisenberg spin-exchange interaction does not contribute at this order. \n\n\n\nFirst we obtain the matrix Green's function of the ${\bm q}$. 
Setting the sources to zero, and recombining $\\mathpzc g_0$ and $\\mathpzc w_0$, the equation of motion\nin this approximation becomes\n\\begin{widetext}\n\\begin{equation}\n\\begin{split}\n\\sum_k \\Big[ \n\\delta_{ik}\\big(-\\delta^\\alpha_\\gamma \\partial_{\\tau} \n - \\mu_a f^{a\\alpha}{}_\\gamma + \\tilde{U} f^{\\Theta\\alpha}{}_\\gamma \\big) \n+ f^{\\alpha\\delta}{}_I t_{ik,\\delta\\gamma}\n\t\\Big] \\mathcal G_{kj}{}^\\gamma_\\beta(\\tau,\\tau')\n= \\delta(\\tau-\\tau')\\delta_{ij} f^{\\alpha\\gamma}{}_I K_{\\gamma\\beta}.\n\\end{split}\n\\end{equation}\n\\end{widetext}\n\\noindent\nIt is sufficient to restrict the greek indices to run over $\\{1,2,3,4\\}$.\nFourier transforming, performing matrix inversion, and analytically continuing to all non-real $\\omega$, we obtain\n\\begin{equation}\n{G_p}^\\alpha_\\beta(\\omega)= \n \\left(\\begin{array}{cccc}\n {\\mathsf g}^- & {\\mathsf h} & 0 & 0 \\\\\n {\\mathsf h} & {\\mathsf g}^+ & 0 & 0 \\\\\n 0 & 0 & {\\mathsf g}^- & -{\\mathsf h} \\\\\n 0 & 0 & -{\\mathsf h} & {\\mathsf g}^+ \\\\\n \\end{array}\\right),\n\\end{equation}\nwhere\n\\begin{equation}\n\\begin{split}\n{\\mathsf g}^\\pm&=\\frac{(1+\\kappa^2)(\\omega+\\mu) -\\frac{2\\kappa ^2 \\varepsilon_p}{1+\\kappa^2} \\pm\\kappa^2 (\\tilde{U}+\\tilde{\\lambda} \\varepsilon_p)}{4\\big(\\omega+\\mu -\\frac{\\varepsilon_p}{1+\\kappa^2}\\big) \\big(\\omega+\\mu -\\frac{\\kappa ^2\\varepsilon_p}{1+\\kappa^2} \\big)-\\kappa^2(\\tilde{U}+\\tilde{\\lambda} \\varepsilon_p)^2},\\\\\n{\\mathsf h}&=\\frac{(1-\\kappa ^2) (\\omega+\\mu )}{4\\big(\\omega+\\mu - \\frac{\\varepsilon_p}{1+\\kappa^2} \\big) \\big(\\omega+\\mu - \\frac{\\kappa^2 \\varepsilon_p}{1+\\kappa^2} \\big)-\\kappa^2 (\\tilde{U}+\\tilde{\\lambda} \\varepsilon_p)^2},\n\\end{split}\n\\end{equation}\nwith non-interacting dispersion $\\varepsilon_p =- \\frac{t}{\\mathcal V} \\sum_{\\braket{i,j}} e^{\\mathrm i p(i-j)}$, and $\\tilde{\\lambda}=\\lambda\/\\kappa$. 
\n \n\n\nThe electronic Green's function can now be immediately obtained via Eqs.~\\eqref{GeGQ}, yielding\n\\begin{equation}\\label{eqG}\nG_{p\\sigma}(\\omega) = \\frac{1}{\\omega +\\mu - \\frac{\\varepsilon_p}{1+\\kappa^2} - \\frac{\\kappa^2}{4} \\frac{(\\tilde{U}+\\tilde{\\lambda} \\varepsilon_p)^2}{\\omega +\\mu -\\frac{\\kappa^2 \\varepsilon_p}{1+\\kappa^2}}}.\n\\end{equation}\nWe choose $t$, $\\kappa$, $\\tilde{U}$ and $\\tilde{\\lambda}$ to parametrise the model, and ascribe the following roles: $t$ controls the strength of dispersion, $\\kappa$ controls the strength of correlations, $\\tilde{U}$ controls the band splitting, and $\\tilde{\\lambda}$ controls asymmetry. They are related to the original parameters of the model by \n\\begin{equation}\\label{chparams}\n\\kappa= \\sqrt{\\frac{t-t_\\pm}{t+t_\\pm}},~~\\tilde{U}=\\frac{U}{\\kappa},~~\\tilde{\\lambda}=\\frac{\\lambda}{\\kappa}.\n\\end{equation}\nWhile it may be tempting to view the term with prefactor $\\frac{\\kappa^2}{4}$ in the denominator as a self-energy, we suggest this would be a misinterpretation of the degrees of freedom. This is clarified by rewriting Eq.~\\eqref{eqG} as \n\\begin{equation}\\label{eqGS}\nG_{p\\sigma}(\\omega) = \\frac{a_{p{\\circ}} }{\\omega+\\mu - \\omega_{p{\\circ}}} + \\frac{a_{p{\\bullet}}}{\\omega+\\mu - \\omega_{p{\\bullet}}},\n\\end{equation}\nwhich makes manifest the splitting of Eq.~\\eqref{inv_rels}. 
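The two-pole form follows from a partial-fraction expansion of Eq.~\eqref{eqG}: the denominator is quadratic in $\omega+\mu$, giving two simple poles whose residues sum to one. A quick numerical check, with hypothetical parameter values for a single momentum $p$:

```python
import numpy as np

# Hypothetical parameter values at a single momentum p.
kappa, U_t, lam_t, eps_p, mu = 0.5, 3.0, 0.2, -1.3, 0.0
a = eps_p / (1 + kappa**2)
b = kappa**2 * eps_p / (1 + kappa**2)
c2 = (kappa**2 / 4) * (U_t + lam_t * eps_p)**2

def G(omega):
    """Electronic Green's function, Eq. (eqG), as written."""
    z = omega + mu
    return 1.0 / (z - a - c2 / (z - b))

# The denominator is quadratic in z = omega + mu: two simple poles, two bands.
z1, z2 = np.roots([1.0, -(a + b), a * b - c2])
a1 = (z1 - b) / (z1 - z2)   # residue = spectral weight of one band
a2 = (z2 - b) / (z2 - z1)

w = 0.37 + 0.2j             # arbitrary test frequency off the real axis
print(np.isclose(a1 + a2, 1.0))                                   # True
print(np.isclose(G(w), a1 / (w + mu - z1) + a2 / (w + mu - z2)))  # True
```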
\nThere are now two dispersive bands, which we label with $\\nu={\\mathlarger{\\mathlarger{\\circ}}},{\\mathlarger{\\mathlarger{\\bullet}}}$ as follows\n\\begin{equation}\n\\omega_{p\\nu} = \\frac{\\varepsilon_p}{2} +\\frac{\\nu}{2} \\sqrt{\\Big(\\frac{1-\\kappa^2}{1+\\kappa^2}\\Big)^2\\varepsilon_p^2+\\kappa^2 \\big(\\tilde{U}+\\tilde{\\lambda} \\varepsilon_p\\big)^2},\n\\end{equation}\nand the electronic spectral weight is split between them\n\\begin{equation}\na_{p\\nu} = \\frac{1}{2}+\\frac{\\nu}{2}\\frac{\\frac{1-\\kappa^2}{1+\\kappa^2}\\varepsilon_p}{\\sqrt{\\big(\\frac{1-\\kappa^2}{1+\\kappa^2}\\big)^2\\varepsilon_p^2+\\kappa^2 \\big(\\tilde{U}+\\tilde{\\lambda} \\varepsilon_p\\big)^2}},\n\\end{equation}\nwith $a_{p{\\circ}}+a_{p{\\bullet}}=1$.\nThis is in sharp contrast with the canonical perspective, i.e.~conventional band theory, where the entire electronic spectral weight is locked together in a single band. \n\n\n\n\n\\begin{figure}[!]\n\\centering\n\\includegraphics[width=0.99\\columnwidth]{plot_BS.pdf}\n\\caption{\\label{plot_BS}\nThe splitting of the electronic dispersion, exemplified on a square lattice. \nWe focus on the symmetric case $\\tilde{\\lambda}=0$, and $J$ does not contribute at this order of approximation.\nThe left panel shows the electronic density of states (DOS), $\\sum_\\sigma\\int_{BZ}\\frac{d^2p}{(2\\pi)^2} A_{p\\sigma}(\\omega)$, with the contributions of the ${\\mathlarger{\\mathlarger{\\circ}}}$ (blue, lower) and ${\\mathlarger{\\mathlarger{\\bullet}}}$ (red, upper) bands distinguished. \nThe central panel shows the band structure, an intensity plot of the electronic spectral function (with Lorentzian broadening of $10^{-3}$), along the $\\Gamma$-$X$-$M$-$\\Gamma$ high-symmetry path in the Brillouin zone. The right panel shows the spectral weights $a_{p{\\circ}}$ and $a_{p{\\bullet}}$, which are momentum independent at this order of approximation. 
\nThe horizontal lines (a)-(d) indicate slices along which the spectral function is plotted in Fig.~\\ref{plot_FS}. Four examples (i)-(iv) of couplings $\\kappa$ and $\\tilde{U}$ are presented: (i) On leaving the non-interacting point the band structure splits with the introduction of weak and flat dispersion near $\\omega=0$, and the 2d Van Hove singularity of the DOS also splits in two. (ii) When $\\tilde{U}$ is increased above $\\tilde{U}_M=\\frac{8}{1+\\kappa^2}$ the two bands separate and a Mott gap opens. (iii) As the strength of correlated hopping is amplified the two bands overlap significantly for $\\tilde{U}<\\tilde{U}_M$. (iv) As $\\kappa$ approaches 1 the two bands decouple, each with half the weight of an electron, and $\\tilde{U}$ behaves as an additional chemical potential that shifts the bands oppositely in $\\omega$.\n}\n\\end{figure}\n\n\n\nA Mott metal-insulator transition takes place when a gap opens between the two bands, i.e.~when $\\max_p \\omega_{p{\\circ}} = \\min_p \\omega_{p{\\bullet}}$. For $\\tilde{\\lambda}=0$ and $W=2\\max_p \\varepsilon_p=-2\\min_p \\varepsilon_p$, the transition occurs at $\\tilde{U}_M= \\frac{W}{1+\\kappa^2}$. The nature of the transition bears a close resemblance to a band insulator transition, but we emphasise that the essential role of electronic correlations is reflected in the splitting of the electronic spectral weight across the gap. \nThis differs from the Brinkman--Rice description of the Mott transition as the spectral weight $a_{p\\nu}$ does not go continuously to zero as the gap is opened \\cite{PhysRevB.2.4302}, and from the doublon-holon binding description \\cite{doublon-holon} as there is no rearrangement of the degrees of freedom coincident with the Mott transition. 
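The quoted threshold can be checked numerically. Assuming the square-lattice dispersion with $t=1$ (so $W=8$) and $\tilde{\lambda}=0$, the gap between $\max_p\omega_{p{\circ}}$ and $\min_p\omega_{p{\bullet}}$ should open precisely at $\tilde{U}_M=8/(1+\kappa^2)$ — a minimal sketch:

```python
import numpy as np

# Square-lattice band, t = 1: eps_p = -2 cos(px) - 2 cos(py), bandwidth W = 8
p = np.linspace(-np.pi, np.pi, 301)
PX, PY = np.meshgrid(p, p)
eps = -2 * np.cos(PX) - 2 * np.cos(PY)

def band_gap(kappa, U_t):
    """Gap min_p omega_bullet - max_p omega_circ, for lambda-tilde = 0."""
    alpha = (1 - kappa**2) / (1 + kappa**2)
    D = np.sqrt(alpha**2 * eps**2 + kappa**2 * U_t**2)
    return np.min(eps / 2 + D / 2) - np.max(eps / 2 - D / 2)

kappa = 0.5
U_M = 8 / (1 + kappa**2)                 # predicted Mott threshold
assert band_gap(kappa, 0.99 * U_M) < 0   # bands still overlap: metallic
assert band_gap(kappa, 1.01 * U_M) > 0   # Mott gap open: insulating
```

Since $\omega_{p\nu}$ is monotonic in $\varepsilon_p$ here, the band edges sit at $\varepsilon_p=\pm W/2$, which is how the closed form for $\tilde{U}_M$ arises.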
\nThe splitting of the electronic band is reminiscent of the foundational work of Hubbard \\cite{Hubbard1,Hubbard3}, though our approach is very different from the large-$U$ perspective taken there.\n\n\nIt is illustrative to plot the electronic spectral function\n\\begin{equation}\n\\begin{split}\nA_{p\\sigma}(\\omega) &= - \\frac{1}{\\pi} \\im G^\\mathrm{ret}_{p\\sigma}(\\omega)\\\\\n&=a_{p{\\circ}} \\delta(\\omega - \\omega_{p{\\circ}}) + a_{p{\\bullet}}\\delta(\\omega - \\omega_{p{\\bullet}}).\n\\end{split}\n\\end{equation}\n We focus on the example of nearest-neighbour hopping on the square lattice, with dispersion relation $\\varepsilon_p=-2 \\cos p_x -2\\cos p_y$, setting $t=1$. In Fig.~\\ref{plot_BS} we plot the frequency dependence of the spectral function along the $\\Gamma$-$X$-$M$-$\\Gamma$ high-symmetry path in the Brillouin zone for a choice of values of $\\kappa$ and $\\tilde{U}$, with $\\tilde{\\lambda}=0$. We also set $\\mu=0$, but as $\\mu$ enters Eq.~\\eqref{eqGS} solely as a shift of $\\omega$, the results for non-zero chemical potential correspond to translating the plots vertically in $\\omega$. The figure helps to visualise how the two bands emerge from a single band in the non-interacting limit, via hybridisation with an additional band carrying vanishing spectral weight. \nAs the interactions are increased the two bands separate, and for $\\tilde{U}>\\tilde{U}_M$ a Mott gap is observed, with the vanishing of the carrier density at the transition evident through the density of states. \n\n\\begin{figure}[tb]\n\\centering\n\\includegraphics[width=0.7\\columnwidth]{plot_FS.pdf}\n\\caption{\\label{plot_FS}\nPlots of the electronic spectral function throughout the 2d Brillouin zone on the slices (a)-(d) indicated in Fig.~\\ref{plot_BS}. 
The violation of the Luttinger sum rule can be seen by contrasting between (a) and (b), both of which correspond to below half-filling: while less than half the Brillouin zone is enclosed in (a), this is clearly not the case in (b). In (c) and (d) the appearance of two surfaces is in sharp contrast to conventional band theory.\n}\n\\end{figure}\n\nFigure~\\ref{plot_FS} displays cross sections of Fig.~\\ref{plot_BS}, \nshowing the spectral function throughout the 2d Brillouin zone for a choice of $\\mu$, $\\kappa$ and $\\tilde{U}$. \nThis reveals surfaces which violate the Luttinger sum rule \\footnote{The generalised sense of Luttinger's theorem argued in Ref.~\\cite{Dzyaloshinskii_2003} is also violated, with the exception of the particle-hole symmetric case (i.e. here when $\\tilde{\\lambda}=\\mu=0$) for which it has been proven to be true \\cite{PhysRevB.75.104503,PhysRevB.96.085124}.}, clearly evidenced by contrasting Figs.~\\ref{plot_FS}.(a) and \\ref{plot_FS}.(b).\nThis is indeed reasonable as the sum rule relies on the existence of the Luttinger-Ward functional, which is tied to canonical characterisation of interactions \\cite{LuttingerWard_1960,Luttinger_1960}.\nThe violation can be understood as a consequence of the non-canonical nature of the ${\\su(2|2)}$ degrees of freedom, for which the non-trivial spectral weight unties the link between electron density and Luttinger volume.\n\n\nIn summary, we have found that ${\\su(2|2)}$ degrees of freedom govern a regime of behaviour which is fundamentally distinct from a Fermi liquid. \n\n\n\n\n\n\\section{Discussion} \\label{sec:disc}\n\nThe standard way to characterise the behaviour of interacting electrons is through perturbation theory from the non-interacting limit, built upon canonical degrees of freedom \\cite{Abrikosov}. This logic is supported both by Landau's arguments on the robustness of the Fermi liquid \\cite{landau1957,landau1959}, and Shankar's renormalisation group analysis \\cite{Shankar_1994}. 
The approach has had great success; it underlies our understanding of a wide variety of materials.\n\n\nHere we have identified a distinct way to characterise the electronic degree of freedom, and have demonstrated that it permits a description of a regime of behaviour different from the Fermi liquid. \nWe have cast the electronic problem through the generators of the graded Lie algebra ${\\su(2|2)}$, and have shown how this provides a way to systematically organise the effects of electronic correlations. We have focused on the leading contribution, which reveals a splitting in two of the electronic band; see Fig.~\\ref{plot_BS}. \nThe Luttinger sum rule is violated, and a carrier-number-vanishing Mott metal-insulator transition is exhibited.\nThis reveals a scenario beyond Shankar's analysis, as the latter is formulated with canonical fermion coherent states, which lack the freedom to capture the splitting of Eq.~\\eqref{inv_rels}.\n\n\nThe canonical description is expected to capture metallic behaviour for $\\kappa\\ll U$, i.e.~when any correlations in hopping are weak. We expect that ${\\su(2|2)}$ degrees of freedom may govern behaviour when the parameters $\\tilde{U}$, $\\tilde{\\lambda}$ of Eq.~\\eqref{chparams} are $\\mathcal O(1)$, in particular for $U\\sim \\kappa$ at small $\\kappa$. We represent this schematically in Fig.~\\ref{fig_kU}. There is an argument to be made that the two regimes extend to either side of the point $\\kappa=1$, $U=\\infty$. 
On the one hand, one can consider departing from the degenerate atomic limit through a continuous unitary transformation organised in powers of $t\/U$ \\cite{Stein_1997}, \nbut this breaks down when $t_\\pm\\sim t\/U$, i.e.~close to $\\kappa=1$.\nOn the other, the parametrisation of Eq.~\\eqref{eq:CHparam} is \ndiscontinuous at $t=t_\\pm=0$, and equivalence with \\eqref{eq:CH} requires that $t$ is not taken to zero first, i.e.~to approach the atomic limit keeping $\\kappa\\sim1$.\nThese singular behaviours can be attributed to the fact that the Hubbard interaction commutes with correlated hopping when $\\kappa=1$. \nSee Ref.~\\cite{Hidden_structure} for a closely related discussion. Let us also comment here that the framework pursued by Shastry sets $U=\\infty$ from the outset, and this plays an important role throughout his analysis \\cite{Shastry_2011,Shastry_2013}. \n\n\n\n\\begin{figure}[tb]\n\\centering\n\\includegraphics[width=0.85\\columnwidth]{fig_kU2.pdf}\n\\caption{\\label{fig_kU}\nA schematic depiction of how metallic behaviour may be governed by either canonical or ${\\su(2|2)}$ degrees of freedom in different regions of parameter space.\nHere correlated hopping, controlled by $\\kappa$, and onsite repulsion, controlled by $U$, compete to organise the electronic degree of freedom in distinct ways. While the ${\\su(2|2)}$ regime is restricted to small $U$ for small $\\kappa$, it may extend to large $U$ when $\\kappa$ is $\\mathcal O(1)$. 
We speculate on the nature of the `transition' between the two regimes towards the end of the Discussion.}\n\\end{figure}\n\n\n\nAn important question is whether there exist materials whose behaviour is governed by ${\\su(2|2)}$.\nWe consider the pseudogap regime found in the cuprates to be an ideal candidate, \nas it is a metallic state with a distinct non-Fermi liquid character \\cite{Timusk_1999,Hashimoto_2014,Fradkin_2015}.\nThe cuprates are charge-transfer insulators \\cite{Fujimori_1984,Zaanen_1985}, and their electronic structure suggests they should admit an effective single-orbital description with the eliminated low-lying ligand $p$ orbital inducing significant correlations in the hopping amplitudes \\cite{Zhang_1988,Micnas89,MarsiglioHirsch,Sim_n_1993}. \nQuantum oscillation experiments indicate a clear violation of the Luttinger sum rule as the pseudogap regime is entered, and furthermore show that the carrier density vanishes as the Mott transition is approached; see e.g.~Fig.~4.b of Ref.~\\cite{Badoux_2016}.\n \n \n \nA next step is to go beyond the leading approximation upon which we focused in Sec.~\\ref{sec:approx}. Indeed, this is important to fully characterise the ${\\su(2|2)}$ regime of behaviour. Incorporating the first-order contributions to the self-energy and adaptive spectral weight from Eqs.~\\eqref{SW1} will capture the leading effects of static spin and charge correlations. This is the analogue of the Hartree-Fock approximation for the canonical case. In the context of cuprate physics it would be interesting to investigate if the phenomenological Yang--Rice--Zhang ansatz \\cite{YRZ,YRZ_rev}, which bears a similar form to Eq.~\\eqref{eqG}, can be justified in this way.\nAnother direction is to establish the thermodynamic properties of the regime. 
\nWe hope such studies will clarify the relevance of ${\\su(2|2)}$ degrees of freedom for characterising the behaviour of strongly correlated materials.\n \n \n The details of the underlying lattice have not played an important role in our analysis. In practice, it is good to have translational invariance as Eqs.~\\eqref{SgWrec} generate a local expansion, which is most conveniently handled in momentum space. \n\n\n\nA special case is when the lattice is a one-dimensional chain. Here the low-energy degrees of freedom are generically spin-charge separated \\cite{HaldaneLL,GiamarchiLL}. Thus we do not expect ${\\su(2|2)}$ degrees of freedom to govern behaviour there, just as canonical fermions do not govern behaviour away from the non-interacting limit \\cite{DL74}. \nInstead, degrees of freedom in one dimension are truly interacting. They can be characterised by their behaviour at integrable limits, where scattering becomes completely elastic but remains non-trivial \\cite{ZAMOLODCHIKOV1979253}, \nallowing for a complete description of the energy spectrum in terms of stable particles \\cite{Bethe31,TakBook}. The classification of such integrable models is understood within the framework of algebraic Bethe ansatz \\cite{Faddeev_2016}. It is noteworthy that the primary integrable models relevant for interacting electrons \\cite{Hbook,EKS,AlcarazBariev,HS1} descend from an R-matrix governed by the exceptional central extension of ${\\su(2|2)}$ symmetry we use here \\cite{Beisert07,HS1}, or a q-deformation thereof \\cite{BeisertKoroteev}. Indeed, the present work was greatly motivated by a combined study of these models \\cite{Hidden_structure}.\n\n\n\nAnother important case is that of infinite dimensions, i.e. when the coordination number of the lattice diverges. While the notion of local degrees of freedom disappears in this limit, dynamical correlations can survive. 
The frequency dependent electronic Green's function of the Hubbard model can be determined in an exact way here through dynamical mean-field theory \\cite{Metzner_1989,DMFT}. There exist works which incorporate correlated hopping into the formalism \\cite{PhysRevB.67.075101,StanescuKotliar,PEREPELITSKY2013283}, but unfortunately we have not found an explicit study of the effect of correlated hopping on the electronic spectral function. We hope that this may be achieved, as it will provide a complementary controlled perspective on our description of the Mott transition.\n\n\nThe splitting of the electron in Eq.~\\eqref{inv_rels} admits an interpretation in terms of slave particles. Slave bosons $\\Obd_{\\s}$ and fermions $\\Ofd_{\\nu}$ fractionalise the canonical fermion generators as \n\\begin{equation}\\label{cansp}\n{\\bm c}^\\dagger_{\\sigma} ={\\bm b}^\\dagger_{\\sigma} {\\bm f}_{{\\circ}} + \\epsilon_{\\sigma\\s'} {\\bm f}^\\dagger_{{\\bullet}} {\\bm b}_{\\sigma'},\n\\end{equation}\nor alternatively by interchanging $\\Obd_{\\s} \\leftrightarrow \\Ofd_{\\nu}$ \\cite{Barnes_1976,Coleman_1984,Arovas_1988,Yoshioka_1989}.\nThey are often invoked to characterise strongly correlated electrons \\cite{Senthil_2003,LNWrev}. \nThe $\\OQd_{\\s\\nu}$ of ${\\su(2|2)}$ can be viewed as a decoupling of the two contributions to Eq.~\\eqref{cansp} as follows \n\\begin{equation}\n{\\bm q}^\\dagger_{\\sigma\\nu}=\\frac{1+\\kappa}{2}{\\bm f}^\\dagger_{\\nu} {\\bm b}_{{\\bar{\\sigma}}}+\\frac{1-\\kappa}{2}\\epsilon_{{\\bar{\\sigma}}\\sigma'}\\epsilon_{\\nu\\nu'} {\\bm b}^\\dagger_{\\sigma'} {\\bm f}_{\\nu'}.\n\\end{equation}\nDescriptions of correlated matter where deconfined slave particles govern the behaviour require emergent gauge fields \\cite{BaskaranAnderson}. This is not the case with the $\\OQd_{\\s\\nu}$ however, which can be viewed as binding the $\\Obd_{\\s}$ and $\\Ofd_{\\nu}$ to gauge invariant degrees of freedom. 
Such a binding has been considered from a phenomenological perspective in the context of cuprate physics \\cite{PhysRevLett.76.503,PhysRevB.71.172509}.\n\n\n\n\n\nFinally we offer a more general perspective. We have argued that there exist two distinct regimes of electronic behaviour, governed either by canonical $\\alg{su}(1|1)\\otimes\\alg{su}(1|1)$ or non-canonical ${\\su(2|2)}$ degrees of freedom, which are Fermi liquid and non-Fermi liquid respectively, see Fig.~\\ref{fig_kU}. This raises the question: what happens in between? A phase transition in a conventional sense does not seem possible, as there is no clear notion of order parameter. Instead, each regime may be characterised by a quasi-particle description, where correlations are controlled in perturbative manner by distinct sets of degrees of freedom. While it is possible to connect the two regimes in a controlled way through the non-interacting point, this is highly singular due to the enhanced symmetry there, see Fig.~\\ref{plot_BS}.(i). Instead, we suggest that connecting the two regimes along a generic path requires the breakdown of a quasi-particle description in between. This mirrors a previous proposal in an identical setting in one dimension \\cite{Hidden_structure}. \n\n\nMore specifically, the robustness of the Fermi liquid owes to the fact that the lifetimes of the electronic quasi-particles scale as $(\\omega-\\varepsilon_F)^{-2}$, guaranteeing their stability in the vicinity of the Fermi surface. A `transition' may however occur if correlations shrink the domain over which this scaling is valid to zero. That is, the Fermi liquid may be destroyed by `coherence closing', while the spectrum remains gapless. Such a quantum chaotic regime would permit a rearrangement of the spectrum, allowing in turn for a rearrangement of the electronic degree of freedom.\n\n\nAgain, the cuprates offer a prime candidate for identifying such behaviour in a material setting. 
They exhibit a `strange metal' regime, lying between the pseudogap and Fermi liquid regimes in their phase diagram, where the featureless linear in temperature resistivity has defied a quasi-particle interpretation \\cite{Martin_1990,Chien_1991,Hussey_2011}. \nOur description of `coherence closing' is consistent with the phenomenological marginal Fermi liquid description of this regime \\cite{marginalFL}. \nEstablishing the necessity for a breakdown of a quasi-particle description in this way would provide a fresh starting point for understanding the anomalous behaviour there. \n\nFrom the perspective of either set of degrees of freedom, the intermediate regime is where correlations grow out of control. Characterising such behaviour requires an alternative framework, not built upon \nunderlying degrees of freedom. An intriguing possibility is holographic duality, which offers a controlled description through the semi-classical regime of a dual gravity theory \\cite{zaanen2015holographic,hartnoll2016holographic}. \n\n\n\n\n\n\n\n\n\\section{Conclusion} \\label{sec:conc}\n\n\nCharacterising the behaviour of interacting electrons is an outstanding challenge, despite many decades of effort. \nHere we have offered a novel approach, based around characterising the electronic degree of freedom.\n\nWe have argued that strong electronic correlations are governed by the graded Lie algebra ${\\su(2|2)}$, as opposed to the canonical fermion algebra which underlies the Fermi liquid. \nWe have derived a controlled description by obtaining a series of successive approximations for the electronic Green's function, mirroring the self-energy expansion of the canonical case. 
\nFocusing on the leading approximation, we found a splitting in two of the electronic band, a violation of the Luttinger sum rule, and a Mott transition when the split bands separate.\n\n\nMuch work is required to further characterise this non-Fermi liquid regime.\nUltimately, we hope this will lead to efficient techniques for understanding materials whose behaviour is driven by strong electronic correlations.\n\n\n\n\n\n\n\n\\section*{Acknowledgments}\nWe thank Jean-S\\'ebastien Caux, Philippe Corboz, Sergey Frolov, Mark Golden, Enej Ilievski, Jasper van Wezel and Jan Zaanen for useful discussions. Support from the Foundation for Fundamental Research on Matter (FOM) and the Netherlands Organization for Scientific Research (NWO) is gratefully acknowledged.\n\n\n\\section{Introduction}\n\nInspired by the ground-breaking results coming from the Atacama Large\n(sub)Millimeter Array, and the Jansky Very Large Array, the\nastronomical community is considering a future large area radio array\noptimized to perform imaging of thermal emission down to\nmilliarcsecond scales. Currently designated the `Next Generation Very\nLarge Array,' such an array would entail ten times the effective\ncollecting area of the JVLA and ALMA, operating from 1GHz to 115GHz,\nwith ten times longer baselines (300km) providing mas-resolution, plus\na dense core on km-scales for high surface brightness imaging. Such an\narray bridges the gap between ALMA, a superb submillimeter array, and\nthe future Square Kilometer Array phase 1 (SKA-1), optimized for few\ncentimeter and longer wavelengths. The ngVLA opens unique new\nparameter space in the imaging of thermal emission from cosmic objects\nranging from protoplanetary disks to distant galaxies, as well as\nunprecedented broad band continuum polarimetric imaging of non-thermal\nprocesses. 
\n\nWe are considering the current VLA site as a possible location, in the\nhigh desert plains of the Southwest USA. At over 2000m elevation, this\nregion provides good observing conditions for the frequencies under\nconsideration, including reasonable phase stability and opacity at 3mm\nover a substantial fraction of the year (see JVLA and ngVLA memos by\nOwen 2015, Clark 2015, Carilli 2015, Butler 2002).\n\nOver the last year, the astronomical community has been considering\npotential science programs that would drive the design of a future\nlarge area facility operating in this wavelength range. These goals\nare described in a series of reports published as part of the ngVLA\nmemo series, and can be found at:\n\n\\begin{center}\n{\\url{http:\/\/library.nrao.edu\/ngvla.shtml}}\n\\end{center}\n\n\\begin{itemize}\n\n\\item Isella et al. 2015, 'Cradle of Life' (ngVLA Memo 6)\n\n\\item Leroy et al. 2015, 'Galaxy Ecosystems' (ngVLA Memo 7)\n\n\\item Casey et al. 2015, 'Galaxy Assembly through Cosmic Time' (ngVLA Memo 8)\n\n\\item Bower et al. 2015, 'Time Domain, Cosmology, Physics' (ngVLA Memo 9)\n\n\\end{itemize}\n\nThe white papers will be expanded with new ideas, and more detailed\nanalyses, as the project progresses (eg. a white paper on\nmagneto-plasma processes on scales from the Sun to clusters of\ngalaxies is currently in preparation). In the coming months, the\nproject will initiate mechanisms to further expand the ngVLA science\nprogram, through continued community leadership.\n\nSuch a facility will have broad impact on many of the paramount\nquestions in modern astronomy. The science working groups are in the\nprocess of identifying a number of key science programs that push the\nrequirements of the telescope. 
Three exciting programs that have\ncome to the fore thus far, and {\\sl that can only be done with the\nngVLA}, include:\n\n\\begin{itemize}\n\n\\item {\\bf Imaging the 'terrestrial-zone' of planet formation in\nprotoplanetary disks}: Probing dust gaps on 1AU scales at the distance\nof the nearest major star forming regions (Taurus and Ophiuchus,\ndistance $\\sim$ 130pc) requires baselines 10 times that of the JVLA,\nwith a sensitivity adequate to reach a few K brightness at 1cm\nwavelength and 9mas resolution. Note that these inner regions of\nprotoplanetary disks are optically thick at shorter wavelengths (see\nsection 4.1). The ngVLA will image the gap-structures indicating\nplanet formation on solar-system scales, determine the growth of\ngrains from dust to pebbles to planets, and image accretion onto the\nproto-planets themselves.\n\n\\item {\\bf ISM and star formation physics on scales from GMCs down to\ncloud cores throughout the local super-cluster}: a centrally condensed\nantenna distribution on scales of a few km (perhaps up to 50\\%\nof the total collecting area) is required for wide field, high\nsurface brightness (mK) sensitivity. The ngVLA covers the spectral\nrange richest in the ground state transitions of the most important\nmolecules in astrochemistry and astrobiology, as well as key thermal\nand non-thermal continuum emission processes relating to star\nformation. The ngVLA will perform wide field imaging of line and\ncontinuum emission on scales from GMCs (100pc) down to clump\/cores\n(few pc) in galaxies out to the Virgo Cluster.\n\n\\item {\\bf A complete census of the cold molecular gas fueling the\nstar formation history of the Universe back to the first galaxies:}\noctave bandwidth at $\\sim 1$cm wavelength is required for large\ncosmic volume surveys of low order CO emission from distant galaxies\n(the fundamental tracer of total gas mass), as well as for dense gas\ntracers such as HCN and HCO+. 
The spatial resolution and sensitivity\nwill also be adequate to image gas dynamics on sub-kpc scales and detect\nmolecular gas masses down to dwarf galaxies.\n\n\\end{itemize}\n\nIn this summary paper, we present a general description of the\nproject, basic design goals for sensitivity and resolution, and the\nunique observational parameter space opened by such a revolutionary\nfacility. We emphasize that the ngVLA is a project under\ndevelopment. While the broad parameter space is reasonably well\ndelineated, there are many issues to explore, ranging from element\ndiameter to the number of frequency bands to the detailed array\nconfiguration, including consideration of VLBI-length baselines (see\nsection 2.2). The science white papers are identifying the primary\nscience use cases that will dictate the ultimate design of the\ntelescope, in concert with the goal of minimization of construction\nand operations costs. The requirements will mature with time, informed\nby ALMA, the JVLA, the imminent JWST and thirty meter-class optical\ntelescopes, and others.\n\n\\section{Telescope specifications}\n\n\\subsection{Basic array}\n\nIn Table 1 we summarize the initial telescope specifications for the\nngVLA. As a first pass, we present numbers for an 18m diameter\nantenna, although the range from 12m to 25m is being considered. A\nkey design goal is good antenna performance at higher frequency,\neg. at least 75\\% efficiency at 30GHz. The nominal frequency range of\n1GHz to 115GHz is also under discussion. The bandwidths quoted are\npredominantly 2:1, or less, although broader bandwidths are being\ninvestigated. Receiver temperatures are based on ALMA and VLA\nexperience. We emphasize that these specifications are a first pass at\ndefining the facility, and that this should be considered an evolving\nstudy.\n\nBrightness sensitivity for an array is critically dependent on the\narray configuration. We are assuming an array of 300 antennas in this\ncurrent configuration. 
The ngVLA must balance the competing desires of\ngood point source sensitivity at full resolution on few-hundred-km\nbaselines, and good surface brightness sensitivity on scales\napproaching the primary beam size. Clark \\& Brisken (2015) explore\ndifferent array configurations that might provide a reasonable\ncompromise through judicious weighting of the visibilities for a given\napplication (see eg. Lal et al. 2010 for similar studies for the\nSKA). It is important to recognize that for any given\nobservation, from full resolution imaging of small fields, to imaging\nstructure on scales approaching that of the primary beam, some\ncompromise will have to be accepted. \n\nFor the numbers in Table 1, we have used the Clark\/Conway\nconfigurations described in ngVLA memos 2 and 3. Very briefly, this\narray entails a series of concentric 'fat-ring' configurations out to\na maximum baseline of 300km, plus about 20\\% of the area in a compact\ncore in the inner 300m. The configuration will be a primary area for\ninvestigation in the coming years. 
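As a rough cross-check of the sensitivity entries in Table 1 (a sketch, not the project's actual noise calculation), the naturally-weighted point-source rms follows from the standard dual-polarisation radiometer equation, and the brightness-temperature sensitivity from the Rayleigh--Jeans conversion over a Gaussian synthesized beam. All input numbers are taken from the 30GHz column of Table 1:

```python
import numpy as np

k_B = 1.380649e-23                 # Boltzmann constant, J/K
Jy = 1e-26                         # W m^-2 Hz^-1

# 30 GHz column of Table 1
A_eff, T_sys = 5.9e4, 45.0         # effective area (m^2) and system temperature (K)
bw, t_int = 20e9, 3600.0           # bandwidth (Hz) and integration time (s)
theta = 9.2e-3 / 206265.0          # 9.2 mas synthesized beam, in radians

# Naturally weighted, dual-polarisation point-source rms (radiometer equation)
sefd = 2 * k_B * T_sys / A_eff / Jy        # array SEFD in Jy
rms = sefd / np.sqrt(2 * bw * t_int)       # rms in Jy

# Rayleigh-Jeans brightness temperature of the tabulated 0.39 uJy rms
nu = 30e9
Omega = np.pi * theta**2 / (4 * np.log(2))  # Gaussian beam solid angle, sr
T_B = 0.39e-6 * Jy * (3e8 / nu)**2 / (2 * k_B * Omega)

print(f"natural-weighting rms ~ {rms * 1e6:.2f} uJy; T_B rms ~ {T_B:.1f} K")
```

This yields roughly 0.18 $\mu$Jy and 6K: the brightness-temperature entry of Table 1 is reproduced, while the tabulated 0.39 $\mu$Jy continuum rms sits about a factor of two above the idealized natural-weighting estimate, consistent with the Briggs R=0 weighting penalty noted in the table footnotes.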
We have investigated different\nBriggs weighting schemes for specific science applications, and find\nthat the Clark\/Conway configuration provides a reasonable starting\ncompromise for further calculation (see notes to Table 1).\n\n\\begin{table}\n\\footnotesize\n\\caption{Next Generation VLA nominal parameters}\n\\label{tlab}\n\\begin{tabular}{lccccc}\\hline\n~ & 2GHz & 10GHz & 30GHz & 80GHz & 100GHz \\\\ \n\\hline\nField of View FWHM (18m$^a$) arcmin & 29 & 5.9 & 2 & 0.6 & 0.51 \\\\\nAperture Efficiency (\\%) & 65 & 80 & 75 & 40 & 30 \\\\\nA$_{eff}^b$ x$10^4$ m$^2$ & 5.1 & 6.2 & 5.9 & 3.1 & 2.3 \\\\\nT$_{sys}^c$ K & 29 & 34 & 45 & 70 & 80 \\\\\nBandwidth$^d$ GHz & 2 & 8 & 20 & 30 & 30 \\\\\nContinuum rms$^e$ 1hr, $\\mu$Jy bm$^{-1}$ & 0.93 & 0.45 & 0.39 & 0.96 & 1.48 \\\\\nLine rms 1hr, 10 km s$^{-1}$, $\\mu$Jy bm$^{-1}$ & 221 & 70 & 57 & 100 & 130 \\\\\nResolution$^f$ FWHM milliarcsec & 140 & 28 & 9.2 & 3.5 & 2.8 \\\\\nT$_B^g$ rms continuum 1hr K & 14 & 7 & 6 & 15 & 23 \\\\\nLine$^h$ rms 1hr, $1\"$, 10 km s$^{-1}$, $\\mu$Jy bm$^{-1}$ & 340 & 140 & 240 & 860 & -- \\\\\nT$_B^i$ rms line, 1hr, $1\"$, 10 km s$^{-1}$, K & 100 & 1.8 & 0.32 & 0.17 & -- \\\\\n\\hline\n\\vspace{0.1cm}\n\\end{tabular}\n$^a$Under investigation: antenna diameters from 12m to 25m are being considered. \\\\ \n$^b$300 x 18m antennas with given efficiency. \\\\\n$^c$Current performance of JVLA below 50GHz. Above 70GHz we assume the T$_{sys}$ =60K value\nfor ALMA at 86GHz, increased by 15\\% and 25\\%, respectively, due to \nincreased sky contribution at 2200m. \\\\\n$^d$Under investigation. For much wider bandwidths, system temperatures are \nlikely to be larger. \\\\\n$^e$Noise in 1hour for given continuum bandwidth for a Clark\/Conway configuration \n(ngVLA memo 2 and 3) scaled to a maximum baseline of 300km,\nusing Briggs weighting with R=0. Using R=1 decreases the noise by a factor 0.87, \nand using R=-1 increases the noise by a factor 2.5. 
\\\\\n$^f$Synthesized beam for a Clark\/Conway configuration scaled to a\nmaximum baseline of 300km, using Briggs weighting with R=0. For R=1, the beam size increases\nby a factor 1.36, and for R=-1 the beam size decreases by a factor 0.63. \\\\\n$^g$Continuum brightness temperature corresponding to point source sensitivity (row 6) and resolution of Clark\/Conway configuration, using Briggs weighting with R = 0 (row 8). \\\\\n$^h$Line rms in 1hr, 10 km s$^{-1}$, after tapering to $1\"$ resolution for the Clark\/Conway configuration. \\\\\n$^i$Line brightness temperature rms in 1hr, 10 km s$^{-1}$, after tapering to $1\"$ resolution for the Clark\/Conway configuration. \\\\\n\\end{table}\n\n\\subsection{VLBI implementation}\n\nThe science white papers present a number of compelling VLBI\nastrometric science programs made possible by the increased\nsensitivity of the ngVLA. These include: Local Group cosmology\nthrough measurements of proper motions of nearby galaxies, delineation\nof the full spiral structure of the Milky Way, and measuring the\nmasses of supermassive black holes and H$_0$.\n\nThe exact implementation of interferometry with the\nngVLA on baselines longer than the nominal 300km array remains under\ninvestigation. These astrometric programs require excellent\nsensitivity per baseline, but may not require dense coverage of the UV\nplane, since high dynamic range imaging may not be required.\n\nOne possible implementation would be to use the ngVLA as an\nultra-sensitive, anchoring instrument, in concert with radio \ntelescopes across the globe. Such a model would parallel the\nplanned implementation for submm VLBI, which employs the\nultra-sensitive phased ALMA, plus single dish submm telescopes around\nthe globe, to perform high priority science programs, such as imaging\nthe event horizons of supermassive black holes (Akiyama et\nal. 2015). 
A second possibility would be to include out-lying stations\nwithin the ngVLA construction plan itself, perhaps comprising up to\n20\\% of the total area out to trans-continental baselines. The cost,\npracticability, and performance of different options for VLBI will be\nstudied in the coming year.\n\n\\section{New Parameter Space}\n\nFigure 1 shows one slice through the parameter space covered by the\nngVLA: resolution versus frequency, along with other existing and\nplanned facilities. The maximum baselines of the ngVLA imply a\nresolution of better than 10mas at 1cm. As we shall see below, coupled\nwith the high sensitivity of the array, this resolution provides a\nunique window into the formation of planets in disks on scales of our\nown Solar system at the distance of the nearest active star forming\nregions, eg. Taurus and Ophiuchus.\n\nFigure 2 shows a second slice through parameter space: effective\ncollecting area versus frequency. In this case, we have not included\nmuch higher and lower frequencies, eg. the SKA-1 will extend\nto much lower frequencies (below 100MHz, including SKA-Low), while ALMA\nextends up to almost a THz.\n\n\\begin{figure}[tbh]\n\\begin{center}\n\\includegraphics[scale=0.32]{angularresfreq_oct20.png}\n\\end{center}\n\\caption{\\footnotesize \\em{Spatial resolution versus frequency set by the \nmaximum baselines of the ngVLA, and\nother existing and planned facilities across a broad range of\nwavelengths. }}\n\\end{figure}\n\n\\begin{figure}[tbh]\n\\begin{center}\n\\includegraphics[scale=0.32]{area_freq_log_oct19.png}\n\\end{center}\n\\caption{\\footnotesize \\em{Effective collecting area versus frequency for the ngVLA,\nand other existing or planned facilities operating in a comparable\nfrequency range. We have not included much higher and lower\nfrequencies, eg. the SKA-1 will extend to below 100MHz\n(including SKA-Low), while ALMA extends up to close to a THz. 
}}\n\end{figure}\n\nGiven the collecting area and reasonable receiver performance (Table\n1), the ngVLA will achieve sub-$\mu$Jy sensitivity in the continuum in\n1 hour at 1cm (30GHz). This implies that, at 1cm, the ngVLA will\nobtain 6K brightness temperature sensitivity with 9mas resolution in\njust 1 hour!\n\nWe note that there are other aspects of telescope phase space that are\nrelevant, including field of view and mapping speed, configuration and\nsurface brightness sensitivity, bandwidth, T$_{sys}$, etc. Given\nthe early stage in the design, we have presented the two principal and\nsimplest design goals, namely, maximum spatial resolution and total\neffective collecting area. A deeper consideration of parameter space\nwill depend on the primary science drivers that emerge in the coming\nyears.\n\n\section{Science Examples}\n\nIn the following, we highlight some of the science that is enabled by\nsuch a revolutionary facility. These three areas are among the high\npriority goals identified by the science working groups, and in\nparticular, these are the goals that have been best quantified to\ndate. We note that the most important science from such a\nrevolutionary facility is difficult to predict, and perhaps the most\nimportant aspect of the science analysis is simply the large volume of\nunique parameter space opened by the ngVLA (Figs 1 and 2).\n\n\subsection{Imaging terrestrial-zone planet formation}\n\nWith the discovery of thousands of extrasolar planets, and the first\nhigh resolution images of protoplanetary disks with ALMA, the field of\nextrasolar planets and planet formation has gone from rudimentary\nstudies, to a dominant field in astrophysics, in less than a\ndecade. 
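As an aside, the 1cm brightness-temperature sensitivity quoted in the previous section can be sanity-checked with the standard Rayleigh-Jeans conversion for a Gaussian beam; a minimal sketch (the 0.4$\mu$Jy rms and 9mas beam below are assumed round numbers consistent with the text, not official ngVLA specifications):

```python
# Rayleigh-Jeans brightness temperature for a Gaussian beam:
#   T_B [K] ~ 1222 * S[mJy] / (nu[GHz]^2 * theta_maj * theta_min [arcsec^2])

def brightness_temperature(s_mjy, nu_ghz, theta_maj_arcsec, theta_min_arcsec):
    """Brightness temperature (K) of flux density s_mjy in a Gaussian beam."""
    return 1222.0 * s_mjy / (nu_ghz ** 2 * theta_maj_arcsec * theta_min_arcsec)

# Assumed inputs: ~0.4 uJy continuum rms in 1 hr at 30 GHz, 9 mas beam.
tb = brightness_temperature(0.4e-3, 30.0, 9e-3, 9e-3)
print(f"T_B sensitivity ~ {tb:.1f} K")  # ~6.7 K, consistent with the ~6 K quoted
```

The same conversion reproduces the brightness-temperature rows of Table 1 to within the rounding of the assumed inputs.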
This remarkable progress promises to continue, as ALMA comes\ninto full operation, and with future space missions targeting planet\ndetection, such as the High Definition Space Telescope, for which the\nprimary science goals are direct imaging of terrestrial planets and\nthe search for atmospheric bio-signatures.\n\nThe first high resolution images from ALMA of the protoplanetary disk\nin HL Tau are clearly game-changing (Brogan et al. 2015). The ALMA\nimages show a dust disk out to 100AU radius, with a series of gaps at\nradii ranging from 13 AU to 80AU. These gaps may correspond to\nthe formation zones of planets. Coupled with JVLA imaging at longer\nwavelengths, these HL Tau images usher in a new era in the study of\nplanet formation.\n\nWhile revolutionary, there are limitations to the current\ncapabilities of ALMA and the JVLA in the study of protoplanetary\ndisks. First, for ALMA, the inner 10AU of protoplanetary disks like HL\nTau become optically thick at wavelengths of 3mm and shorter. Second,\nfor the JVLA, the sensitivity and spatial resolution are insufficient\nto image the terrestrial zone of planet formation at the longer\nwavelengths where the disks become optically thin.\n\n\begin{figure}[tbh]\n\begin{center}\n\includegraphics[width=\textwidth]{PPdisk_2.png}\n\end{center}\n\caption{\footnotesize \em{Models and images of a $\sim 1$Myr old protoplanetary\ndisk, comparable to HL Tau, at a distance of 130pc. This 'minimum mass\nsolar nebula disk' has a mass of 0.1M$_\odot$ orbiting a 1 M$_\odot$ star.\nThe model includes the formation of a Jupiter\nmass planet at 13AU radius, and Saturn at 6AU.\nThe left frame shows the model emission at 100GHz, the center frame\nshows the 25GHz model, and the right shows the ngVLA image for a\n100hour observation at 25GHz with 10mas resolution. The noise \nin the ngVLA image is 0.1$\mu$Jy, corresponding to 1K at 10mas resolution. 
}}\n\end{figure}\n\nThe ngVLA solves both of these problems, through ultra-high sensitivity\nin the 0.3cm to 3cm range, with milliarcsecond resolution. Figure 3\nshows a simulation of the ability of the ngVLA to probe the previously\ninaccessible scales of 1AU to 10AU. This simulation involves an\nHL-Tau like protoplanetary disk, including the formation of a Jupiter\nmass planet at 13AU radius, and Saturn at 6AU. Note that the inner\nring caused by Saturn is optically thick at 3mm. However, this inner\ngap is easily visible at 25GHz, and well imaged by the\nngVLA. Moreover, the ngVLA will have the sensitivity and resolution to\nimage circum-planetary disks, i.e. the formation of planets themselves\nvia accretion. In parallel, the ngVLA covers the optimum frequency\nrange to study pre-biotic molecules, including rudimentary amino acids\nsuch as glycine (see Isella et al. 2015 for more details).\n\n{\sl Next Generation Synergy:} The High Definition Space\nTelescope has made its highest priority goals the direct imaging of\nterrestrial-zone planets, and detection of atmospheric biosignatures.\nThe ngVLA provides a perfect evolutionary complement to the HDST\ngoals, through unparalleled imaging of terrestrial zone planet\nformation, and the study of pre-biotic molecules.\n\n\subsection{The dense gas history of the Universe}\n\nUsing deep fields at optical through radio wavelengths, the evolution\nof cosmic star formation and the build up of stellar mass have been\ndetermined in exquisite detail, from the epoch of first light (cosmic\nreionization, $z > 7$), through the peak epoch of cosmic star\nformation ('epoch of galaxy assembly', $z \sim 1$ to 3), to the\npresent day (Madau \& Dickinson 2014). However, these studies reveal\nonly one aspect of the baryonic evolution of galaxies, namely, the\nstars. What is currently less well understood, but equally important,\nis the cosmic evolution of the cool, molecular gas out of which stars\nform. 
Initial inroads into the study of the cool gas content of\ngalaxies have been made using the JVLA, GBT, Plateau de Bure, and now\nALMA. These initial studies have shown a profound change in the\nbaryonic content of star forming galaxies out to the epoch of galaxy\nassembly: the gas baryon fraction (the gas to stellar mass ratio)\nincreases from less than 10\% nearby, to unity, or larger, at $z \sim\n2$ to 3 (Genzel et al. 2015, Carilli \& Walter 2013). \nThis profound change in galaxy properties with redshift is\nlikely the root-cause of the evolution of the cosmic star formation\nrate.\n\n\begin{figure}[tbh]\n\begin{center}\n\includegraphics[width=\textwidth]{dncc_fig2.png}\n\end{center}\n\caption{\footnotesize \em{\nLeft: A model of the integrated CO 1-0 emission from a\nmassive z=2 galaxy from the cosmological zoom simulations of Narayanan\net al. (2015). The total SFR $= 150$ M$_\odot$ year$^{-1}$, and the stellar\nmass = $4\times 10^{11}$ M$_\odot$. The native resolution (pixel size)\nis 30mas, and the peak brightness temperature is 14K. The fainter\nregions have T$_B \ge 0.1$K. Right: the ngVLA image of\nthe field assuming an 8 x 5-hour synthesis using only antennas within a\n15km radius (about 50\% of the full array for the Clark\/Conway\nconfiguration), using Briggs weighting\nwith R=0.5. The rms noise is 5$\mu$Jy beam$^{-1}$, and the beam size\nis $0.11\"$. One tick mark = $1\"$. The peak surface brightness is 0.18\nmJy beam$^{-1}$. \n}}\n\end{figure}\n\nHowever, studies of the gas mass in early galaxies, typically using\nthe low order transitions of CO, remain severely sensitivity limited,\nrequiring long observations even for the more massive galaxies.\nThe sensitivity and resolution of the ngVLA open a new window on the\ngas properties of early galaxies, through efficient large cosmic\nvolume surveys for low order CO emission, and detailed imaging of gas\nin galaxies to sub-kpc scales (see Casey et al. 2015). 
The ngVLA will\ndetect CO emission from tens to hundreds of galaxies per hour in\nsurveys in the 20GHz to 40GHz range. In parallel, imaging of the gas\ndynamics will allow for an empirical calibration of the CO luminosity\nto gas mass conversion factor at high redshift.\n\nFigure 4 shows a simulation of the CO 1-0 emission from a \nmassive z=2 galaxy from the cosmological zoom simulations of Narayanan\net al. (2015), plus the ngVLA simulated image. The ngVLA reaches an\nrms noise of 5$\mu$Jy beam$^{-1}$ (over 9MHz bandwidth and 40hours),\nand the beam size is $0.11\"$ = 0.9kpc at z=2, using only antennas\nwithin a 15km radius of the array center. The ngVLA can detect the large\nscale gas distribution, including tidal structures, streamers,\nsatellite galaxies, and possible accretion. Note that the rms\nsensitivity of the ngVLA image corresponds to an H$_2$ mass\nlimit of $3.3\times 10^8$ ($\alpha$\/4) M$_\odot$. Further, the ngVLA\nhas the resolution to image the gas dynamics on scales approaching\nGMCs. For comparison, the JVLA in a similar integration time would\nonly detect the brightest two knots at the very center of the galaxy,\nwhile emission from the high order transitions imaged by ALMA misses\nthe extended, low excitation, diffuse gas in the system. \n\n{\sl Next Generation Synergy:} With new facilities such as\nthirty-meter class optical telescopes, the JWST, and ALMA, study of\nthe stars, ionized gas, and dust during the peak epochs of galaxy\nformation, will continue to accelerate. The ngVLA sensitivity and\nresolution in the 0.3cm to 3cm window is the required complement to\nsuch studies, through observation of the cool gas out of which stars\nform throughout the Cosmos.\n\n\subsection{Ultra-sensitive, wide field imaging}\n\nScience working group 2 (`Galaxy ecosystems'; Leroy et al. 2015)\nemphasized the extraordinary mapping speed of the ngVLA in line and\ncontinuum, for study of the gas and star formation in the nearby\nUniverse. 
The frequency range of the ngVLA covers, simultaneously,\nmultiple continuum emission mechanisms, from synchrotron, to\nfree-free, to cold (or spinning) dust. These mechanisms are key\ndiagnostics of star formation, cosmic rays, magnetic fields, and other\nimportant ISM properties. This range also covers low order and maser\ntransitions of most astrochemically important molecules, such as CO,\nHCN, HCO$^+$, NH$_3$, H$_2$O, CS...\n \nFigure 5 shows an ngVLA simulation of the thermal free-free emission\nin the 30GHz band from a star forming galaxy at 27Mpc distance, with a\nmoderate star formation rate of 4 M$_\\odot$ year$^{-1}$. The ngVLA\nwill image the free-free emission with a sensitivity adequate to\ndetect an HII region associated with a single O7.5 main sequence star\nat the distance of the Virgo cluster! In general, the combination of\nspectral and spatial resolution will allow for decomposition\nof the myriad spectral lines, and various continuum emission\nmechanisms, on scales down to a few parsecs at the distance of Virgo,\nthereby enabling Local-Group-type science throughout the local\nsupercluster.\n\n\\begin{figure}[tbh]\n\\begin{center}\n\\includegraphics[width=\\textwidth]{FF.png}\n\\end{center}\n\\caption{\\footnotesize \\em{Left: a model for the thermal free-free\nemission from NGC 5713 at a distance of 27Mpc with \na SFR = 4 M$_\\odot$ year$^{-1}$. The model was estimated from H$\\alpha$ imaging\nat a native resolution of 2$\"$. The peak brightness temperature is \n150mK, and the fainter knots are about 1mK. Right: The ngVLA image\nfor 10hrs integration, with a bandwidth of 20GHz, centered at 30GHz. \nThe rms is 0.1$\\mu$Jy beam$^{-1}$. \nNote that the ngVLA image has been restored with a beam of \n$0.5\"$. }}\n\\end{figure}\n\n\\subsection{Exploring the Time Domain}\n\nThe ngVLA is being designed for optimal exploitation of the time\ndomain. Fast triggered response modes on minute timescales will be\nstandard practice. 
Commensal searches for ultra-fast transients, such\nas Fast Radio Bursts or SETI signals, will also be incorporated into\nthe design. And monitoring of slow transients, from novae to AGN, will\nbe possible at unprecedented sensitivities, bandwidths, and angular\nresolutions. The 2cm and shorter capabilities will be complementary to\nthe SKA-1 at longer wavelengths, in particular for the broad band\nphenomena typical of fast and slow transients.\n\nThe broad band coverage and extreme sensitivity of the ngVLA provide\na powerful tool to search for, and characterize, the early time\nemission from processes ranging from gravity wave EM counterparts to\ntidal disruption events around supermassive black holes, as well as\nprobing through the dense interstellar fog in search of Galactic\nCenter pulsars. The system will also provide unique insights into\nvariable radio emission associated with 'exo-space weather,' such as\nstellar winds, flares, and aurorae. Moreover, many transient phenomena\npeak earlier, and brighter, at higher frequencies, and full spectral\ncoverage to high frequency is required for accurate calorimetry. Full\npolarization information will also be available, as a key diagnostic\non the physical emission mechanism and propagation effects. \n\n\vspace{0.5cm}\n\nWe invite the reader to investigate the science programs in more\ndetail in the working group reports, as well as to participate in the\npublic forums and meetings in the on-going development of the ngVLA\nscience case.\n\n\section*{Acknowledgments}\n\nThe National Radio Astronomy Observatory is a facility of the National\nScience Foundation operated under cooperative agreement by Associated\nUniversities, Inc.\n\n\vskip 0.2in\n\n\noindent{\sl References}\n\n\noindent Akiyama, K. et al. 2015, ApJ, 807, 150\n\n\noindent Bower, G. et al. 2015, {\sl Next Generation VLA memo. No. 9}\n\n\noindent Brogan, C. et al. 2015, ApJ, 808, L3\n\n\noindent Butler, B. 
2002, VLA Test Memo 232\n\n\\noindent Carilli, C. 2015, {\\sl Next Generation VLA memo. No. 1}\n\n\\noindent Carilli, C. \\& Walter, F. 2013, ARAA, 51, 105\n\n\\noindent Casey, C. et al. 2015, {\\sl Next Generation VLA memo. No. 8}\n\n\\noindent Clark, B. \\& Brisken, W. 2015, {\\sl Next Generation VLA memo. No. 3}\n\n\\noindent Clark, B. 2015, {\\sl Next Generation VLA memo. No. 2}\n\n\\noindent Genzel, R. et al. 2015, ApJ, 800, 20\n\n\\noindent Isella, A. et al. 2015, {\\sl Next Generation VLA memo. No. 6}\n\n\\noindent Lal, D., Lobanov, A., Jimenez-Monferrer, S. 2011, \nSKA Design Studies Technical Memo 107\n\n\\noindent Madau, P. \\& Dickinson, M. 2014, ARAA, 52, 415\n\n\\noindent Leroy, E. et al. 2015, {\\sl Next Generation VLA memo. No. 7}\n\n\\noindent Narayanan, D. et al. 2015, Nature, 525, 496\n\n\\noindent Owen, F. 2015, {\\sl Next Generation VLA memo. No. 4}\n\n\\end{document}\n\n \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction.} \nThe Fourier coefficients $a(F,n)$ of a cusp form $F$ of integral weight $k$ for\nthe group $\\Gamma_0(M)$ are bounded from above by\n$\\sigma_0(n)n^\\frac{k-1}{2}$ if $F$ is a primitive form (also called\nnormalized newform) by the famous Ramanujan-Petersson-Deligne\nbound. For applications one often needs bounds for an arbitrary cusp\nform which is a linear combination of old and new forms. Such bounds\nhave first been given in special cases in \\cite{fomenko1,fomenko2}.\nThe first step for this is the construction of an explicit\northogonal basis for the space $S_k(\\Gamma_0(M), \\chi)$. 
\nStarting from\nthe usual basis of translates of primitive forms and using the well\nknown fact that translates of\ndifferent primitive forms are pairwise orthogonal, one is left with\nthe task of orthogonalizing the translates of the same primitive\nform; in particular, one has to compute their Petersson scalar products.\n\nChoie and Kohnen in \cite{choie-kohnen} and Iwaniec, Luo, Sarnak in\n\cite{iwa_luo_sar} cover arbitrary integral weights, square free\nlevel and trivial character, using Rankin $L$-functions for the\ncomputation of the Petersson products of translates of a primitive form. By the same\nmethod, Rouymi \cite{rouymi} treated prime power level and trivial\ncharacter. His approach was generalized to arbitrary levels and\ntrivial character by Ng Ming Ho in his unpublished master thesis \n\cite{ngmingho}. Blomer and Mili\'{c}i\'{c} in \cite{blomer_milicic} treat\nMaa{\ss} forms and holomorphic modular forms for arbitrary level and\ntrivial character by the same method.\n\nIn this note we investigate the case of arbitrary level and arbitrary\ncharacter with a rather elementary \napproach. \nIn order to compute the Petersson product of two translates of the\nsame primitive form we use the trace operator sending a form of level $M$ to a\nform of level $N$ dividing $M$. Together with the well known fact that the $p$-th Hecke\noperator on forms of level $N$ can be obtained by first translating the argument by a factor\n$p$ and then applying the trace operator from level $Np$ down to level\n$N$, this allows us to express the scalar products quite easily in\nterms of Hecke eigenvalues of the underlying primitive form. The formulas we get and the relations between\nthe Hecke eigenvalues $\lambda_f(1,p^j)$ of a primitive form $f$ for\nvarying $j$ imply then that each element of the orthogonal basis\nobtained by the Gram-Schmidt procedure involves\nonly very few of the translates of its underlying primitive form. 
For\nforms of half integral weight our approach works in essentially the\nsame way as far as the computation of the Petersson product of a Hecke\neigenform with its translates is concerned. Since the theory of\nnewforms is in this case completely known only for the Kohnen plus\nspace in square free level, it is however not clear how large the part\nof the space of all cusp forms of a given arbitrary level is that is\ncovered by our result.\n\nWe then use in the integral weight case the orthogonal basis to obtain \nan explicit bound for the Fourier coefficient $a(F,n)$ of an arbitrary\ncusp form $F$ in terms of the Petersson norm $\langle\nF,F \rangle$ and the level $M$.\n\nIn applications to the theory of integral quadratic forms it is\nusually possible to compute or at least bound $\langle F,F \rangle$ for the cusp form\n$F$ at hand (the difference between a genus theta series and a theta\nseries), so that our result is directly applicable to such problems;\nthis will be worked out separately.\n\nAn estimate for the Fourier coefficients in the half integral weight\ncase could in principle be obtained in the same way as in the integral\nweight case discussed above as long as one has an explicit bound for\nthe Fourier coefficients with square free index of a Hecke\neigenform. Unfortunately most of the known estimates (see\n\cite[Appendix 2]{blomer_michel_mao}) involve\nconstants which are not explicitly known, and we prefer not to discuss\nthis possibility in detail in the present paper.\n\nThis article is an extension of work from the master thesis of the\nsecond named author at Universit\u00e4t des Saarlandes, 2014.\n\nAfter the first version of this article was posted on the arXiv, Ng\nMing Ho sent us his master thesis, from which we also learnt of the\nprevious work of Iwaniec, Luo and Sarnak and of Rouymi. 
\nWe thank Ng Ming Ho for providing this information to us.\n \section{Trace operator and scalar products} \nLet $N\mid M$ be integers and let $\chi$ be a Dirichlet character modulo $N$; we denote the Dirichlet\ncharacter modulo $M$ induced by it by $\chi$ as well. We have induced characters on the groups \n$\Gamma_0(N)$, $\Gamma_0(M)$ given by\n \begin{equation*} \n\begin{pmatrix} a & b \\ c & d \end{pmatrix} \longmapsto \chi(d) \n \end{equation*}\nas usual and denote these again by $\chi$.\n\nFor an integer $k$ we denote by\n$M_k(\Gamma_0(N),\chi),S_k(\Gamma_0(N),\chi)$ the spaces of modular\nforms and cusp forms, respectively, of weight $k$ and character $\chi$ for\nthe group $\Gamma_0(N)$. On $S_k(\Gamma_0(N),\chi)$ we consider the\nPetersson inner product given by\n\begin{equation*}\n\langle f,g\rangle:=\langle f,g\rangle_{\Gamma_0(N)}:=\frac{1}{(SL_2({\mathbb Z}):\Gamma_0(N))}\int_{\mathcal F}f(x+iy)\overline{g(x+iy)}y^{k-2}dxdy, \n\end{equation*}\nwhere ${\mathcal F}$ is a fundamental domain for the action of\n$\Gamma_0(N)$ on the upper half plane $H\subseteq {\mathbb C}$ by fractional\nlinear transformations. 
The normalization chosen implies that for\n$N\\mid M$ and $f,g \\in S_k(\\Gamma_0(N),\\chi)\\subseteq\nS_k(\\Gamma_0(M),\\chi)$ we have $\\langle f,g\n\\rangle_{\\Gamma_0(N)}=\\langle f,g \\rangle_{\\Gamma_0(M)}$.\n\nFor $\\gamma=\\bigl(\n\\begin{smallmatrix}\n a&b\\\\c&d\n\\end{smallmatrix}\\bigr)\n\\in GL_2({\\mathbb R})$ with $\\det(\\gamma)>0$ we write as usual\n$f\\vert_k\\gamma(z)=\\det(\\gamma)^{k\/2}(cz+d)^{-k}f(\\frac{az+b}{cz+d})$.\n \n\\medskip\nWe define trace operators as in \\cite{kume,bfsp}:\n \\newpage\n \\begin{definition}\nFor $N\\mid M$ and $\\chi$ as above we put for $f \\in M_k(\\Gamma_0(M),\\chi)$:\n \\begin{equation*} \n f|_k {\\rm tr}_N^M = \\frac{1}{(\\Gamma_0(N):\\Gamma_0(M))} \\sum_{i} \\overline{\\chi(\\alpha_i)} f\\vert_k \\alpha_i,\n \\end{equation*}\nwhere $\\Gamma_0(N) = {\\underset{i}{\\stackrel{\\cdot}{\\bigcup}}} \\,\\Gamma_0(M) \\alpha_i$ is a disjoint coset\ndecomposition.\n \\end{definition}\n\n \\begin{lemma}\nThe definition above is independent of the choice of coset representatives. \nOne has $f|_k {\\rm tr}_N^M \\in M_k(\\Gamma_0(N),\\chi)$ and $f|_k {\\rm tr}_N^M \\in S_k(\\Gamma_0(N),\\chi)$ if $f$ is\ncuspidal.\n \\end{lemma}\n\n \\begin{proof} \nThis is a routine calculation, see e.g. \\cite[Prop. 
2.1]{bfsp}.\n \end{proof}\n\n \begin{lemma} \label{trace-skp}\nWith notations as above one has for $f \in S_k(\Gamma_0(N),\chi)$, $g \in S_k(\Gamma_0(M),\chi)$:\n \begin{equation*} \n \langle f,g \rangle = \langle f,g~|_k ~{\rm tr}_N^M \rangle,\n \end{equation*}\nwhere the Petersson product on the left hand side is with respect to\n$\Gamma_0(M)$ and that on the right hand side is with respect to $\Gamma_0(N)$.\n \end{lemma}\n\n \begin{proof}\nOne has for $\alpha_i \in \Gamma_0(N)$:\n\begin{align*}\n \langle f,\overline{\chi(\alpha_i)}g|_k \alpha_i \rangle & = \langle \chi(\alpha_i)f|_k \alpha_i^{-1},g \rangle\\\n &= \langle f,g\rangle ,\n \end{align*}\nwhich implies the assertion.\n \end{proof}\n\n \begin{definition}\n Let $\gcd(\ell,N) = 1$.\n \begin{itemize} \n \item[a)] With $\delta_{\ell} := \begin{pmatrix} \ell & 0 \\ 0 & 1 \end{pmatrix} \in {\rm GL}_2^+(\mathbb Q)$ we put\n \begin{equation*} \n f|_k V_{\ell}(z) := f(\ell z) = \ell^{-k\/2} f|_k \delta_{\ell} (z)\n \end{equation*}\nfor $f \in M_k(\Gamma_0(N),\chi)$.\n \item[b)] For $\ell \mid m$ we denote by $T_N(\ell,m)$ the Hecke operator given by the double coset $\Gamma_0(N) \begin{pmatrix} \ell & 0 \\ 0 & m\end{pmatrix} \Gamma_0(N)$.\n \item[c)] For $\ell\mid m$ we denote by $T^{\ast}_N(m,\ell)$ the Hecke operator given by the double coset \n$\Gamma_0(N) \begin{pmatrix} m & 0\\ 0 & \ell \end{pmatrix} \Gamma_0(N)$.\n \end{itemize}\n \end{definition}\n\n \begin{bem}\n \begin{itemize}\n \item[a)] It is well-known (see \cite[\S 4.5]{Miy}) that on spaces of cusp forms of level $N$ the\noperator $T^{\ast}(m,\ell)$ is adjoint to $T(\ell,m)$ with respect to the Petersson inner product.\n \item[b)] As usual we write\n \begin{equation*} \n T_N(n) = \sum_{\ell m = n}T_N(\ell,m),\: T_N^{\ast}(n) = \sum_{\ell m = n} T_N^{\ast} (m,\ell).\n \end{equation*}\n \end{itemize}\n \end{bem}\n\n \begin{lemma} 
\label{trace-hecke} \nLet $f \in S_k(\Gamma_0(N),\chi)$ and $d \in \mathbb N$. Then\n \begin{equation*} \n (\Gamma_0(N):\Gamma_0(Nd))(f|_k V_d)~|_k~{\rm tr}_N^{Nd} = \frac{1}{d^{k-1}} f|_k T_N^{\ast}(d,1).\n \end{equation*}\n \end{lemma}\n\n \begin{proof}\nPutting $\Gamma_0(N) = {\underset{i}{\stackrel{\cdot}{\bigcup}}} \Gamma_0(Nd)\alpha_i$ we have \n(using $\delta_d^{-1} \Gamma_0(N){\delta_d} \cap \Gamma_0(N) =\n\Gamma_0(Nd)$ and the proof of Prop. 3.1 of \cite{shimura_book}):\n\n \begin{align*}\n f|_k T_N^{\ast}(d,1)& = d^{k-1} \sum_i \overline{\chi(\alpha_i)} (d^{-k\/2}f|_k \delta_d)|_k \alpha_i\\\n & = d^{k-1}\sum_i \overline{\chi(\alpha_i)} (f|_k V_d)|_k \alpha_i\\\n &=(\Gamma_0(N):\Gamma_0(Nd)) d^{k-1}(f|_k V_d)|_k {\rm tr}_N^{Nd}.\n \end{align*}\n \end{proof}\n\n \begin{theorem}\label{gram_matrix_theorem}\nLet $f \in S_k(\Gamma_0(N),\chi)$ be a primitive form, let $m,n \in \mathbb N$ with $\gcd(m,n) = d$. \nThen\n \begin{equation*}\n\langle f|_k V_m,f|_kV_n\rangle = \frac{\lambda(1,\frac{n}{d})\overline{\lambda(1,\frac{m}{d})}}\n{ (\frac{mn}{d})^k \underset{p|\frac{mn}{d^2} \atop p\nmid N}{\prod} (1+\frac{1}{p})} \langle f,f \rangle ,\n \end{equation*}\nwhere we denote by $\lambda(1,\frac{n}{d})$ the $T(1,\frac{n}{d})$-eigenvalue of $f$ (and analogously\nfor $\lambda(1,\frac{m}{d})$).\n \end{theorem}\n\n \begin{proof}\n Since we have\n \begin{align*}\n \langle f|_k V_m, f|_k V_n \rangle &= \langle f|_k V_{m\/d} |_k V_d,f|_k V_{n\/d}|_k V_d \rangle\\\n &=d^{-k} \langle f|_k V_{m\/d}|_k \delta_d,f|_k V_{n\/d}|_k \delta_d \rangle \\\n &= d^{-k} \langle f|_k V_{m\/d},f|_k V_{n\/d}\rangle,\n \end{align*}\n\nwe can restrict attention to the case\n \begin{equation*}\nd= \gcd(m,n) = 1.\n \end{equation*}\nIn that case we have \n\n\begin{align*}\n & \langle f|_k V_m,f|_k V_n \rangle = \langle f|_k V_m,f|_k V_n ~|~ {\rm tr}_{mN}^{mnN} \rangle\\\n&= 
\frac{1}{(\Gamma_0(mN):\Gamma_0(mnN))} \frac{1}{n^{k-1}} \langle f|_k V_m, f|_k T_{mN}^{\ast} (n,1) \rangle,\n \end{align*}\nwhere we used Lemma \ref{trace-skp} and Lemma \ref{trace-hecke}.\n \medskip\nWe split $n$ as $n=\tilde{n}n'$ with $\gcd(\tilde{n},N)=1$ and $n'|N^{\infty}$ (i.e., $n'$ is divisible \nonly by primes dividing $N$) and have \n\n\begin{align*}\n T_{mN}^{\ast} (n,1) &= T_{mN}^{\ast}(\tilde{n},1) T_{mN}^{\ast}(n',1)\\\nf|_k T_{mN}^{\ast}(\tilde{n},1) &= f|_kT_N^{\ast}(\tilde{n},1)\\\n&= \overline{\lambda(1,\tilde{n})} f\n \end{align*}\nsince $T_N^{\ast}(\tilde{n},1)$ is adjoint to $T_N(1,\tilde{n})$; in the same way we see \n\n \begin{align*}\n f|_k T_{mN}^{\ast} (n',1)&= f|_k T_N^{\ast}(n',1)\\\n &=\overline{\lambda(1,n')}f.\n \end{align*}\nThis gives us\n\n \begin{align*}\n \langle f|_k V_m,f|_k V_n\rangle &= \frac{1}{n^{k-1}(\Gamma_0(mN): \Gamma_0(mnN))} \cdot\n \lambda(1,\tilde{n}) \lambda (1,n') \langle f|_k V_m,f \rangle\\\n &= \frac{\lambda(1,n)}{n^{k-1}(\Gamma_0(mN):\Gamma_0(mnN))} \overline{\langle f,f|_k V_m \rangle}.\n \end{align*}\nIn particular, we get\n \begin{equation*}\n \langle f,f|_k V_m\rangle = \frac{\lambda(1,m)}{(\Gamma_0(N):\Gamma_0(mN))m^{k-1}}\n \overline{\langle f,f \rangle},\n \end{equation*}\nand thus (computing the group index in the denominator)\n\n \begin{align*}\n \langle f|_k V_m,f|_k V_n\rangle &= \frac{\lambda(1,n)\overline{\lambda(1,m)}}{(mn)^{k-1}(\Gamma_0(N):\Gamma_0(mnN))}\n \langle f,f \rangle\\\n &= \frac{\lambda(1,n) \overline{\lambda(1,m)}}{(mn)^k \underset{p|mn\n \atop p\nmid N}{\prod} (1+\frac{1}{p})}\n \langle f,f \rangle\n \end{align*}\nas asserted.\n \end{proof}\n\n \section{Orthogonal bases for spaces of cusp forms}\nThe formulas for the Petersson products derived in the previous\nsection allow us to construct an orthogonal basis by Gram-Schmidt\northogonalization. 
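For orientation, here is a quick specialization of Theorem \ref{gram_matrix_theorem} (an illustration added by the editor, not part of the original argument; the last estimate uses the Ramanujan-Petersson-Deligne bound $|\lambda(1,p)| \leq 2p^{(k-1)/2}$ mentioned in the introduction):

```latex
% m = 1, n = p a prime with p \nmid N, so d = gcd(1,p) = 1:
\begin{equation*}
  \langle f, f|_k V_p \rangle
  = \frac{\lambda(1,p)}{p^k\,(1+\frac{1}{p})}\,\langle f,f \rangle ,
\end{equation*}
% and hence, by |\lambda(1,p)| \le 2 p^{(k-1)/2},
\begin{equation*}
  |\langle f, f|_k V_p \rangle|
  \leq \frac{2}{p^{(k+1)/2}\,(1+\frac{1}{p})}\,\langle f,f \rangle ,
\end{equation*}
% so a primitive form and its p-translate are nearly orthogonal for large p.
```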
As we learnt from Ng Ming Ho after version one of\nthis article was posted, this has been done for trivial character in\n\cite{rouymi} for prime power level \nand in \cite{ngmingho} for general level. For the sake of completeness\nand since \cite{ngmingho} is at present not published we give here our\nversion of it.\n\n\smallskip\nWe recall first the well-known fact (see e.g. \cite[Lemma 4.6.9]{Miy}) that the space $S_k(\Gamma_0(M),\chi)$ \nhas a basis consisting of the $f|_{V_{\ell}}$, where $f$ runs over the primitive forms (normalized Hecke\neigenforms) of levels $N\mid M$ where $N$ is divisible by the conductor of $\chi$, and where $\ell$ is\na positive integer such that $\ell N$ divides $M$. We will call this basis the basis of translates of\nnewforms.\n\n \begin{lemma}\label{product_decomposition-Lemma}\nLet $f\in S_k(\Gamma_0(N),\chi)$ be a primitive form, let $m_1,m'_1,m_2,m'_2$ be positive integers with\n$\gcd(m_1m'_1,m_2m'_2)=1$, and\nput $\tilde{f} = \frac{f}{\sqrt{\langle f,f \rangle}}$. Then\n \begin{align*}\n \langle \tilde{f}|_k V_{m_1},\tilde{f}|_k V_{m'_1} \rangle \cdot \langle \tilde{f}|_k V_{m_2},\tilde{f}|_k V_{m'_2} \n \rangle \\\n = \langle \tilde{f}|_k V_{m_1m_2},\tilde{f}|_k V_{m'_1m'_2} \rangle\,.\n \end{align*}\n \end{lemma}\n\n \begin{proof}\nThis follows directly from the theorem above.\n \medskip\n\end{proof}\nIt is well-known that for primitive forms $f \not= g$ all translates of $f$ by some $V_{m'}$\nare orthogonal to all translates of $g$ by some $V_{m'}$. Our lemma above shows that for a primitive form\n$f \in S_k(\Gamma_0(N),\chi)$ for some $N\mid M$ the space of translates of $f$ in $S_k(\Gamma_0(M),\chi)$ is \nisometric (with respect to Petersson norms) to the tensor product of the spaces $W_{p_i}^{(f)}$ for the\n$p_i|\frac{M}{N}$ consisting of $p_i$-power-translates of $f$. 
An isometry is given by the unique linear\nmap sending \n \begin{equation*}\n \tilde{f}|_{V_{p_1^{r_1}}} \otimes \cdots \otimes \tilde{f}|_{V_{p_z^{r_z}}} \mbox{ to } \n \tilde{f}|_{V_{p_1^{r_1} \cdots p_z^{r_z}}},\n \end{equation*}\nwhere $\displaystyle \tilde{f} = \frac{f}{\sqrt{\langle f,f \rangle}}$.\n \medskip\n\nTo construct an orthogonal basis for $S_k(\Gamma_0(M),\chi)$ it suffices therefore to do that for\neach space $W_{p_i}^{(f)}$.\n \n\n \begin{theorem}\label{ogbasis_prime}\nLet $f \in S_k(\Gamma_0(N),\chi)$ be a primitive form, put $\tilde{f}=\frac{f}{\sqrt{\langle f,f \rangle}}$,\nlet $p$ be a prime number, $r \in \mathbb N$, let $W_p(f)$ be the space generated by\n$f,f|_{V_p},\ldots,f|_{V_{p^r}}$.\n \medskip\n\n \begin{itemize}\n \item[a)] If $p\mid N$ the space $W_p(f)$ has an orthogonal basis consisting of\n \begin{equation*}\n g_0 = \tilde{f}, \quad g_j = p^{jk\/2}(\tilde{f}|_{V_{p^j}} - \frac{\overline{\lambda(1,p)}}{p^k} \n \tilde{f}|_{V_{p^{j-1}}} )\n \text{ for } 1 \leq j \leq r\n \end{equation*}\nwith \n\begin{eqnarray*}\n\langle g_0,g_0 \rangle& =& 1,\\\n\langle g_j,g_j \rangle& =&1-\frac{|\lambda(1,p)|^2}{p^k}\n\text{ for }1 \leq j \leq r.\n\end{eqnarray*}\n \item[b)] If $p\nmid N$ the space $W_p(f)$ has an orthogonal basis consisting of\n \begin{eqnarray*}\n g_0 &=& \tilde{f},\\\n g_1&=& p^{k\/2} \tilde{f}|_k V_p - \frac{\overline{\lambda(1,p)}}{p^{k\/2}(1+\frac{1}{p})} \tilde{f},\\\ng_j &=& p^{jk\/2}(\tilde{f}|_k V_{p^j} - \frac{\overline{\lambda(1,p)}}{p^k}\n \tilde{f}|_k V_{p^{j-1}}+ \frac{\overline{\chi(p)}}{p^{k+1}}\n \tilde{f} |_k V_{p^{j-2}})\n \end{eqnarray*}\nfor $2 \leq j \leq r$, with \n\begin{eqnarray*}\n\langle g_0,g_0 \rangle &=&1,\\\n\langle g_1,g_1 \rangle &=& 1-\frac{|\lambda(1,p)|^2}\n {p^k(1+\frac{1}{p})^2},\\\n\langle g_j,g_j \rangle &=& (1-
\frac{|\lambda(1,p)|^2}{p^k(1+\frac{1}{p})^2})\n\text{ for } 2 \leq j \leq r. \n\end{eqnarray*}\n \end{itemize}\n \end{theorem}\n \n \begin{proof}\na) In the case $p|N$ we have by Theorem \ref{gram_matrix_theorem}\nfor $0 \leq i \leq j \leq r$:\n \begin{align*}\n \langle \tilde{f}|_k V_{p^i}, \tilde{f}|_k V_{p^j} \rangle &= p^{-ik} \langle \tilde{f},\tilde{f}|_k V_{p^{j-i}} \rangle\\\n &= p^{-ik} \frac{\lambda(1,p^{j-i})}{p^{(j-i)k}}\\\n &=p^{-jk}\lambda(1,p^{j-i})\n \end{align*}\nThis gives for $1 \leq j \leq r$\n\n \begin{align*}\n \langle g_0,g_j \rangle &= p^{jk\/2} \langle \tilde{f},\tilde{f}|_k V_{p^j} - \frac{\overline{\lambda(1,p)}}{p^k}\n \tilde{f}|_k V_{p^{j-1}}\rangle\\\n&= p^{jk\/2} (\frac{\lambda(1,p^j)}{p^{jk}} - \frac{\lambda(1,p) \lambda(1,p^{j-1})}\n {p^k p^{(j-1)k}})\\\n &= 0\, ,\n\end{align*}\nbecause of $\lambda(1,p) \lambda(1,p^{j-1}) = \lambda(1,p^j)$ for $p\mid N$.\n \medskip\n\nSimilarly, we see for $1 \leq i < j \leq r$ \n\n \begin{align*}\n \langle g_i,g_j \rangle &= p^{(i+j)k\/2} \langle \tilde{f}|_k V_{p^i} - \frac{\overline{\lambda(1,p)}}{p^k} \n \tilde{f}|_k V_{p^{i-1}},\tilde{f}|_k V_{p^j}-\frac{\overline{\lambda(1,p)}}{p^k} \tilde{f}|_k V_{p^{j-1}} \rangle\\\n =& p^{(i+j)k\/2}(p^{-jk} \lambda(1,p^{j-i})+ \frac{|\lambda(1,p)|^2}{p^{2k}} p^{-(j-1)k} \lambda(1,p^{j-i})\\\n &- \frac{\overline{\lambda(1,p)}}{p^k} p^{-jk} \lambda(1,p^{j-i+1})-\n \frac{\lambda(1,p)}{p^k} p^{-(j-1)k} \lambda(1,p^{j-i-1}))\\ \n =& 0\n \end{align*}\nbecause of $\lambda(1,p) \lambda(1,p^{j-i-1}) = \lambda(1,p^{j-i})$ and\n$\overline{\lambda(1,p)} \lambda(1,p^{j-i+1}) = \overline{\lambda(1,p)} \lambda(1,p) \lambda(1,p^{j-i})\n= |\lambda(1,p)|^2 \lambda(1,p^{j-i})$.\n \medskip\n\nFinally we have for $1 \leq j \leq r$\n\n \begin{align*}\n \langle g_j,g_j \rangle& = p^{jk} \langle \tilde{f}|_k 
V_{p^j}-\\frac{\\overline{\\lambda(1,q)}}{p^k}\n \\tilde{f} |_kV_{p^{j-1}}, \\tilde{f}|_k V_{p^j} - \\frac{\\lambda(1,p)}{p^k} \\tilde{f}|_k V_{p^j-1} \\rangle\\\\\n & = p^{jk}(p^{-jk}+ \\frac{p^{-(j-1)k}}{p^{2k}} |\\lambda(1,p)|^2-\\frac{\\overline{\\lambda(1,p)} p^{-jk}}{p^k}\n \\lambda(1,p) -\\frac{\\lambda(1,p)}{p^k} p^{-jk} \\overline{\\lambda(1,p)}) \\\\\n & = (1- \\frac{|\\lambda(1,p)|^2}{p^k})\n \\end{align*}\nb) Consider now the case $p \\nmid N$. From Theorem \\ref{gram_matrix_theorem} we have for $0 \\leq i < j \\leq r$\n \\begin{equation*}\n \\langle \\tilde{f}|_k V_{p^i},\\tilde{f}|_k V_{p^j} \\rangle = \\frac{\\lambda(1,p^{j-i})}{p^{jk}(1+\\frac{1}{p})}\n\\end{equation*}\nand \n \\begin{equation*}\n\\langle \\tilde{f}|_k V_{p^j}, \\tilde{f}|_k V_{p^j}\\rangle = \\frac{1}{p^{jk}}.\n \\end{equation*}\nFrom \\cite[Lemma 4.5.7]{Miy} we have $\\lambda(1,p^2) = \\lambda(1,p)^2-(p+1)p^{k-2}\\chi(p)$ and \n$\\lambda(1,p^j) = \\lambda(1,p)\\lambda(1,p^{j-1})-p^{k-1}\\chi(p) T(1,p^{j-2})$ for $j \\geq 3$.\n\\medskip\n\nThis gives us first\n\n \\begin{align*}\n \\langle g_0,g_1 \\rangle &= p^{k\/2} \\langle \\tilde{f},\\tilde{f}|_k V_p \\rangle - \\frac{\\lambda(1,p)}\n {p^{k\/2}(1+\\frac{1}{p})}\\\\\n&=0\\,.\n \\end{align*}\nFor $i \\geq 1$ we get\n\n \\begin{align*}\np^{-(i+1)k\/2} \\langle \\tilde{f}|_k V_{p^i},g_{i+1} \\rangle =& \\langle \\tilde{f}|_k V_{p^i},\\tilde{f}|_k V_{p^{i+1}} \\rangle\n - \\frac{-\\lambda(1,p)} {\\langle \\tilde{f}|_k V_{p^i},\\tilde{f}|_k V_{p^i} \\rangle} \\\\ \n & \\quad +\\frac{\\chi(p)}{p^{k+1}} \\langle \\tilde{f}|_k V_{p^i},\\tilde{f}|_k V_{p^{i-1}} \\rangle \\\\\n =& \\frac{\\lambda(1,p)}{p^{(i+1)k}(1+{\\frac{1}{p}})} -\\frac{\\lambda(1,p)}{p^k} \\frac{1}{p^{ik}} +\n \\frac{\\chi(p)}{p^{k+1}} \\frac{\\overline{\\lambda(1+p)}}{p^{ik}(1+\\frac{1}{p})}\\\\\n =& 0\n\\end{align*}\n(using $\\chi(p) \\overline{\\lambda(1,p)}=\\lambda(1,p)$, see \\cite[Theorem 4.5.4]{Miy}).\n\n\\medskip\nFor $0 \\leq i < j \\leq r$ with $j \\geq 
2+i$ we obtain\n\n \\begin{align*}\np^{-jk\/2} \\langle \\tilde{f}|_k V_{p^i},g_j \\rangle = &\\langle \\tilde{f}|_k V_{p^i}, \\tilde{f}|_k V_{p^j} \\rangle \n -\\frac{\\lambda(1,p)}{p^k} \\langle \\tilde{f}|_k V_{p^i}, \\tilde{f}|_k V_{p^{j-1}} \\rangle \\\\\n &\\quad + \\frac{\\chi(p)}{p^{k+1}} \\langle \\tilde{f}|_k V_{p^i},\\tilde{f}|_k V_{p^{j-2}} \\rangle \\\\\n =& \\frac{\\lambda(1,p^{j-i})}{p^{jk}(1+\\frac{1}{p})} - \\frac{\\lambda(1,p)\\lambda(1,p^{j-i-1})}\n {p^k p^{(j-1)k}(1+\\frac{1}{p})}\n + \\frac{\\chi(p) \\lambda(1,p^{j-i-2})}{p^{k+1} p^{(j-2)k}(1+\\frac{1}{p})}\\\\\n =&0 \\,.\n \\end{align*}\nTaken together we see that the $g_i$ form an orthogonal basis; it remains to compute the $\\langle g_i,g_i\\rangle$.\\\\\nFor this, $\\langle g_0,g_0\\rangle = 1$ is clear.\n\n \\medskip\nNext, we have \n\n \\begin{align*}\n\\langle g_1,g_1\\rangle &= \\langle g_1,p^{k\/2} \\tilde{f}|_k V_p \\rangle \\\\\n&= p^k \\langle \\tilde{f}|_k V_p,\\tilde{f}|_k V_p\\rangle -p^{k\/2} \\cdot\n \\frac{\\overline{\\lambda(1,p)}}{p^{k\/2}(1+\\frac{1}{p})} \\langle \\tilde{f},\\tilde{f}|_k V_p \\rangle\\\\\n&= 1-\\frac{\\overline{\\lambda(1,p)}}{(1+\\frac{1}{p})} \\cdot \\frac{\\lambda(1,p)}{p^k(1+\\frac{1}{p})}\\\\\n&= 1-\\frac{|\\lambda(1,p)|^2}{p^k(1+\\frac{1}{p})^2}.\n \\end{align*}\nFor $j \\geq 2 $ we see\n\n \\begin{align*}\n\\langle g_j,g_j \\rangle =& \\langle g_j,p^{jk\/2} \\tilde{f}|_k V_{p^j} \\rangle \\\\\n =&p^{jk} \\langle \\tilde{f}|_k V_{p^j}, \\tilde{f}|_k V_{p^j} \\rangle \n - \\frac{\\overline{\\lambda(1,p)}}{p^k} p^{jk} \\langle \\tilde{f}|_k V_{p^{j-1}},\\tilde{f}|_k V_{p^j} \\rangle \\\\\n &\\quad + \\frac{\\overline{\\chi(p)}}{p^{k+1}} p^{jk} \\langle \\tilde{f}|_k V_{p^{j-2}},\\tilde{f}|_k V_{p^j} \\rangle \\\\\n =& 1-\\frac{|\\lambda(1,p)|^2}{p^k(1+\\frac{1}{p})} + \\frac{\\overline{\\chi(p)} \\lambda(1,p^2)}\n {p^{k+1}(1+\\frac{1}{p})}.\n \\end{align*}\nUsing again $\\lambda(1,p^2) = \\lambda(1,p)^2-(p+1)p^{k-2}\\chi(p)$ and\n$\\chi(p) 
\\overline{\\lambda(1,p)}=\\lambda(1,p)$ we obtain the assertion.\n\\end{proof}\n\\section{Half integral weights}\nFor positive integers $\\kappa,N$ we denote by $M_{k}(4N, \\chi)$\nthe space of holomorphic modular forms of weight $k=\\kappa+\\frac{1}{2}$ and\ncharacter $\\chi$ for the group $\\Gamma_0(4N)$. For the relevant\ndefinitions and notations see \\cite{shimura_halfintegral}.\nIn particular, we denote by ${\\mathfrak G}$ the covering group of\n$GL_2^+({\\mathbb R})$ defined there and by $\\gamma \\mapsto \\gamma^*$ the\nembedding of $\\Gamma_0(4)$ into ${\\mathfrak G}$ with image\n$\\Delta_0(4)$. We can extend this embedding by putting $\\bigl(\n\\begin{smallmatrix}\n 1&0\\\\0&m^2\n\\end{smallmatrix}\\bigr)^*=\\bigl(\\bigl(\n\\begin{smallmatrix}\n 1&0\\\\0&m^2\n\\end{smallmatrix}\\bigr), m^{\\frac{1}{2}}\\bigr)$ and \n $\\bigl(\n\\begin{smallmatrix}\n m^2&0\\\\0&1\n\\end{smallmatrix}\\bigr)^*=\\bigl(\\bigl(\n\\begin{smallmatrix}\n m^2&0\\\\0&1\n\\end{smallmatrix}\\bigr), m^{-\\frac{1}{2}}\\bigr)$ and $(\\gamma_1 \\alpha\n \\gamma_2)^*=\\gamma_1^* \\alpha^* \\gamma_2^*$ for $\\gamma_1,\\gamma_2\n \\in \\Gamma_0(4)$ and $\\alpha$ one of the above matrices of\n determinant $m^2$ . In the sequel we will omit the superscript $*$\n if this can cause no confusion.\n\nWe also use the\naction of double cosets of integral matrices of non zero square\ndeterminant on half integral weight modular forms of level\n$4N$ as defined there. In particular we\nhave associated to the double coset with respect to $\\Delta_0(4N)$ of $\\bigl(\\bigl(\n\\begin{smallmatrix}\n 1&0\\\\0&m^2\n\\end{smallmatrix}\\bigr),m^{\\frac{1}{2}} \\bigr)$\nthe Hecke\noperators $T_{4N}(1,m^2)$ which for $m\\mid 4N$ coincide with the the operators $U(m^2) $\nsending $\\sum_n a_f(n)e(nz)$ to $\\sum_n a_f(nm^2)e(nz)$. By\nconsidering a modular form of level $4N$ as a form of level\n${\\rm lcm}(m,4N)$ we can let $U(m^2)$ act on forms of any level divisible\nby $4$. 
\nThe operator $T^*_{4N}(m^2,1)$ associated to the double coset of $\\bigl(\n\\begin{smallmatrix}\n m^2&0\\\\0&1\n\\end{smallmatrix}\\bigr)^*$ is adjoint to $T_{4N}(1,m^2)$ with respect\nto the Petersson product and coincides with it if one has\n$\\gcd(m,4N)=1$; in this case we write, as usual, $T_{4N}(m^2)$.\nFor $N$ dividing $M$ we have as in the integral weight case a trace\noperator ${\\rm tr}^M_N$ from $M_{k}(4M, \\chi)$ to $M_{k}(4N, \\chi)$\nsending cusp forms to cusp forms and satisfying for cusp forms $f,g$\n\\begin{equation*} \n \\langle f,g \\rangle = \\langle f,g~|_k ~{\\rm tr}_N^M \\rangle,\n \\end{equation*}\nwhere the Petersson product on the left hand side is with respect to\n$\\Gamma_0(4M)$ and that on the right hand side is with respect to $\\Gamma_0(4N)$.\n \n\nIn the theory of half integral weight modular forms there are two\ndifferent methods used for the definition of oldforms, namely using\nthe operator $V_{d^2}$ as in the integral weight case (but with square\ndeterminant), raising the level by a factor $d^2$, and using the\noperator $U(p^2)$ for a prime not dividing the level, raising the\nlevel by a factor $p$.\nWe start with the first method.\n\\begin{proposition}\n\\label{trace-hecke_halfintegral} \nLet $k=\\kappa+\\frac{1}{2}$ be half integral,\nlet $f \\in S_k(\\Gamma_0(N),\\chi)$ and $d \\in \\mathbb N$. 
Then\n \\begin{equation*} \n (\\Gamma_0(N):\\Gamma_0(Nd^2))(f|_k V_{d^2})~|_k~{\\rm tr}_N^{Nd^2} = \\frac{1}{d^{2(k-1)}} f|_k T_N^{\\ast}(d^2,1).\n \\end{equation*} \nIn particular, if $p$ is a prime with $p\\nmid 4N$ and $f$ is an\neigenform of the Hecke operator $T(p^2)$ with eigenvalue $\\lambda_p$,\nwe have \n \\begin{equation*} \n (p^2+p)(f|_k V_{p^2})~|_k~{\\rm tr}_N^{Np^2} =\n \\frac{ \\lambda_p}{p^{2(k-1)}} f\n \\end{equation*} \nand \n\\begin{equation*}\n \\langle f, f|_k V_{p^2} \\rangle =\\frac{\n \\lambda_p}{(p^2+p)p^{2(k-1)}}\\langle f,f\\rangle.\n\\end{equation*}\n\n\\end{proposition}\n\\begin{proof}\n This is proven in the same way as Lemma \\ref{trace-hecke}. Notice\n that in the case of half integral weight we can only use shift\n operators $V_{d^2}$ and Hecke operators $T_N^{\\ast}(d^2,1)$ with\n squares $d^2$.\n\\end{proof}\n\\begin{proposition}\n\\label{trace-hecke_halfintegral} \nLet $k=\\kappa+\\frac{1}{2}$ be half integral,\nlet $f \\in S_k(\\Gamma_0(N),\\chi)$ and $p\\nmid 4N$ be a prime.\n\nThen \n\\begin{equation*}\n f \\mid_k U(p^2)|_k {\\rm tr}^{Np}_N=p^2 f|_kT(p^2).\n\\end{equation*}\nIn particular, if $f$ is an\neigenform of the Hecke operator $T(p^2)$ with eigenvalue $\\lambda_p$,\nwe have \n \\begin{equation*}\n\\langle f,f|U(p^2) \\rangle =p^2 \\lambda_p \\langle f, f\\rangle.\n \\end{equation*} \n\\end{proposition}\n\\begin{proof}\n With $\\alpha_b=\\bigl(\n \\begin{smallmatrix}\n 1&b\\\\0&p^2\n \\end{smallmatrix}\\bigr)$ we have (see \\cite{shimura_halfintegral})\n \\begin{equation*}\n f|_kU(p^2)=f|_k\\Gamma_0(4N)\\alpha_0\\Gamma_0(4Np)=(p^2)^{\\frac{k}{2}-1}\\sum_{b=0}^{p^2-1}f|_k\\alpha_b^*.\n \\end{equation*}\nMoreover, we have\n$\\Gamma_0(4N)\\alpha_0\\Gamma_0(4Np)=\\cup_b\\Gamma_0(4N)\\alpha_b$, and by\nSection 3.1 of \\cite{shimura_book},\n$\\Gamma_0(4N)\\alpha_0\\Gamma_0(4Np)\\Gamma_0(4Np)1_2\\Gamma_0(4N)=(p+1)p^2\\Gamma_0(4N)\\alpha_0\\Gamma_0(4N)$.\nFrom this the first assertion follows, and the second one follows in\nthe 
same way as in the integral weight case, using Lemma\n\\ref{trace-skp}, which is valid for half integral weight too. \n\\end{proof}\nAs mentioned in the introduction, because of the lack of a\nsatisfactory theory of oldforms and newforms in the half integral\nweight case we finish the investigation of this case here without\ntrying to find good orthogonal bases for the space of all cusp forms.\n \\section{Fourier coefficients of cusp forms}\nFor the rest of this paper we concentrate again on the case of modular\nforms of integral weight $k$. \n \\begin{theorem}\nThe space $S_k(\\Gamma_0(M),\\chi)$ has an orthonormal basis $(h_1,\\ldots,h_d)$, where each \n$h_i$ is an eigenform of all Hecke operators $T(p)$ for $p\\nmid M$ and where the Fourier coefficients\n$a(h_i,n)$ satisfy\n \\begin{equation*}\n|a(h_i,n)| \\leq 2 \\sqrt{\\pi} e^{2\\pi}\\sigma_0(n) n^{\\frac{k-1}{2}} \\cdot M^{\\frac{1}{2}}\\cdot \\prod_{p|M} \\frac{(1+\\frac{1}{p})^3}{\\sqrt{1-\\frac{1}{p^4}}}.\n \\end{equation*}\n \\end{theorem}\n \n \\begin{proof}\nWe write $g_j = \\phi_{p,j}(\\tilde{f})$ for the basis vectors $g_j \\in W_p(f)$ constructed in Theorem\n\\ref{ogbasis_prime} and view $\\phi_{p,j}$ as an operator transforming\na modular form $g$ into the expression on the right hand side (with\n$g$ in place of $\\tilde{f}$) of the definition of $g_j$. Obviously,\nthese operators commute. 
As noticed\nafter Lemma \\ref{product_decomposition-Lemma} the space\n$S_k(\\Gamma_0(M),\\chi)$ \nhas then an orthogonal basis consisting of the $(\\prod_{p|M} \\phi_{p,j_p})(\\tilde{f})$, where $f$ runs over the primitive\nforms of levels $N_f\\mid M$ in $S_k(\\Gamma_0(M),\\chi)$ and $j_p \\geq\n0$ over the integers satisfying $N_fp^{j_p}\\mid M$.\n \\medskip\n\nExamining the Proof of Theorem \\ref{ogbasis_prime} we see that the Petersson norm of $(\\prod_{i} \\phi_{p_i,j_{p_i}})\n(\\tilde{f})$ is equal to the product over $i$ of the norms of the $\\phi_{p_i,j_{p_i}}(\\tilde{f})$, which were computed in that \ntheorem.\n \\medskip\n\nAnalogously, we can decompose the computation of a bound for the Fourier coefficients of $(\\prod_{i} \\phi_{p_i,j_{p_i}})\n(\\tilde{f})$ into the computation of such a bound for each $\\phi_{p_i,j_{p_i}}(\\tilde{f})$. Looking at the $g_j$\nagain, we have for $p|N_f$ (using $|a(f,n)| \\leq\n\\sigma_0(n)n^{\\frac{(k-1)}{2}}$ and $\\vert \\lambda(1,p)\\vert \\le\np^{\\frac{k-1}{2}}$ for primitive forms $f$ and $p\\mid N_f$) \n \\begin{align*}\n \\langle f,f \\rangle^{\\frac{1}{2}} |a(g_0,n)| \\leq &\\sigma_0(n) n^{\\frac{k-1}{2}} \\mbox{ and}\\\\\n \\langle f,f \\rangle^{\\frac{1}{2}} |a(g_j,n)| \\leq & p^{\\frac{jk}{2}} \\sigma_0(\\frac{n}{p^j})(\\frac{n}{p^j})^{\\frac{k-1}{2}}\\\\\n & +p^{\\frac{jk}{2}} p^{-\\frac{(k+1)}{2}} \\sigma_0(\\frac{n}{p^{j-1}})(\\frac{n}{p^{j-1}})^{\\frac{k-1}{2}}\n \\end{align*}\nfor $j \\geq 1$, where the terms involving\n$\\frac{n}{p^j},\\frac{n}{p^{j-1}}$ appear only if the respective\nquotient is integral. 
\nThis gives $\\langle f,f \\rangle^{\\frac{1}{2}} |a(g_j,n)| \\leq\n\\sigma_0(n) n^{\\frac{k-1}{2}} p^{\\frac{j}{2}} (1+\\frac{1}{p}) $ for\n$j\\ge 1$, and we see that this estimate holds indeed for all $j$.\n\\medskip\n\nFor $p\\nmid N$ we obtain (with $|\\lambda(1,p)| \\leq\n2p^{\\frac{k-1}{2}}$ for $p\\nmid N_f$):\n \\begin{align*}\n\\langle f,f \\rangle^{\\frac{1}{2}} |a(g_0,n)| \\leq & \\sigma_0(n) n^{\\frac{k-1}{2}}\\\\\n\\langle f,f \\rangle^{\\frac{1}{2}} |a(g_1,n)| \\leq & p^{\\frac{k}{2}} \\sigma_0(\\frac{n}{p})(\\frac{n}{p})^{\\frac{k-1}{2}}\\\\\n & + 2\\sigma_0(n)n^{\\frac{k-1}{2}} \\cdot \\frac{p^{\\frac{k-1}{2}}}{p^{\\frac{k}{2}}(1+\\frac{1}{p})}\\\\\n \\leq & \\sigma_0(n) n^{\\frac{k-1}{2}} p^{\\frac{1}{2}}(1+\\frac{2}{p(1+\\frac{1}{p})})\n \\end{align*}\nand for $ j \\geq 2$\n \\begin{align*}\n \\langle f,f\\rangle^{\\frac{1}{2}} |a(g_j,n)| \\leq & p^{\\frac{jk}{2}} (\\sigma_0(\\frac{n}{p^j})(\\frac{n}{p^j})^{\\frac{k-1}{2}} +\n 2 \\cdot \\frac{p^{\\frac{k-1}{2}}}{p^k} \\cdot \\sigma_0(\\frac{n}{p^{j-1}})(\\frac{n}{p^{j-1}})^{\\frac{k-1}{2}}\\\\\n & +\\frac{1}{p^{k+1}} \\sigma_0( \\frac{n}{p^{j-2}})(\\frac{n}{p^{j-2}})^{\\frac{k-1}{2}})\\\\\n \\leq & \\sigma_0(n) n^{\\frac{k-1}{2}} p^{\\frac{j}{2}} (1+\\frac{1}{p})^2,\n \\end{align*}\nand we see that the latter bound holds for all $j$.\n \\medskip\n\nFinally, to estimate $\\langle f,f \\rangle $ for the primitive form\n$f$ from below we choose the fundamental domain\n${\\mathcal F}$ so that it contains $\\{x+iy \\in H\\mid \\vert x \\vert\n<\\frac{1}{2}, y>1\\}$, use $a(f,1)=1$ and get as in \n\\cite{fomenko1} \n \\begin{equation*}\n \\langle f,f \\rangle \\geq ( 4\\pi e^{4\\pi}N_f \\cdot \\prod_{p|N_f} (1+\\frac{1}{p}))^{-1} \n \\end{equation*}\nfrom the trivial bound \n$\\int_{\\mathcal F}\\vert f(x+iy)\\vert^2 y^{k-2}dxdy\\ge \\int_1^\\infty\n\\exp(-4\\pi y)dy$.\n\nImprovements on this are possible by \\cite{Go-Ho-Li, Ho-Lo} but have been made effective so\nfar only in few cases, see 
\\cite{rouse}. \nAt least if the conductor $M_\\chi$ of the character $\\chi$ is small\ncompared to $M$\nthese do not give much for our present purpose because of the \nadditional factors coming from oldforms which we computed above. \n\n\\medskip\nPutting things together and comparing the bounds in the cases $p\\mid N_f$ and\n$p \\nmid N_f$, we arrive, for $h$ equal to the quotient of one of the\n$\\prod_{p|M} \\phi_{p,j_p}(\\tilde{f})$ by its Petersson norm, at the\ncommon bound\n \\begin{equation*}\n |a(h,n)| \\leq 2\\sqrt{\\pi} e^{2 \\pi} \\sigma_0(n) n^{\\frac{k-1}{2}} M^{\\frac{1}{2}} \\prod_{p|M} \\frac{(1+\\frac{1}{p})^3}\n {\\sqrt{1-\\frac{1}{p^4}}}\n \\end{equation*}\nfor both cases as asserted.\n \\end{proof}\n\n \\begin{theorem}\\label{fourier_estimate}\nLet $F\\in S_k(\\Gamma_0(M),\\chi)$. Then the Fourier coefficients $a(F,n)$ satisfy\n \\begin{equation*}\n |a(F,n)| \\leq 2\\sqrt{\\pi} e^{2 \\pi}\\sqrt{\\langle F,F \\rangle} \\cdot (\\dim S_k(\\Gamma_0(M),\\chi))^{\\frac{1}{2}} \\cdot \\sigma_0(n) n^{\\frac{k-1}{2}}\n M^{\\frac{1}{2}} \\cdot \\prod_{p|M} \\frac{(1+\\frac{1}{p})^3}{\\sqrt{1-\\frac{1}{p^4}}}.\n \\end{equation*}\n \\end{theorem}\n\n \\begin{proof}\nThis follows immediately from the previous theorem, using the Cauchy-Schwarz inequality.\n \\end{proof}\n\n \\begin{remark}\n \\begin{enumerate}\n\\item As indicated above it should be possible to improve on the factor\n $M^{\\frac{1}{2}}$ in the bound for $a(F,n)$ if the conductor $M_\\chi$ of the character $\\chi$ is equal to\n $M$ or at least relatively\n large compared to $M$ by using an effective\n version of the bound for the Petersson norm of a primitive form from \\cite{Go-Ho-Li, Ho-Lo}.\n \\item \nFor $\\gcd(n,M) = 1$ we obtain the better estimate\n \\begin{equation*}\n |a(h_i,n)| \\leq 2\\sqrt{\\pi} e^{2 \\pi}\\sigma_0(n) n^{\\frac{k-1}{2}} \\cdot M^{\\frac{1}{2}} \\prod_{p|M} \\frac{(1+\\frac{1}{p})}\n {\\sqrt{1-\\frac{1}{p^4}}}\n \\end{equation*}\nin Theorem \\ref{fourier_estimate} and 
hence\n \\begin{equation*}\n|a(F,n)| \\leq 2\\sqrt{\\pi} e^{2 \\pi}\\sqrt{\\langle F,F \\rangle} \\cdot (\\dim S_k(\\Gamma_0(M),\\chi))^{\\frac{1}{2}} \\cdot \\sigma_0(n) n^{\\frac{k-1}{2}}\n M^{\\frac{1}{2}} \\cdot \\prod_{p|M} \\frac{(1+\\frac{1}{p})}{\\sqrt{1-\\frac{1}{p^4}}}.\n \\end{equation*}\n \\end{enumerate}\n \\end{remark}\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzdgdz b/data_all_eng_slimpj/shuffled/split2/finalzzdgdz new file mode 100644 index 0000000000000000000000000000000000000000..b3b6182589cfafce89330618bf87d21c1d345c64 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzdgdz @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nThis paper is part of a study of certain $C^*$-algebras which can be associated to a\nhyperbolic homeomorphism of a compact space, $(X,f)$. They are called the the stable and unstable\nRuelle algebras, $\\mathcal{R}^s$ and $\\mathcal{R}^u$, and are higher dimensional generalizations of\nCuntz-Krieger algebras. This means that if the dimension of $X$ is\nzero, then $\\mathcal{R}^u \\cong O_{A^T} \\otimes \\mathcal{K}$ and $\\mathcal{R}^s \\cong O_A \\otimes \\mathcal{K}$. One of the basic\nresults of the theory is a duality relation between $\\mathcal{R}^u$ and $\\mathcal{R}^s$.\nIn the present paper we prove this explicitely in the zero dimensional\ncase. Our reason for doing this is to bring out the use of Fock space to construct\nthe K-theory class implementing the duality in the zero dimensional case. \n\n\n\n\n\nThe notion of Spanier-Whitehead duality in topology has a very natural\ngeneralization to K-theory of $C^*$-algebras. Briefly, it says that\ntwo algebras, $A$ and $B$, are dual if there are duality classes\n$\\Delta \\in KK^i(A \\otimes B, \\mathbb{C})$ and $\\delta \\in KK^i(\\mathbb{C}, A \\otimes \\mathcal{B})$ which\ninduce an isomorphism between the K-theory of $A$ and the K-homology of\n$B$ via Kasparov product. 
It is closely related to the notion of\nPoincar\\'e duality used by Connes in his study of the standard model\nof particle physics, ~\\cite{connes:book}. We will describe it\nin more detail in Section 2. A useful result proved in the paper\nis a criterion, presented in Section 3, for deciding when one has a duality\nbetween two algebras. It is applicable when two duality classes such\nas $\\Delta$ and $\\delta$ are given and one wants to show that they induce \nduality isomorphisms. Section 4 and Section 5 apply this criterion to\nthe case of two algebras associated to a hyperbolic dynamical system.\nIf $A$ is an $n \\times n$ aperiodic matrix then one can associate to it\nthe subshift of finite type, $(\\Sigma_A, \\sigma_A)$. There are two\n$C^*$-algebras that can be constructed from this data--the\nCuntz-Krieger algebras $O_A$ and $O_{A^T}$. We show that these algebras\nare dual. In Section 6 we will discuss some further applications and\nmake some concluding remarks. \n\nIn a later paper we will establish duality for the stable and\nunstable Ruelle algebras associated to any hyperbolic homeomorphism of a\ncompact space (a Smale space). Ruelle algebras were introduced by\nthe second author in ~\\cite{putnam}. They can be thought of as\nhigher dimensional generalizations of Cuntz-Krieger algebras. They\nare constructed by defining two equivalence relations on the Smale\nspace, stable and unstable equivalence. One takes the $C^*$-algebras\nassociated to them and then takes the crossed products by the\nautomorphism induced by the homeomorphism.\n\n\nThe stable and unstable equivalence classes in a Smale space behave\nvery much like transverse foliations. 
Because of that, and the fact\nthat the homeomorphism is contracting along the stable leaves, one\nobtains a duality in K-theory for the algebras.\n\nCuntz-Krieger algebras are special cases of Ruelle algebras, so the\nduality established here would follow from the more general theory.\nHowever, there is an intriguing aspect to this which as yet has no\nanalogue in the general case. Namely, the duality classes have\nrepresentatives constructed using Fock space. These classes are obtained\nin a natural manner following work of D. Evans, ~\\cite{evans} and\nD. Voiculescu, ~\\cite{voiculescu}. This provides potential\nconnections with physics (cf. ~\\cite{jorgensen, dykema-n, faddeev}) and Voiculescu's work on free products which we\nhope to pursue in the future. We would like to thank Dan Voiculescu\nand Marius Dadarlat for very helpful conversations.\n\nIt should be noted that the general duality theory for Smale spaces\nrequires a different approach which is based on the notion of\nasymptotic morphism. The final version of the duality theorem uses\nthese methods, ~\\cite{kaminker-p2}.\n\n\\section{K-theoretic duality for $C^*$-algebras}\n\nIn this paper we will be describing an example of some $C^*$-algebras\nthat are dual with respect to K-theory. This notion of duality has\nappeared several times in the past, ~\\cite{kasparov:invent,kahn-k-s,parker}, and\nrecently was used by Connes ~\\cite{connes:book}. We present the\ndefinitions here and list some basic facts. More details can be found\nin ~\\cite{kaminker-p1}. \n\nWe will use the following conventions. Let $\\mathcal{S}$ denote $C_0(\\mathbb{R})$.\nThen $KK^1(A,B)$ will be, by definition, equal to $KK(\\mathcal{S} \\otimes A, B)$.\nFor $A$ and $B$ separable, and $A$ nuclear one has that $KK^1(A,B)\n\\cong Ext(A,B)$.\nWe establish some additional notation. 
If $\\sigma$ is a permutation, and $A_1, \\ldots\nA_n$ are algebras, then we will also use $\\sigma$\n to denote the isomorphism\n\\begin{equation*} \nA_1 \\otimes \\cdots \\otimes A_n \\to A_{\\sigma (1)} \\otimes \\cdots \\otimes A_{\\sigma(n)}.\n\\end{equation*} \nIf $\\sigma$ is a transposition interchanging $i$ and $j$, we will write\n$\\sigma_{ij}\\colon KK^*( \\cdots \\otimes A_i \\otimes \\cdots \\otimes A_j \\otimes\\cdots , B) \\to\nKK^*(\\cdots \\otimes A_j \\otimes \\cdots \\otimes A_i \\otimes \\cdots, B) $ for the\nhomomorphism induced by $\\sigma$ on the first variable of the Kasparov groups, \nand $\\sigma^{ij}$ for the corresponding map induced on the second variable. Let $\\tau_D \\colon KK^i(A,B) \\to KK^i(A \\otimes D, B \\otimes D)$ and\n$\\tau^D \\colon KK^i(A,B) \\to KK^i(D \\otimes A, D \\otimes B)$ denote the standard\nmaps, ~\\cite{ kasp}. \n\nAlso, we will have need of the following version of Bott\nperiodicity. Let \n\\begin{equation}\n\\label{toep} \n\\begin{CD}\n0 @>>> \\mathcal{K}(\\ell^2(\\mathbb{N})) @>>> \\mathcal{T} @>{\\sigma_{\\mathcal{T}}}>> C(S^1) @>>> 0\n\\end{CD}\n\\end{equation}\nbe the Toeplitz extension. We will denote it by $\\mathcal{T} \\in KK^1(C(S^1),\n\\mathbb{C}) $ and its restriction to $\\mathcal{S}$ by $\\mathcal{T}_0 \\in KK(\\mathcal{S} \\otimes \\mathcal{S}, \\mathbb{C})$.\nLet $\\beta \\in KK(\\mathbb{C}, \\mathcal{S} \\otimes \\mathcal{S})$ be the Bott element.\nThen the following holds. (c.f. ~\\cite{ blackadar}) \n\\begin{theorem} One has\n$\\beta \\otimes_{\\mathcal{S} \\otimes \\mathcal{S}} \\mathcal{T}_0 = 1_{\\mathbb{C}}$ and $\\mathcal{T}_0\n\\otimes \\beta = 1_{\\mathcal{S} \\otimes \\mathcal{S}}$, .\n\\end{theorem}\n\nWe describe next the notion of duality we will be using.\n\\begin{definition}\nLet $A$ and $B$ be $C^{*}$-algebras. 
Suppose that, for $n\n= 0$ or $1$, two classes,\n$\\Delta \\in KK^n(A \\otimes B, \\mathbb{C})$ and $\\delta \\in KK^n(\\mathbb{C}, A \\otimes B)$, are given.\nDefine homomorphisms $\\Delta_i \\colon K_i(A) \\to\nK^{i+n}(B)$ and $\\delta_i \\colon K^{i+n}(B) \\to K_i(A)$ in the following way.\nIf $n = 1$ set\n\\begin{equation} \n \\Delta_i(x) = \n\\begin{cases}\nx \\otimes_{A}\\Delta& \\text{if $ i = 0$},\\\\\n\\beta \\otimes_{\\mathcal{S} \\otimes \\mathcal{S}} ( \\sigma_{12}(x \\otimes_{A} \\Delta))& \\text{if $i = 1$} \n\\end{cases}\n\\end{equation}\nand let\n\\begin{equation} \n\\delta_i(y) =\n\\begin{cases}\n\\beta \\otimes_{\\mathcal{S} \\otimes \\mathcal{S}} (\\delta \\otimes_{B} y)& \\text{if $i = 0$}, \\\\\n\\delta \\otimes_{B} y& \\text{if $ i = 1$}.\n\\end{cases}\n\\end{equation}\n\nIf $n = 0$ set\n\\begin{equation} \n \\Delta_i(x) = \n\\begin{cases}\nx \\otimes_{A}\\Delta& \\text{if $ i = 0$},\\\\\n \\sigma_{12}(x \\otimes_{A} \\Delta)& \\text{if $i = 1$} \n\\end{cases}\n\\end{equation}\nand let\n\\begin{equation} \n\\delta_i(y) =\n\\begin{cases}\n\\delta \\otimes_{B} y& \\text{if $i = 0$}, \\\\\n\\beta \\otimes_{\\mathcal{S} \\otimes \\mathcal{S}} (\\delta \\otimes_{B} y)& \\text{if $ i = 1$}.\n\\end{cases}\n\\end{equation}\n\nWe say that $A$ and $B$ are\ndual if \n\\begin{equation*} \n\\Delta_i :K_i(A) \\to K^{i+n}(B)\n\\end{equation*}\nand\n\\begin{equation*} \n\\delta_i :K^{i+n}(B) \\to K_i(A)\n\\end{equation*}\nare inverse isomorphisms. Given $A$, if such an algebra $B$ exists it is called a dual of $A$\nand it is denoted $\\EuScript{D} A$. \n\\end{definition}\nIn this generality a dual is not unique, so care must be taken with the\nnotation $\\EuScript{D} A$. We will only use it if a specific dual is in hand.\nHowever, it is easy to see that a dual is unique up to\nKK-equivalence. 
Indeed, $\\sigma^{12}(\\delta ') \\otimes_{A} \\Delta \\in\nKK(B',B)$ and $\\sigma^{12}(\\delta) \\otimes_{A} \\Delta ' \\in KK(B,B')$ yield\nthe required KK-equivalence.\n\nThe form of the definition of the homomorphisms $\\Delta_{*}$ and\n$\\delta_{*}$ is forced by our convention that $KK^1(A,B) = KK(A \\otimes\n\\mathcal{S}, B)$. It is an interesting point that when dealing with an odd\ntype duality one must bring in some form of Bott periodicity\nexplicitely. Either one can incorporate it into the definition of the\nhomomorphisms as we have done, or one can modify the definitions of\nthe K-theory groups. As the reader will see, our choice is the most convenient one for\nthe proofs we are giving. Note also that we are working only in the\nodd case, (i.e. $n=1$), in this paper.\n\n\n\nFor a specific algebra $A$ it is not clear\nwhether a dual, $\\EuScript{D} A$, exists. In general, the existence of $\\EuScript{D} A$\nwith prescribed properties, such as separability, is a strong\ncondition. If one can take $\\EuScript{D} A$ equal to $A$ then this\nagrees with what Connes has developed as Poincar\\'e duality in\n~\\cite{connes:book}. \n\n\nIf one requires only the existence of $\\Delta$ and the fact that it\nyields an isomorphism in the definition above, then there is no\nguarantee that a class $\\delta$ exists to give the inverse\nisomorphism. If $A$ was $C(X)$, with $X$ a finite complex, then the\nexistence of $\\delta$ would follow from that of $\\Delta$. However, in\ngeneral this need not hold.\n\n\nThe origin of this notion is in Spanier-Whitehead duality in topology,\n~\\cite{ spanier}. Recall that if $X$ is a finite complex then there is a dual\ncomplex, $DX$, along with class $\\Delta\\in H_m(X\\wedge\nDX)$ \nsatisfying that\n$\\backslash \\Delta :H^i(X) \\to H_{m-i}(DX)$\nis an isomorphism. The space $DX$ is called the\nSpanier-Whitehead dual of $X$. It is unique up to stable homotopy. 
If\n$M$ is a closed manifold of dimension $n$ embedded in $\\mathbb{R}^m$, \nthen $DM$ can be taken to be $(\\nu M)^{+}$, the Thom space of \nthe normal bundle of $M$. It is interesting to note that there is a relation\nbetween Spanier-Whitehead duality, the Thom isomorphism, $\\phi$, \nand Poincar\\'e duality \n\\begin{equation*} \n\\begin{CD}\nH_{n-i}(M) @<<{\\backslash\\Delta}< H^{i+m-n}((\\nu M)^+)\\\\\n@A{\\cap [M]}AA@AA{\\phi}A\\\\\nH^{i}(M)@>>>H^{i}(M),\n\\end{CD}\n\\end{equation*}\nwhere $[M] = U \\backslash \\Delta$, $U$ the Thom class. Of course,\n$\\EuScript{D} (C(X)) = C(D(X))$ for $X$ a finite complex.\n\nIf one works in the class $\\mathcal{N}$\nintroduced by Rosenberg and Schochet in their study of the Universal\nCoefficient Theorem, ~\\cite{rosenberg-s}, the theory simplifies and\nthere is a strong analogy with the commutative case. (However, in general, the\nrestriction that the algebras lie in $\\mathcal{N}$ is too strong. In several\nimportant examples this does not hold.) Recall that $\\mathcal{N}$ is defined to be\nthe smallest class of separable, nuclear $C^*$-algebras containing $\\mathbb{C}$ and\nclosed under forming extensions, direct limits, and KK-equivalence.\nThe Universal Coefficient Theorem for KK-theory holds for $KK(A,B)$ if\n$B$ is separable and $A \\in \\mathcal{N}$. Let $\\EuScript{D}\\mathcal{N}$ be the subclass of $\\mathcal{N}$\nconsisting of algebras $A$ in $\\mathcal{N}$ for which a dual $\\EuScript{D} A$ exists and\nis also in $\\mathcal{N}$. 
For algebras in $\\EuScript{D}\\mathcal{N}$ the following facts are easy\nconsequences of the properties of the Kasparov product and the\nUniversal Coefficient Theorem.\n\n\\begin{enumerate}[i)]\n\\item If $A$ is dual to $B$, then $B$ is dual to $A$.\n\\item If $A \\in \\EuScript{D}\\mathcal{N}$, then $\\EuScript{D}(\\EuScript{D} A)$ is KK-equivalent to $A$.\n\\item If $A \\in \\mathcal{N}$, then $A \\in \\EuScript{D}\\mathcal{N}$ if and only if $K_*(A)$ is\n finitely generated.\n\\item Let $E, D \\in \\mathcal{N}$ and $A \\in \\EuScript{D}\\mathcal{N}$. Then\n\\begin{equation*}\n\\Delta_* \\colon KK^*(E, D \\otimes A) \\to KK^{*+n}(E \\otimes \\EuScript{D} A, D)\n\\end{equation*}\nand \n\\begin{equation*} \n\\delta_* \\colon KK^*(E \\otimes \\EuScript{D} A, D) \\to KK^{*+n}(E, D \\otimes A)\n\\end{equation*}\nare inverse isomorphisms.\n\\item If $A$ has a dual, and $A'$ is KK-equivalent to A, then $A'$ has\n a dual which is KK-equivalent to the dual of $A$.\n\\end{enumerate}\nFor details and further development, see ~\\cite{k-p2}. \nIt is not apparent if an algebra has a dual or not. Indeed, the main\ngoal of this paper is to exhibit an example of a class of algebras\nwith specific types of duals which have a geometric and dynamical\norigin. However, one can start to build up a class of algebras which\nhave duals in an elementary way. For example, if $X$ is a finite\ncomplex, then $\\EuScript{D} X$ exists. If $A \\in \\mathcal{N}$ and $K_*(A)$ is finitely\ngenerated, then $A$ is KK-equivalent to $C(X)$ where $X$ is a finite\ncomplex, and hence $A$ has a dual. Moreover, Connes has shown that\n$\\mathcal{A}_{\\theta}$ is self-dual for $\\theta$ irrational. \n\nThe largest subclass of $\\mathcal{N}$ for which $\\EuScript{D}$ is involutive modulo\nKK-equivalence is $\\EuScript{D}\\mathcal{N}$. 
This can be compared with a result of M.\nBoardman, ~\\cite{boardman}, which states that the largest category on\nwhich Spanier-Whitehead duality is involutive is the homotopy category\nof finite complexes. Thus, $\\EuScript{D}\\mathcal{N}$ has a formal similarity with the\nhomotopy category of finite complexes. This itself does not clarify\nthe issue of which $C^*$-algebras should play the role of\nnon-commutative finite complexes, but it is suggestive. \nThis will be discussed further in ~\\cite{k-p2}.\n \n \n\nOne may view duality as being of even or odd type depending on whether\n$\\Delta$ belongs to $KK^n(A \\otimes \\EuScript{D} A,\\mathbb{C})$ for $n$ even or odd. We\nwill discuss the odd type of duality here. However, in connection with\nthe Novikov Conjecture, ~\\cite{kaminker-p1}, and physics, ~\\cite{connes:book}, the even\ntype naturally appears. \n\n\\section{Criterion for duality classes}\n\nIn this section we will present a technical result, Proposition ~\\ref{hyp}, which gives a criterion for when two classes $\\Delta$ and\n$\\delta$ yield duality isomorphisms. This is essentially the same as\nthe condition given by Connes, ~\\cite[p. 588]{connes:book}, except\nthat our duality is in the odd case and this requires adjusting the\narguments for Bott periodicity. This technicality is actually what\nallows us to obtain the duality isomorphisms in the case of shifts of\nfinite type. \n\nThus, we shall give usable conditions under which $\\Delta_* \\circ \\delta_* = 1$\nand $\\delta_* \\circ \\Delta_* = 1$. This breaks into two parts. The first is an\nuncoupling step and the second is a type of cancellation. In the\nfollowing sections we apply this to the case of Cuntz-Krieger\nalgebras.\n\nWe will first prove in detail that $\\delta_0 \\Delta_0 = 1_{K_0(A)}$.\nThe statement, $\\delta_1 \\Delta_1 = 1_{K_1(A)}$, follows in a similar\nmanner. We then sketch the proof that $ \\Delta_0 \\delta_0 =\n1_{K_0(B)}$. 
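As orientation for the Cuntz-Krieger case treated later, it may help to recall concretely which groups the duality matches up. By Cuntz's computation (a standard fact, not rederived in this paper) $K_0(O_A) \cong \mathbb{Z}^n/(1-A^t)\mathbb{Z}^n$, so these groups can be read off mechanically from the Smith normal form of $1-A^t$; the following sketch (our choice of examples, and of sympy as a tool) illustrates this:

```python
from sympy import Matrix, eye
from sympy.polys.domains import ZZ
from sympy.matrices.normalforms import smith_normal_form

def k0_invariant_factors(A):
    """Invariant factors of coker(1 - A^t) = Z^n / (1 - A^t) Z^n.

    By Cuntz's computation this cokernel is K_0(O_A) for an aperiodic
    0-1 matrix A: a diagonal entry d > 1 contributes a Z/d summand,
    d = 1 contributes nothing, and d = 0 a free Z summand.
    """
    D = smith_normal_form(eye(A.rows) - A.T, domain=ZZ)
    return [abs(D[i, i]) for i in range(D.rows)]

# Full 3-shift: O_A is the Cuntz algebra O_3, whose K_0 is Z/2.
A3 = Matrix(3, 3, lambda i, j: 1)
print(k0_invariant_factors(A3))    # -> [1, 1, 2], i.e. K_0 = Z/2

# Golden-mean shift: det(1 - A^t) = -1, so K_0(O_A) = 0.
A_gm = Matrix([[1, 1], [1, 0]])
print(k0_invariant_factors(A_gm))  # -> [1, 1]
```

Replacing $A$ by $A^T$ gives the corresponding group for $O_{A^T}$; the criterion below is what identifies one side's K-theory with the other side's K-homology.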
To start with we will perform the uncoupling step. Let $x\n\\in K_0(A) = KK(\\mathbb{C},A)$. Then we have\n\\begin{equation*} \n \\delta_0 \\Delta_0 (x) = \\beta \\otimes _{\\mathcal{S} \\otimes \\mathcal{S}} (\\delta \\otimes_{B}\n (x \\otimes_{A} \\Delta)).\n\\end{equation*}\nConsider, first, the factor $(\\delta \\otimes_{B} (x \\otimes_{A}\n\\Delta))$. We have\n\\begin{align*}\n (\\delta \\otimes_{B} (x \\otimes_{A} \\Delta)) &= \\tau_{\\mathcal{S}} (\\delta) \\otimes (\\tau^{A}\n \\tau_{\\mathcal{S}} \\tau_{B} (x) \\otimes \\tau^{A} (\\Delta)) \\\\ &= \\tau_{\\mathcal{S}}\n (\\delta) \\otimes (\\tau^{A} \\tau_{\\mathcal{S}} \\tau_{B} (x)) \\otimes \\tau^{A} (\\Delta).\n\\end{align*}\nNow, a direct computation yields that\n\\begin{align*}\n \\tau_{\\mathcal{S}} (\\delta) \\otimes (\\tau^{A} \\tau_{\\mathcal{S}} \\tau_{B} (x)) \\otimes \\tau^{A}\n (\\Delta) &= (\\tau_{\\mathcal{S}} \\tau^{\\mathcal{S}} (x) \\otimes {\\sigma_{12} \\sigma^{24}} \\tau^{A}\n \\tau^{\\mathcal{S}} (\\delta)) \\otimes \\tau^{A} (\\Delta) \\\\ &= \\tau_{\\mathcal{S}} \\tau^{\\mathcal{S}}\n (x) \\otimes ({\\sigma_{12} \\sigma^{24}} \\tau^{A} \\tau^{\\mathcal{S}} (\\delta)) \\otimes \\tau^{A}\n (\\Delta).\n\\end{align*}\nPutting $\\beta$ back into the product and simplifying, one obtains\n\\begin{equation*} \n \\beta \\otimes _{\\mathcal{S} \\otimes \\mathcal{S}} (x \\otimes_{A} (\\delta \\otimes_B \\Delta)) = x \\otimes_A\n (\\beta \\otimes _{\\mathcal{S} \\otimes \\mathcal{S}} (\\delta \\otimes_B \\Delta)).\n\\end{equation*}\nThis accomplishes the uncoupling.\n\n\\begin{prop}\nOne has $\\delta_0 \\Delta_0 (x) = x \\otimes_A (\\beta \\otimes _{\\mathcal{S} \\otimes \\mathcal{S}}\n(\\delta \\otimes_B \\Delta))$.\n\\end{prop}\nWhat one would hope is that\n\\begin{equation} \n\\label{hope}\n\\beta \\otimes _{\\mathcal{S} \\otimes \\mathcal{S}} (\\delta \\otimes_B \\Delta) = 1_A \\in KK(A,A),\n\\end{equation}\nthus yielding\n\\begin{equation*}\n\\delta_0 \\Delta_0 (x) = 
x.\n\\end{equation*}\nIndeed, if $\\delta \\otimes_B \\Delta = \\tau^A (\\mathcal{T}_0)$, then $\\beta \\otimes_{\\mathcal{S} \\otimes\n\\mathcal{S}} \\tau^A (\\mathcal{T}_0) = 1_A$ by Bott periodicity. However, this need not be the case. This is because $\\delta$ and\n$\\Delta$ behave like K-theory fundamental classes and may differ by a\nunit from ones which would yield ~\\eqref{hope}. There is a way to\ncompensate for this which we address next.\n\n\\begin{prop}\n\\label{hyp}\nSuppose that there are automorphisms $\\Theta_A \\colon A \\otimes \\mathcal{S} \\to A \\otimes\n\\mathcal{S}$ and $\\Theta_B \\colon B \\otimes \\mathcal{S} \\to B \\otimes \\mathcal{S}$ such that \n\\begin{align*}\n (\\Theta_A )_i \\colon K_i(A \\otimes \\mathcal{S}) \\to K_i(A \\otimes \\mathcal{S}) \\\\ (\\Theta_B )_i\n \\colon K_i(B \\otimes \\mathcal{S}) \\to K_i(B \\otimes \\mathcal{S})\n\\end{align*}\nare the identity map, for $i= 0,1$, and, further,\n\\begin{align}\n\\label{thiii}\n\\sigma_{12}(\\delta \\otimes_B \\sigma_{12}(\\Delta)) &= \\Theta_A \\otimes_{A \\otimes \\mathcal{S}} \\tau^A\n(\\mathcal{T}_0) \\\\\n\\sigma_{12}(\\delta \\otimes_A \\sigma_{12}(\\Delta)) &= \\Theta_B \\otimes_{B \\otimes \\mathcal{S}} \\tau^B\n(\\mathcal{T}_0). \n\\end{align}\nThen,\n\\begin{equation*} \n\\delta_i \\Delta_i \\colon K_i(A) \\to K_i(A)\n\\end{equation*}\nis the identity for $i=0,1$.\n\\end{prop}\n\\begin{proof}\nWe will give the proof for $\\delta_0 \\Delta_0$, the other case being\nsimilar.\nCondition ~\\eqref{thiii} states that \n\\begin{equation*} \n\\sigma_{12} \\tau_{\\mathcal{S}} \\tau_{A} (\\delta) \\otimes (\\tau_{A} (\\sigma_{12}(\\Delta))) =\n\\tau^{\\mathcal{S}} (\\Theta_A ) \\otimes \\tau^{A} (\\mathcal{T}_0).\n\\end{equation*}\nThus, one has\n\\begin{equation*} \n\\beta \\otimes _{\\mathcal{S} \\otimes \\mathcal{S}} (\\delta \\otimes_B (\\sigma_{12}(\\Delta))) = \\tau^A (\\beta) \\otimes\n\\tau_{\\mathcal{S}}(\\Theta_A) \\otimes \\tau^A (\\mathcal{T}_0). 
\n\\end{equation*}\nNow, \n\\begin{align*}\n\\tau^A(\\beta) \\otimes \\tau_{\\mathcal{S}} (\\Theta_A) &= (\\Theta_A)_* (\\tau^A(\\beta)) \\\\\n&= \\tau^A (\\beta),\n\\end{align*}\nso we obtain\n\\begin{align*}\n\\beta \\otimes _{\\mathcal{S} \\otimes \\mathcal{S}} (\\delta \\otimes_B \\sigma_{12}(\\Delta)) &= \\tau^A (\\beta) \\otimes\n\\tau^A (\\mathcal{T}_0) \\\\\n&= 1_A,\n\\end{align*}\nwhich yields the desired result.\n\\end{proof}\n\nFor the composition $\\Delta_* \\delta_*$ we have a similar result.\n\\begin{prop}\nUnder the hypothesis of Proposition ~\\ref{hyp}, we have that \n\\begin{equation} \n\\Delta_i \\delta_i \\colon K^{i+1}(B) \\to K^{i+1}(B)\n\\end{equation}\nis the identity for $i=0,1$.\n\\end{prop}\n\\begin{proof}\nThe proof is obtained from the previous one by making obvious changes.\n\\end{proof}\n\nThe other cases follow in the same way. Thus, showing that one has a\nduality between algebras reduces to constructing the maps $\\Theta_A$\nand $\\Theta_B $ satisfying the conditions above. In the next two sections\nwe will do this for the case of the stable and unstable Ruelle\nalgebras associated to a subshift of finite type. \n\n\\section{Construction of duality classes for shifts of finite type}\n\nIn this section we will construct the classes in KK-theory needed to\nexhibit the duality between $O_A \\otimes \\mathcal{K}$ and $O_{A^T} \\otimes \\mathcal{K}$. Let $A$\nbe an $n \\times n$ matrix with entries which are all zero or one. We\nassume that $A$ has no row or column consisting entirely of\nzeros and that the associated shift space is a Cantor set. 
\n\nThe Cuntz-Krieger algebra, $O_A$, is the universal $C^*$-algebra\ngenerated by partial isometries $s_1, \\dots , s_n$ satisfying \n\\begin{enumerate}[i)]\n\\item the projections $s_1 s_{1}^{*}, \\dots , s_n s_{n}^{*}$ are\npairwise orthogonal and add up to the identity of $O_A$,\n\\item for $k = 1, \\dots ,n$ one has\n\\begin{equation} \ns_{k}^{*} s_k = \\sum_i A_{ki} s_i s_{i}^{*}.\n\\end{equation}\n\\end{enumerate}\nThe condition above, that the shift space be a Cantor set, guarantees\nthat the algebra described does not depend on the choice of the\npartial isometries, ~\\cite{cuntz-k1}.\nIf $A_{ij} = 1$ for all $i,j$, then the algebra $O_A$ is denoted\n$O_n$.\n\nIn a similar manner we consider $O_{A^T}$, with generators $t_1,\n\\dots, t_n$ satisfying\n\\begin{equation} \nt_{k}^{*} t_k = \\sum_i A_{ik} t_i t_{i}^{*}\n\\end{equation}\nfor $k = 1, \\dots ,n$.\n\nOur aim in this section is to explicitly construct the elements\n\\begin{equation*}\n\\delta \\in KK^1(\\mathbb{C}, O_A \\otimes O_{A^T})\n\\end{equation*}\nand \n\\begin{equation*} \n\\Delta \\in KK^1(O_A \\otimes O_{A^T}, \\mathbb{C})\n\\end{equation*}\nwhich are needed to show that $O_A$ and $O_{A^T}$ are dual.\n\nThe construction of $\\delta$ is the easier of the two (cf.~\\cite{cuntz-k2}).\nLet\n\\begin{equation} \nw = \\sum_{i=1}^{n} s_{i}^{*} \\otimes t_i \\in O_A \\otimes O_{A^T}.\n\\end{equation}\nThen one has\n\\begin{equation*} \nw^* w = w w^* = \\sum_{i,j} A_{ij} s_j s_{j}^{*} \\otimes t_i t_{i}^{*}.\n\\end{equation*}\nWe let $\\bar w \\colon C(S^1) \\to O_A \\otimes O_{A^T}$ denote both the\n(non-unital) map defined by\n\\begin{equation} \n\\bar w (z) = w\n\\end{equation}\nas well as its restriction to $C_0(\\mathbb{R}) \\subseteq C(S^1)$. 
\n\\begin{definition}\nLet $\\delta \\in KK^1(\\mathbb{C}, O_A \\otimes O_{A^T})$ be the element determined by\nthe homomorphism $\\bar w$.\n\\end{definition}\nThe element $\\Delta$ is constructed using the full Fock space of a\nfinite-dimensional Hilbert space. (For related constructions see the papers\nof D. Evans and D. Voiculescu, ~\\cite{voiculescu,evans}.)\n\nLet $\\mathcal{H}$ denote an $n$-dimensional Hilbert space with orthonormal basis\n$\\xi_1, \\dots, \\xi_n$. Let $\\mathcal{H}^{\\otimes m} = \\mathcal{H} \\otimes \\cdots \\otimes \\mathcal{H}$ be the\n$m$-fold tensor product of $\\mathcal{H}$ and let $\\mathcal{H}_0$ be a one-dimensional\nHilbert space with unit vector $\\Omega$. Then the full Fock space of $\\mathcal{H}$,\n$\\mathcal{F}$, is defined to be \n\\begin{equation*} \n\\mathcal{F} = \\mathcal{H}_0 \\oplus \\Big(\\bigoplus_{m=1}^{\\infty} \\mathcal{H}^{\\otimes m}\\Big).\n\\end{equation*}\nThere is a natural orthonormal basis for $\\mathcal{F}$,\n\\begin{equation*} \n\\{\\Omega,\\ \\xi_{i_1}\\otimes \\cdots \\otimes \\xi_{i_m} \\mid m=1,2,\\dots,\\ 1\\leq i_j\n\\leq n\\}. \n\\end{equation*}\nDefine the left and right creation operators, $L_1, \\dots , L_n$ and\n$R_1, \\dots ,R_n$, on $\\mathcal{F}$ by \n\\begin{equation*} \nL_k \\Omega = \\xi_k = R_k \\Omega\n\\end{equation*}\nand\n\\begin{align}\nL_k(\\xi_{i_1}\\otimes \\cdots \\otimes \\xi_{i_m} ) &= \\xi_k \\otimes \\xi_{i_1}\\otimes\n\\cdots \\otimes \\xi_{i_m} \\\\\nR_k(\\xi_{i_1}\\otimes \\cdots \\otimes \\xi_{i_m} ) &= \\xi_{i_1}\\otimes \\cdots \\otimes\n\\xi_{i_m} \\otimes \\xi_k.\n\\end{align}\nNext, we bring in the matrix $A$. Let $\\mathcal{F}_A \\subseteq \\mathcal{F}$ denote the\nclosed linear span of the vectors $\\Omega$ and those $\\xi_{i_1}\\otimes\n\\cdots \\otimes \\xi_{i_m} $ satisfying the condition that\n$A_{{i_j},{i_{j+1}}} = 1$ for all $j = 1, \\dots , m-1$. Let $P_A$\ndenote the orthogonal projection of $\\mathcal{F}$ onto $\\mathcal{F}_A$. 
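Concretely, the admissible-word basis of $\mathcal{F}_A$ can be enumerated directly from $A$: the number of basis words of length $m$ equals the sum of the entries of $A^{m-1}$. A minimal sketch (the golden-mean matrix below is our own choice of example, not one taken from the paper):

```python
import itertools

# Golden-mean shift: an illustrative 0-1 matrix A (our own example choice).
A = [[1, 1],
     [1, 0]]
n = len(A)

def words(m):
    """Admissible words (i_1,...,i_m) with A[i_j][i_{j+1}] = 1 for all j."""
    out = []
    for w in itertools.product(range(n), repeat=m):
        if all(A[w[j]][w[j + 1]] == 1 for j in range(m - 1)):
            out.append(w)
    return out

def mat_mult(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# The number of admissible words of length m equals the entry sum of A^(m-1).
P = [[int(i == j) for j in range(n)] for i in range(n)]  # A^0 = identity
for m in range(1, 7):
    assert len(words(m)) == sum(map(sum, P)), m
    P = mat_mult(P, A)
```

For this choice of $A$ the counts $2, 3, 5, 8, \dots$ follow the Fibonacci recursion, as one expects for the golden-mean shift.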
Let \n\\begin{align*}\nL_k^A &= P_A L_k P_A \\in \\mathcal{B}(\\mathcal{F}_A) \\\\\nR_k^A &= P_A R_k P_A \\in \\mathcal{B}(\\mathcal{F}_A)\n\\end{align*}\nfor $k= 1, \\dots, n$.\n\nIt is easily checked that one has the following formulas.\n\\begin{align*}\nL_k^A \\xi_{i_1}\\otimes \\cdots \\otimes \\xi_{i_m} &= A_{k,i_1} \\xi_k \\otimes \\xi_{i_1}\\otimes\n\\cdots \\otimes \\xi_{i_m} \\\\\nR_k^A \\xi_{i_1}\\otimes \\cdots \\otimes \\xi_{i_m} &= A_{i_m,k} \\xi_{i_1}\\otimes\n\\cdots \\otimes \\xi_{i_m} \\otimes \\xi_k \\\\\n(L_k^A)^* \\xi_{i_1}\\otimes \\cdots \\otimes \\xi_{i_m} &= A_{k,i_1} \\xi_{i_2}\\otimes\n\\cdots \\otimes \\xi_{i_m} \\\\ \n(R_k^A)^* \\xi_{i_1}\\otimes \\cdots \\otimes \\xi_{i_m} &= A_{i_m,k} \\xi_{i_1}\\otimes\n\\cdots \\otimes \\xi_{i_{m-1}}. \n\\end{align*}\nFrom this one easily obtains the following result.\n\\begin{prop}\n\\label{formulas}\nThe operators $R^A_k$ and $L^A_k$ are partial isometries and satisfy\n\\begin{enumerate}[i)]\n\\item $(L^A_k)^* L^A_k = \\sum_i A_{ki} L^A_i (L^A_i)^* + P_\\Omega$\n\\item $(R^A_k)^* R^A_k = \\sum_i A_{ik} R^A_i (R^A_i)^* + P_\\Omega$\n\\item $[L^A_k, R^A_l] = 0$\n\\item $[(L^A_k)^* , R^A_l] = \\delta_{kl} P_\\Omega$\n\\end{enumerate}\n\\end{prop}\nWe are now able to construct the element $\\Delta$. Let $\\mathcal{E} \\subseteq\n\\mathcal{B}(\\mathcal{F}_A)$ be the $C^*$-algebra generated by $\\{R^A_1, \\dots, R^A_n,\nL^A_1, \\dots , L^A_n\\}$. By Proposition~\\ref{formulas} the operator\n$P_\\Omega$, which is compact, is in $\\mathcal{E}$. It is easy to check that\nthere is no non-trivial $\\mathcal{E}$-invariant subspace of $\\mathcal{F}_A$. Thus, $\\mathcal{E}$\ncontains the compact operators, $\\mathcal{K}(\\mathcal{F}_A)$.\n\nModulo the ideal $\\mathcal{K}(\\mathcal{F}_A)$ the elements $L^A_1, \\dots , L^A_n$ and\n$R^A_1, \\dots, R^A_n$ satisfy the relations for $O_A$ and $O_{A^T}$\nrespectively. Moreover, the $L^A_i$'s and the $R^A_j$'s commute\nmodulo $\\mathcal{K}(\\mathcal{F}_A)$. 
It follows that the $C^*$-algebra $\\mathcal{E} \/ \\mathcal{K}(\\mathcal{F}_A)$\nis a quotient of $O_A \\otimes O_{A^T}$. In fact they are isomorphic. This\nfollows since both $O_A$ and $O_{A^T}$ are nuclear and the ideal\nstructure of their tensor product may be completely described in terms\nof the ideals of $O_A$ and $O_{A^T}$. These, in turn, have been\ncompletely described in ~\\cite{cuntz-k2}. It is then straightforward to\nverify that the generators of the ideals of $O_A \\otimes O_{A^T}$ give rise\nto non-compact operators (via the $L^A_k$ and $R^A_k$) and thus $\\mathcal{E}\n\/\\mathcal{K}(\\mathcal{F}_A) \\cong O_A \\otimes O_{A^T}$.\n\\begin{definition}\nLet $\\Delta \\in KK^1(O_A \\otimes O_{A^T}, \\mathbb{C})$ be the class determined by the\nexact sequence\n\\begin{equation} \n\\begin{CD}\n0 @>>> \\mathcal{K}(\\mathcal{F}_A) @>>> \\mathcal{E} @>{\\pi_{A}}>> O_A \\otimes O_{A^T} @>>> 0.\n\\end{CD}\n\\end{equation}\n\\end{definition}\nNote that one has\n\\begin{equation*} \n\\pi_A (R^A_k) = 1 \\otimes t_k\n\\end{equation*}\nand \n\\begin{equation*} \n\\pi_A (L^A_k) = s_k \\otimes 1.\n\\end{equation*}\n\n\\section{Duality for Cuntz-Krieger algebras}\nIn this section we will show that the duality classes constructed in\nthe previous section actually implement a duality isomorphism for the\nalgebras $O_A$ and $O_{A^T}$. According to Proposition ~\\ref{hyp}, it\nwill be sufficient to construct homomorphisms\n\\begin{align}\n\\Theta_{O_A} \\colon O_A \\otimes \\mathcal{S} \\to O_A \\otimes \\mathcal{S} \\\\\n\\Theta_{O_{A^T}} \\colon O_{A^T} \\otimes \\mathcal{S} \\to O_{A^T} \\otimes \\mathcal{S}\n\\end{align}\nwhich satisfy the conditions stated there. 
That is, we\nmust show that $\\Theta_{O_A}$ and $\\Theta_{O_{A^T}}$ induce the identity\nhomomorphism on K-theory and satisfy the second condition in\nProposition ~\\ref{hyp} which states\n\\begin{align*}\n\\sigma_{12}(\\delta \\otimes_{O_{A^T}} \\sigma_{12}(\\Delta)) &= \\Theta_{O_A} \\otimes_{{O_A} \\otimes \\mathcal{S}} \\tau^{O_A}\n(\\mathcal{T}_0) \\\\\n\\sigma_{12}(\\delta \\otimes_{O_A} \\sigma_{12}(\\Delta)) &= \\Theta_{O_{A^T}} \\otimes_{{O_{A^T}} \\otimes \\mathcal{S}} \\tau^{O_{A^T}}\n(\\mathcal{T}_0).\n\\end{align*}\nWe will work out the details only for $\\Theta_{O_A}$, the other case\nbeing similar.\n\nTo define $\\Theta_{O_A}$ we first set\n\\begin{equation*}\n\\bar{\\Theta} \\colon O_A \\otimes C(S^1) \\to O_A \\otimes C(S^1)\n\\end{equation*}\nby\n\\begin{align*}\n\\bar{\\Theta} (1 \\otimes z) = 1 \\otimes z \\\\\n\\bar{\\Theta} (s_i \\otimes 1) = s_i \\otimes z.\n\\end{align*}\nThen $\\bar{\\Theta} $ extends to an automorphism of $O_A \\otimes\nC(S^1)$, as follows from the universal property of $O_A$.\nThe diagram\n\\begin{equation*}\n\\begin{CD}\nO_A \\otimes C(S^1) @>{\\bar{\\Theta}}>> O_A \\otimes C(S^1)\\\\\n@V{1_{O_A} \\otimes \\pi}VV @V{1_{O_A} \\otimes \\pi}VV \\\\\nO_A @>id>> O_A\n\\end{CD}\n\\end{equation*}\ncommutes, where $\\pi \\colon C(S^1) \\to \\mathbb{C}$ is defined by $\\pi (z) =\n1$. It follows that we may define $\\Theta_{O_A} = \\bar{\\Theta} |\n\\ker(1_{O_A} \\otimes \\pi)$. It is\nan automorphism of $O_A \\otimes \\mathcal{S}$. We now must show that\n$\\Theta_{O_A}$ satisfies the necessary conditions.\n\\begin{theorem}\nThe maps\n\\begin{equation*}\n{\\Theta_{O_A}}_* \\colon K_i(O_A \\otimes \\mathcal{S}) \\to K_i(O_A \\otimes \\mathcal{S})\n\\end{equation*}\nare the identity for $i = 0, 1$.\n\\end{theorem}\n\\begin{proof}\nRecall that\n\\begin{equation*}\nO_A \\otimes \\mathcal{K} \\cong \\bar F_A \\rtimes_{\\sigma_A} \\mathbb{Z}\n\\end{equation*}\nwhere $\\bar F_A$ is a stable $AF$-algebra with automorphism\n$\\sigma_A$. 
In this situation, $O_A$ is actually a full corner in\n$\\bar O_A = \\bar F_A \\rtimes_{\\sigma_A} \\mathbb{Z}$ and compressing $\\bar F_A$ to this corner yields $F_A \\subseteq O_A$ which is the\nclosure of the ``balanced words'' in the $s_i$'s as described in\n~\\cite{cuntz-k1}. Observe that the restriction of $\\bar{\\Theta} $ to\n$F_A \\otimes C(S^1)$ is the identity. We will apply the\nPimsner-Voiculescu exact sequence to compute $K_*(O_A \\otimes \\mathcal{S})$,\nmaking necessary modifications since $\\bar F_A$ is not unital, and then\nstudy ${\\Theta_{O_A}}_*$.\n\nLet $B$ denote the multiplier algebra of $\\bar F_A \\otimes \\mathcal{S} \\otimes \\mathcal{K}$\nwhere $\\mathcal{K} = \\mathcal{K}(l^2(\\mathbb{N}))$. Let $e_{ij}$ denote the standard matrix units in\n$\\mathcal{K}$. Define $\\rho \\colon \\bar F_A \\otimes \\mathcal{S} \\to B$ by\n\\begin{equation*}\n\\rho (a \\otimes f) = \\sum_{i \\in \\mathbb{N}} \\sigma^{i}_A (a) \\otimes f \\otimes e_{ii}\n\\end{equation*}\nwhere the sum is taken in the strict topology. Let $S$ denote the\nunilateral shift on $\\ell^2(\\mathbb{N})$. Let $D$ denote the\n$C^*$-algebra generated by $\\bar F_A \\otimes \\mathcal{S} \\otimes \\mathcal{K}$, $1 \\otimes 1 \\otimes S$\nand $\\{ \\rho(a \\otimes f) | f \\in \\mathcal{S}, a \\in \\bar F_A\\}$. Let $D_0$ be\nthe ideal in $D$ generated by $\\bar F_A \\otimes \\mathcal{S} \\otimes \\mathcal{K}$ and $\\{ \\rho(a\n\\otimes f) | f \\in \\mathcal{S}, a \\in \\bar F_A\\}$. 
There is an exact sequence\n\\begin{equation*}\n0 \\to \\bar F_A \\otimes \\mathcal{S} \\otimes \\mathcal{K} \\to D_0 \\to \\mathcal{S} \\otimes (\\bar F_A \\rtimes \\mathbb{Z})\n\\to 0.\n\\end{equation*}\nMoreover, the two maps\n\\begin{equation*}\nj \\colon \\bar F_A \\otimes \\mathcal{S} \\to \\bar F_A \\otimes \\mathcal{S} \\otimes \\mathcal{K}\n\\end{equation*}\ndefined by $j(a \\otimes f) = a \\otimes f \\otimes e_{11}$\nand\n\\begin{equation*}\n\\rho \\colon \\bar F_A \\otimes \\mathcal{S} \\to D\n\\end{equation*}\nboth induce isomorphisms on K-theory.\n\nFinally, we have\n\\begin{equation}\nK_0(\\bar F_A \\otimes \\mathcal{S}) \\cong K_1(\\bar F_A)= 0\n\\end{equation}\nsince $\\bar F_A$ is an AF-algebra. Putting this together, we obtain\nthe Pimsner-Voiculescu sequence for $\\bar O_A \\otimes \\mathcal{S}$:\n\\begin{equation*}\n0 \\to K_0(\\bar O_A \\otimes \\mathcal{S}) \\to K_1(\\bar F_A \\otimes \\mathcal{S}) \\to K_1(\\bar F_A \\otimes\n\\mathcal{S}) \\to K_1(\\bar O_A \\otimes \\mathcal{S}) \\to 0\n\\end{equation*}\n\nWe define an automorphism $\\tilde \\Theta$ of $D$ by\n\\begin{equation}\n\\tilde \\Theta = \\mathrm{Ad} (\\sum_{i \\in \\mathbb{N}} 1 \\otimes z^i \\otimes e_{ii})\n\\end{equation}\nwhere, again, the sum is in the strict topology. Notice that $\\tilde\n\\Theta \\circ \\rho = \\rho$ and $\\tilde \\Theta | (\\bar F_A \\otimes \\mathcal{S} \\otimes \\mathcal{K})$\nis approximately inner and hence trivial on K-theory. Also observe\nthat $\\tilde \\Theta (\\bar F_A \\otimes \\mathcal{S} \\otimes \\mathcal{K}) = \\bar F_A \\otimes \\mathcal{S} \\otimes\n\\mathcal{K}$ and that the automorphism of the quotient of $D_0$ by $\\bar F_A\n\\otimes \\mathcal{S} \\otimes \\mathcal{K}$ induced by $\\tilde \\Theta $ is precisely $\\Theta_{O_A}$,\nafter identifying this quotient with $\\bar O_A \\otimes \\mathcal{S}$ and restricting to\n$O_A \\otimes \\mathcal{S} \\subseteq \\bar O_A \\otimes \\mathcal{S}$. 
We have a commutative diagram\n\\begin{equation*}\n\\begin{CD}\n0 @>>> K_0(\\bar O_A \\otimes \\mathcal{S}) @>>> K_1(\\bar F_A \\otimes \\mathcal{S}) @>>> K_1(\\bar F_A\n\\otimes \\mathcal{S}) @>>> K_1(\\bar O_A \\otimes \\mathcal{S}) @>>>0 \\\\\n@. @VV{{\\Theta_{O_A}}_*}V @VV{\\tilde \\Theta_*}V @VV{\\tilde \\Theta_*}V\n@VV{{\\Theta_{O_A}}_*}V\\\\\n0 @>>> K_0(\\bar O_A \\otimes \\mathcal{S}) @>>> K_1(\\bar F_A \\otimes \\mathcal{S}) @>>> K_1(\\bar F_A\n\\otimes \\mathcal{S}) @>>> K_1(\\bar O_A \\otimes \\mathcal{S}) @>>>0\n\\end{CD}\n\\end{equation*}\nFrom the observations above, we have both maps $\\tilde \\Theta_* = id$,\nand it follows that ${\\Theta_{O_A}}_* $ is the identity.\n\\end{proof}\nIt remains for us to verify that condition\n~\\eqref{thiii} is satisfied. To that end we observe first that\n\\begin{equation*}\n\\sigma_{12}(\\delta \\otimes_{O_{A^T}} \\sigma_{12}(\\Delta)) = \\Theta_{O_A} \\otimes_{{O_A} \\otimes \\mathcal{S}} \\tau^{O_A}\n(\\mathcal{T}_0)\n\\end{equation*}\nis equivalent to\n\\begin{equation}\n\\label{81}\n\\tau_{\\mathcal{S}}((\\Theta_{O_A})^{-1}) \\otimes \\sigma_{12} \\tau_{\\mathcal{S}} \\tau_{O_A}\n(\\delta) \\otimes \\tau^{O_A} (\\sigma_{12}(\\Delta) ) = \\tau^{O_A}\n(\\mathcal{T}_0).\n\\end{equation}\nThus, we will prove the latter statement.\n\n\nNow, $\\tau^{O_A}(\\sigma_{12}(\\Delta) ) \\in KK^1(O_A \\otimes O_{A^T} \\otimes O_A, O_A)$ was obtained from the\nextension\n\\begin{equation*}\n0 \\rar{}{} \\mathcal{K} \\otimes O_A \\rar{}{} \\mathcal{E} \\otimes O_A \\rar{\\pi_A \\otimes 1_{O_A}}{} O_A \\otimes O_{A^T} \\otimes O_A \\rar{}{} 0.\n\\end{equation*}\nMoreover, the remaining term\n\\begin{equation*}\n\\tau_{\\mathcal{S}}((\\Theta_{O_A})^{-1}) \\otimes \\sigma_{12} \\tau_{\\mathcal{S}} \\tau_{O_A} (\\delta)\n\\end{equation*}\nactually yields a $*$-homomorphism from $ O_A \\otimes \\mathcal{S} \\otimes \\mathcal{S}$ to $\nO_A \\otimes O_{A^T} \\otimes O_A \\otimes \\mathcal{S}$. 
Thus, the left side of ~\\eqref{81} is\nrepresented by applying $\\tau_{\\mathcal{S}}$ to the element represented by the top row of the following diagram\n\\begin{equation*}\n\\begin{CD}\n0 @>>> \\mathcal{K} \\otimes O_A @>>> \\mathcal{E}' @>>> O_A \\otimes \\mathcal{S} @>>>0 \\\\\n@. @VVV @VVV @VV{1_{O_A} \\otimes i}V \\\\\n0 @>>> \\mathcal{K} \\otimes O_A @>>> \\mathcal{E}'' @>>> O_A \\otimes C(S^1) @>>>0 \\\\\n@. @VVV @VVV @VV{\\bar \\alpha}V \\\\\n0 @>>> \\mathcal{K} \\otimes O_A @>>> \\mathcal{E} \\otimes O_A @>>> O_A \\otimes O_{A^T} \\otimes O_A @>>> 0\n\\end{CD}\n\\end{equation*}\nwhere $\\alpha = (\\Theta_{O_A})^{-1} \\otimes \\tau_{O_A} (\\delta) = \\bar\n\\alpha \\circ (1_{O_A} \\otimes i)$, and $\\alpha \\otimes 1_{\\mathcal{S}} = \\tau_{\\mathcal{S}}((\\Theta_{O_A})^{-1}) \\otimes \\sigma_{12} \\tau_{\\mathcal{S}} \\tau_{O_A} (\\delta)$.\n\n\nThe crucial step is to untwist the middle row by finding an isomorphism $\\mathcal{E}'' \\cong \\mathcal{T} \\otimes O_A$ so\nthat the following diagram commutes\n\\begin{equation*}\n\\begin{CD}\n0 @>>> \\mathcal{K} \\otimes O_A @>>> \\mathcal{E}'' @>>> O_A \\otimes C(S^1) @>>> 0 \\\\\n@. @V{=}VV @V{\\cong}VV @V{\\sigma_{12}}VV \\\\\n0 @>>> \\mathcal{K} \\otimes O_A @>>> \\mathcal{T} \\otimes O_A @>{\\pi_{\\mathcal{T}} \\otimes 1_{O_A}}>> C(S^1) \\otimes O_A @>>> 0,\n\\end{CD}\n\\end{equation*}\nwhere $\\mathcal{T}$ is the Toeplitz extension.\n\nAssuming this, the proof can be completed as follows. 
We have\n\\begin{equation*}\n\\tau^{O_A}(\\mathcal{T}) = \\sigma_{12} \\bar \\alpha^* (\\tau^{O_A} (\\sigma_{12}(\\Delta) )).\n\\end{equation*}\n Hence, one has\n\\begin{align*}\n\\tau^{O_A}(\\mathcal{T}_0) &= (i \\otimes 1_{O_A})^* (\\tau^{O_A}(\\mathcal{T})) \\\\\n&= (i \\otimes 1_{O_A})^* \\sigma_{12} \\bar \\alpha^* (\\tau^{O_A}(\\sigma_{12}(\\Delta) )) \\\\\n&= \\sigma_{12} (1_{O_A} \\otimes i)^* \\bar \\alpha^* (\\tau^{O_A}(\\sigma_{12}(\\Delta) )) \\\\\n&= \\sigma_{12} \\alpha^* (\\tau^{O_A}(\\sigma_{12}(\\Delta) )).\n\\end{align*}\nThus, substituting in for $\\alpha$, we obtain\n\\begin{equation*}\n\\tau^{O_A}(\\mathcal{T}_0) = \\tau_{\\mathcal{S}} ((\\Theta_{O_A})^{-1}) \\otimes \\sigma_{12}\n\\tau_{\\mathcal{S}} \\tau_{O_A}(\\delta) \\otimes \\tau^{O_A}(\\sigma_{12}(\\Delta) ),\n\\end{equation*}\nwhich is the desired formula.\n\nWe now turn to the issue of obtaining the explicit isomorphism between\n$\\mathcal{E}''$ and $\\mathcal{T} \\otimes O_A$. For convenience, we will suppress the $A$ in our\nnotation from the elements such as ${R_i}^A$ and ${L_i}^A$. Define $W$ in $\\mathcal{E} \\otimes\nO_A$ by\n\\begin{equation*}\nW = \\sum_{i=1}^{n} R_i \\otimes {s_i}^*.\n\\end{equation*}\nWe will need two technical lemmas.\n\\begin{lem}\n\\label{43}\nOne has\n\\begin{enumerate}[i)]\n\\item $(\\pi \\otimes 1_{O_A}) (W) = \\bar \\alpha (1 \\otimes z)$.\n\\label{i}\n\\item $W^* W = \\sum_{i,j} A_{ji} R_j {R_j}^* \\otimes s_i {s_i}^* +\nP_{\\Omega} \\otimes 1$.\n\\item $[W^* , W] = P_{\\Omega} \\otimes 1$.\n\\item $(P_{\\Omega} \\otimes 1) W = 0$.\n\\item $[W, L_k \\otimes 1] = 0$ for $k = 1,\\dots,n$.\n\\item $[W^* , L_k \\otimes 1] = P_{\\Omega} \\otimes s_k$ for $k = 1,\\dots,n$.\n\\end{enumerate}\n\\end{lem}\n\\begin{proof}\nFor (\\ref{i}), one proceeds as follows. 
Note first that \n\\begin{align*}\n \\bar \\alpha ( 1 \\otimes z) &= \\sigma^{23} (\\bar \\Theta^{-1})^* (1\n \\otimes \\bar w (z)) \\\\\n&= \\sigma^{23} (1 \\otimes \\bar w (z)) \\\\\n&= \\sigma^{23} ( \\sum_i 1 \\otimes s_{i}^{*} \\otimes t_i) \\\\\n&= \\sum_i 1 \\otimes t_i \\otimes s_{i}^{*}.\n\\end{align*}\nMoreover, \n \\begin{align*}\n (\\pi \\otimes 1_{O_A})(W) &= (\\pi \\otimes 1_{O_A}) (\\sum_i R_i \\otimes\n s_{i}^{*}) \\\\\n &= \\sum_i \\pi(R_i) \\otimes s_{i}^{*} \\\\\n &= \\sum_i 1 \\otimes t_i \\otimes s_{i}^{*}.\n \\end{align*}\nThe remaining parts of the lemma can be verified in a routine manner.\n\\end{proof}\n\nThe remaining facts we need are incorporated into the following.\n\\begin{lem}\n\\label{44}\nLet $V_k = W^* (L_k \\otimes 1)$, for $k = 1, \\dots, n$. Then we have, for\neach $k$,\n\\begin{description}\n\\item[i)] $(\\pi \\otimes 1_{O_A}) (V_k) = \\bar \\alpha (s_k \\otimes 1)$,\n\\label{ii}\n\\item[ii)] $\\sum_j V_j V^{*}_{j} = W^* W$,\n\\item[iii)] $V_{k}^{*} V_k = \\sum_j A_{kj} V_j V^{*}_{j} $,\n\\item[iv)] $[W, V_k] = 0$,\n\\item[v)] $[W^* , V_k] = 0$.\n\\end{description}\n\\end{lem}\n\\begin{proof}\n As in the previous lemma, we will verify (\\ref{ii}) and leave the\n remaining parts of the proof to the reader, since they are\n essentially routine. 
For (\\ref{ii}), we check\n \\begin{align*}\n (\\pi \\otimes 1_{O_A}) (V_k) &= (\\pi \\otimes 1_{O_A})(W^*)(\\pi \\otimes\n 1_{O_A})(L_k \\otimes 1) \\\\\n &= \\bar \\alpha (1 \\otimes z) (s_k \\otimes 1 \\otimes 1),\n \\end{align*}\n and \n \\begin{align*}\n \\bar \\alpha (s_k \\otimes 1) &= \\sigma^{23} (1_{O_A} \\otimes \\bar w)\n (\\bar \\Theta^{-1})^* (s_k \\otimes 1) \\\\\n &= \\sigma^{23} (1_{O_A} \\otimes \\bar w) (s_k \\otimes 1) \\\\\n &= \\sigma^{23} (1_{O_A} \\otimes w) (s_k \\otimes 1 \\otimes 1) \\\\\n &= \\bar \\alpha (1 \\otimes z) (s_k \\otimes 1 \\otimes 1).\n \\end{align*}\n \\end{proof}\n\nNow we may define the isomorphism from $\\mathcal{T} \\otimes O_A$ to $\\mathcal{E} ''$. Let\n$S$ denote the unilateral shift. The required map is defined by\nsending $S \\otimes 1$ to $W$, and $1\n\\otimes s_k$ to $V_k$. Note that the unit of $\\mathcal{T} \\otimes O_A$ is mapped to\n$W^* W$ in $\\mathcal{E}''$. The fact that this assignment extends to a $*$-homomorphism\nfollows from the universal properties of $\\mathcal{T}$ and $O_A$. The fact\nthat it is onto follows from observing that $\\mathcal{E} ''$ is generated by $\\{\nW, V_1, \\dots , V_n \\}$, which is straightforward. Finally, the\nfact that the appropriate diagram commutes follows from Lemmas ~\\ref{44} and\n~\\ref{43}. 
This is done\n using asymptotic morphisms and uses the fact that locally the Smale\n space decomposes into a product of expanding and contracting sets.\n It would be very interesting to have a Fock space construction of\n the more general classes as well.\n\\item The duality result for Cuntz-Krieger algebras sheds some light\n on the computations of the K-theory of $O_A$'s as in\n ~\\cite{cuntz-k2}. Recall that if $A$ is an $n \\times n$ aperiodic\n matrix of $0$'s and $1$'s , then there are {\\em canonical}\n isomorphisms\n\\begin{align*}\nK_0(O_A) \\cong \\mathbb{Z}^n \/ (1-A^T) \\mathbb{Z}^n \\\\\nK_1(O_A) \\cong \\ker(1-A^T) \\\\\nK^0(O_A) \\cong \\ker(1-A) \\\\\nK^1(O_A) \\cong \\mathbb{Z}^n \/ (1-A) \\mathbb{Z}^n.\n\\end{align*}\nNote that $\\mathbb{Z}^n \/ (1-A) \\mathbb{Z}^n \\cong \\mathbb{Z}^n \/ (1-A^T) \\mathbb{Z}^n$ by the\nstructure theorem for finitely generated abelian groups, but the\nisomorphism is not natural.\nThe explanation for why one has $A^T$ in the formulas now comes from\nduality, since one has the diagram\n\\begin{equation*}\n\\begin{CD}\nK_0(O_A) @>\\cong>> K^1(O_{A^T}) \\\\\n@VVV @VVV \\\\\n\\mathbb{Z}^n \/ (1-A^T) \\mathbb{Z}^n @>{=}>> \\mathbb{Z}^n \/ (1-A^T) \\mathbb{Z}^n.\n\\end{CD}\n\\end{equation*}\n\\end{enumerate}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{INTRODUCTION }\n\nIn cosmological models inflation is realized by a slowly rolling scalar field, the so called inflaton, whose energy density dominates the early history Universe \\cite{Guth:1980zm,Linde:1981mu,Mukhanov:1981xt,Albrecht:1982wi}. \nAmong several suggestions regarding its origin, the economical scenario that this field can be identified with the \nStandard Model (SM) Higgs state $\\mathrm{h}$, has received considerable attention\\cite{Bezrukov:2007ep}. 
In this approach, the\n Higgs field drives inflation through its strong coupling to gravity, $\\upxi \\mathrm{h}^2 R$, where $R$ is the Ricci scalar and $\\upxi$\n is a dimensionless parameter that acquires a large value, $\\upxi\\gtrsim 10^4$. \n\nIn modern particle physics theories, cosmological inflation is usually described within the framework of supergravity or \nsuperstring grand unified theories (GUTs). In these theories the SM is embedded in a higher gauge symmetry and the field content, including \nthe Higgses, is incorporated in representations of the higher symmetry which includes the SM gauge group. In this context, \nseveral new facts and constraints should be taken into account. For instance, since new symmetry breaking stages are involved, \nthe Higgs sector is usually extended and alternative possibilities for identifying the inflaton emerge. In addition, the effective \npotential has a specific structure constrained from fundamental principles of the theory. In string theory effective models, for\n example, in a wide class of compactifications the scalar potential appears with a no-scale structure as in standard supergravity \ntheories \\cite{Cremmer:1983bf, Lahanas:1986uc}. In general, the scalar potential is a function of the various fields which enter in a complicated \nmanner through the superpotential $W$ and the K\\\"ahler potential $K$. Thus, a rather detailed investigation is required to determine \nthe conditions for slow-roll inflation and ensure a stable inflationary trajectory in such models.
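As a rough quantitative benchmark (our own numerical sketch, not a computation taken from this paper): in many no-scale constructions the inflationary direction has a Starobinsky-type potential, $V \propto (1-e^{-\sqrt{2/3}\,\phi})^2$ in reduced Planck units, and the standard slow-roll formulas then fix the observables once the number of e-folds is chosen (here $N=55$ is an assumption):

```python
import math

# Slow-roll sketch for a Starobinsky-like potential (reduced Planck units).
# The overall normalisation of V cancels in epsilon, eta, n_s and r.
a = math.sqrt(2.0 / 3.0)

def V(p):   return (1.0 - math.exp(-a * p)) ** 2
def dV(p):  return 2.0 * a * math.exp(-a * p) * (1.0 - math.exp(-a * p))
def d2V(p): return 2.0 * a * a * math.exp(-a * p) * (2.0 * math.exp(-a * p) - 1.0)

def eps(p): return 0.5 * (dV(p) / V(p)) ** 2   # first slow-roll parameter
def eta(p): return d2V(p) / V(p)               # second slow-roll parameter

# End of inflation: epsilon = 1 (epsilon decreases monotonically in phi here).
p_end = 0.5
while eps(p_end) > 1.0:
    p_end += 1e-4

def efolds(p):
    # N = integral of V/dV from p_end to p; V/dV = (exp(a p) - 1)/(2 a),
    # which integrates in closed form.
    g = lambda q: math.exp(a * q) / (2 * a * a) - q / (2 * a)
    return g(p) - g(p_end)

# Solve efolds(p*) = 55 by bisection.
lo, hi = p_end, 20.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if efolds(mid) < 55.0:
        lo = mid
    else:
        hi = mid
p_star = 0.5 * (lo + hi)

n_s = 1.0 - 6.0 * eps(p_star) + 2.0 * eta(p_star)  # spectral index
r = 16.0 * eps(p_star)                             # tensor-to-scalar ratio
```

With these assumptions one recovers the familiar estimates $n_s \simeq 1 - 2/N$ and $r \simeq 12/N^2$, i.e. $n_s$ near $0.96$ and $r$ of order a few times $10^{-3}$.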
Modifications of the basic no-scale K\\\"ahler potential and various choices for the superpotential have been\nstudied, leading to a number of different inflationary cases \\cite{Ellis:2013xoa}-\\cite{Romao:2017uwa}, while studies of inflation within supergravity in a model-independent way can be found in \\cite{Covi:2008cn, Hardeman:2010fh}.\n\n\nIn the present work we implement the scenario of Higgs inflation in a model based on the Pati-Salam gauge symmetry $SU(4)_{C}\\times SU(2)_L\n\\times SU(2)_R$ \\cite{Pati:1974yy} (denoted 4-2-2 for brevity). This model has well-known attractive features (see for example the \nrecent review \\cite{Pati:2017ysg}) and has been successfully rederived in superstring and D-brane \ntheories \\cite{Antoniadis:1988cm, Cvetic:2004ui, Anastasopoulos:2010ca, Cvetic:2015txa}. Early universe cosmology and inflationary predictions of the model (or its extensions) have been discussed previously in several works \\cite{Jeannerot:2000sv, Pallis:2011gr, Bryant:2016tzg}. Here we consider a supersymmetric version of the 4-2-2 model where the breaking down to the SM gauge group takes place in two steps. First $SU(4)$ breaks \nspontaneously at the usual supersymmetric GUT scale $M_{GUT}\\gtrsim 10^{16}$ GeV, down to the \\emph{left-right} group\\footnote{For a recent discussion on left-right models based on GUTs, see \\cite{Chakrabortty:2017mgi}. Inflation from an $SO(10)$ model with left-right intermediate symmetry is analysed in \\cite{Garg:2015mra}.} via the adjoint representation. Then, depending on the specific structure of the Higgs\nsector, the $SU(2)_R$ symmetry can break either at the GUT scale, i.e., simultaneously with $SU(4)$, or at some lower, intermediate energy scale. \nThis variety of possibilities is reflected in the effective field theory model, implying various interesting phenomenological \nconsequences.
Regarding the Higgs inflation scenario, in particular, the inflaton field can be identified with the neutral components of the $SU(2)_{R}$ doublet fields\nassociated with the intermediate scale symmetry breaking. In this work we will explore alternative possibilities to realise inflation \nwhere the inflaton is identified with the $SU(2)_{R}$ doublets. We also examine the case of inflation in the presence of the adjoint representation.\n\n\nThe layout of the paper is as follows. In section 2, we present a brief description of the 4-2-2 model, focusing on its particle content\nand the symmetry breaking pattern. In section 3 we present the superpotential and the emergent no-scale supergravity K\\\"ahler potential\nof the effective model. We derive the effective potential and analyse the predictions for inflation when either the $SU(2)_{R}$ doublets or the adjoint play the r\\^ole of the inflaton. We present our conclusions in section 4. \n\n\\section{DESCRIPTION OF THE MODEL}\nIn this section we highlight the basic ingredients of the model with gauge symmetry,\n\\be \\label{psgroup}\n SU(4)_{C}\\times{SU(2)_{L}}\\times{SU(2)_{R}}~\\cdot\n \\ee\n \\noindent This model unifies each family of quarks and leptons into two irreducible representations,\n $F_{i}$ and $\\bar{F}_{i}$ transforming as \\cite{King:1997ia}\n \\[F_{i}=(4,2,1)_{i}\\quad{\\text{and}}\\quad \\bar{F}_{i}=(\\overline{4},1,2)_{i}~,\\]\n\n\\noindent under the corresponding factors of the gauge group~(\\ref{psgroup}). Here the subscript $i$ ($i=1,2,3$) denotes the family index.
\n Note that $F+\\bar{F}$ comprise the $16$ of $SO(10)$, $16\\rightarrow{(4,2,1)+(\\overline{4},1,2)}$.\nThe explicit embedding of the SM matter fields, including the right-handed neutrino, is as follows:\n\\be\nF_{i}=\n\\begin{pmatrix} \nu_r & u_g & u_b & \\nu \\\\\nd_r & d_g & d_b & e\n\\end{pmatrix}_{i}\\quad{,}\\quad{\\bar{F}_{i}=\n\\begin{pmatrix} \nu^{c}_r & u^{c}_g & u^{c}_b & \\nu^{c} \\\\\nd^{c}_r & d^{c}_g & d^{c}_b & e^{c}\n\\end{pmatrix}_{i}}~,\n\\ee\n\n\n\\noindent where the subscripts $r,g,b$ denote colour indices.\n\n The symmetry breaking \n\\be\n SU(4)_{C}\\times{SU(2)_{R}}\\rightarrow{SU(3)_{C}\\times{U(1)_{Y}}}~,\n\\ee\n\n\n\\noindent is achieved by introducing two Higgs multiplets \n\n\\be\\label{HiggsofPS}\nH=(\\overline{4},1,2)=\n\\begin{pmatrix} \nu_{H}^{c} & u_{H}^{c} & u_{H}^{c} & \\nu_{H}^c \\\\\nd_{H}^{c} & d_{H}^{c} & d_{H}^{c} & e_{H}^{c}\n\\end{pmatrix}\\quad{,}\\quad{\\bar{H}=(4,1,2)=\n\\begin{pmatrix} \n\\ov{u}_{H}^{c} & \\ov{u}_{H}^{c} & \\ov{u}_{H}^{c} & \\ov{\\nu}_{H}^c \\\\\n\\ov{d}_{H}^{c} & \\ov{d}_{H}^{c} & \\ov{d}_{H}^{c} & \\ov{e}_{H}^{c}\n\\end{pmatrix}}\n\\ee\n which descend from the $16$ and $\\overline{16}$ of $SO(10)$ respectively. \n\n\nAn alternative way to break the gauge symmetry arises in the case where the adjoint \nscalar $\\Sigma=(15,1,1)$ is included in the spectrum.\n We parametrise $\\Sigma$ with a singlet scalar field $S$ \n\\ba \n\\Sigma\\equiv (15,1,1) &=&\n\\frac{S}{2\\sqrt{3}}\\left(\n\t\\begin{array}{cccc}\n\t\t1&0 & 0 &0\\\\\n\t\t0 &1& 0&0 \\\\\n\t\t0 & 0 & 1&0\\\\\n\t\t0 & 0 &0& -3\n\t\\end{array}\n\t\\right)~,\\label{Adj}\n\\ea\nwhich acquires a GUT scale vacuum expectation value (vev) $\\langle{S}\\rangle\\equiv\\upsilon\\simeq{3\\times{10^{16}}}$ GeV \n breaking $SU(4)\\to SU(3)\\times U(1)$. 
The breaking leads to the left-right symmetric group, \n $SU(3)_{C}\\times{SU(2)_{L}}\\times{SU(2)_{R}}\\times{U(1)_{B-L}}$, and the decomposition of the Higgs fields $H, \\bar{H}$ \n is as follows:\n\\begin{eqnarray}\\label{Hbreaking}\n\\begin{split}\nH(\\ov{4},1,2)&\\rightarrow{Q_{H}(\\ov{3},1,2)_{-1\/3}+L_{H}(1,1,2)_{1}}\\\\\n\\bar{H}(4,1,2)&\\rightarrow{\\ov{Q}_{H}(3,1,2)_{1\/3}+\\ov{L}_{H}(1,1,2)_{-1}}\n\\end{split}\n\\end{eqnarray}\n\n\n\n\n\\noindent where $Q_{H}=(u_{H}^{c}\\quad{d_{H}^{c}})^{T}$, $\\ov{Q}_{H}=(\\ov{u}_{H}^{c}\\quad{\\ov{d}_{H}^{c}})$ \nand $L_{H}=(\\nu_{H}^{c}\\quad{e_{H}^{c}})^{T}$, $\\ov{L}_{H}=(\\ov{\\nu}_{H}^{c}\\quad{\\ov{e}_{H}^{c}})$. \n\nThe right-handed doublets $L_{H},\\ov{L}_{H}$ acquire vevs along their neutral components ${\\nu}_{H}^c,\n \\ov{\\nu}_{H}^c$ and, as a result, break the\n$SU(2)_R$ symmetry at some scale $M_R$. In this way we obtain the symmetry breaking pattern~\\cite{Anastasopoulos:2010ca}:\n\\[\nSU(4)_{C}\\times{SU(2)_{R}}\\times{SU(2)_{L}}\\rightarrow{SU(3)_{C}\\times{U(1)_{B-L}}}\\times{SU(2)_{R}}\\times{SU(2)_{L}}\\to \n{SU(3)}\\times{SU(2)_{L}}\\times{U(1)_{Y}}.\n\\]\n The two scales $M_{GUT}$ and $M_R$ are not related to each other, and it is in principle possible to \n take $M_R$ at some lower scale, provided there is no conflict with observational data such as \n flavour-changing neutral currents and lepton or baryon number violation. \nRegarding the fast proton decay problem, in particular, in 4-2-2 models, due to the absence of the associated gauge bosons\nthere are no contributions from dimension-six (d-6) operators, and related issues from d-5 operators can be remedied with \nappropriate symmetries in the superpotential. 
\n\n\nThe remaining spectrum and its $SO(10)$ origin are as follows. The decomposition of the $10$ representation of $SO(10)$ \ngives a bidoublet and a sextet field, transforming under the 4-2-2 symmetry as follows\n\n\\be \n10\\rightarrow{h(1, 2, 2)+D_{6}(6,1,1)}~\\cdot \\label{10toHD}\n\\ee \n\n\\noindent\nThe two Higgs doublets of the minimal supersymmetric standard model (MSSM) descend from the bidoublet\n\n\\be\nh=(1,2,2)=\n\\begin{pmatrix}\nh_{2}^{+} & h_{1}^{0}\\\\\nh_{2}^{0} & h_{1}^{-}\n\\end{pmatrix}.\n\\ee\n\n\\noindent Also, the sextet of (\\ref{10toHD}) decomposes into a pair of coloured triplets: $D_{6}\\rightarrow{D_{3}(3,1,1)+\\overline{D}_{3}(\\ov{3},1,1)}$.\n\nCollectively we have the following SM assignments:\n\n\\begin{equation}\n\\begin{split}\nF&=(4,2,1)\\rightarrow Q(3,2,\\frac{1}{6})+L(1,2,-\\frac{1}{2})\\\\\n\\bar{F}&=(\\ov{4},1,2)\\rightarrow u^{c}(\\ov{3},1,-\\frac{2}{3})+d^{c}(\\ov{3},1,\\frac{1}{3})+e^{c}(1,1,1)+\\nu^{c}(1,1,0)\\\\\nh&=(1,2,2)\\rightarrow H_{u}(1,2,\\frac{1}{2})+H_{d}(1,2,-\\frac{1}{2})\\\\\nH&=(\\ov{4},1,2)\\rightarrow u^{c}_{H}(\\ov{3},1,-\\frac{2}{3})+d^{c}_{H}(\\ov{3},1,\\frac{1}{3})+e^{c}_{H}(1,1,1)+\\nu^{c}_{H}(1,1,0)\\\\\n\\bar{H}&=(4,1,2)\\rightarrow \\ov{u}^{c}_{H}(3,1,\\frac{2}{3})+\\ov{d}^{c}_{H}(3,1,-\\frac{1}{3})+\\ov{e}^{c}_{H}(1,1,-1)+\\ov{\\nu}^{c}_{H}(1,1,0)\\\\\nD_{6}&=(6,1,1)\\rightarrow{D_{3}(3,1,-\\frac{1}{3})+\\overline{D}_{3}(\\ov{3},1,\\frac{1}{3})}\n\\end{split}\n\\end{equation}\n\n\nFermions receive Dirac-type masses from a common tree-level invariant term, $F\\bar{F}h$, whilst right-handed (RH) neutrinos receive heavy Majorana contributions from\nnon-renormalisable terms, to be discussed in the next sections. 
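As a consistency sketch (our own cross-check, not part of the text), the hypercharges in the table above can be verified against the standard embedding $Y=T_{3R}+\\frac{B-L}{2}$; the $T_{3R}$ and $B-L$ assignments used below follow the usual conventions rather than being quoted from the text.

```python
from fractions import Fraction as F

# (B-L, T_3R, expected Y) for the SM components listed in the table,
# checked against the standard relation Y = T_3R + (B-L)/2
fields = {
    "Q":      (F(1, 3),  F(0),     F(1, 6)),
    "L":      (F(-1),    F(0),     F(-1, 2)),
    "u^c":    (F(-1, 3), F(-1, 2), F(-2, 3)),
    "d^c":    (F(-1, 3), F(1, 2),  F(1, 3)),
    "e^c":    (F(1),     F(1, 2),  F(1)),
    "nu^c":   (F(1),     F(-1, 2), F(0)),
    "H_u":    (F(0),     F(1, 2),  F(1, 2)),
    "H_d":    (F(0),     F(-1, 2), F(-1, 2)),
    "D_3":    (F(-2, 3), F(0),     F(-1, 3)),
    "Dbar_3": (F(2, 3),  F(0),     F(1, 3)),
}

# collect any entry whose tabulated Y disagrees with the embedding
mismatches = [name for name, (bl, t3r, y) in fields.items()
              if t3r + bl / 2 != y]
```

With these assignments the list of mismatches comes out empty, so the table is internally consistent with the standard hypercharge embedding.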
In addition, the colour triplets $d_{H}^{c}$ and $\\ov{d}_{H}^{c}$ are combined with the $D_{3}$ and $\\ov{D}_{3}$ states via the trilinear operators $HHD_{6}+\\bar{H} \\bar{H}D_{6}$ and get masses near the GUT scale.\n\n\n\nAfter this short description of the basic features of the model, in the following sections we investigate various inflationary scenarios in the context of no-scale supergravity, by applying the techniques presented in \\cite{Ellis:2014dxa, Ellis:2016spb}.\n\n\n\\section{INFLATION IN NO SCALE SUPERGRAVITY}\n\nIn this section we consider the 4-2-2 model as an effective string theory model and study the implications of Higgs inflation. \n The `light' spectrum in these constructions contains the MSSM states in representations transforming non-trivially under the \n gauge group and a number of moduli fields associated with the particular compactification. We will focus on the superpotential and the K\\\"ahler potential which are essential for the study of inflation. \n\nThe superpotential is a holomorphic function of the fields. Ignoring Yukawa interaction terms, the most general superpotential up to dimension four which is relevant to our discussion is \n\n \n \n \n \\begin{eqnarray}\n \\begin{split}\\label{wscalar}\n W&=M\\bar{H}H + \\mu\\bar{h}h + m \\tr(\\Sigma^{2})+n \\bar{H}\\Sigma H+ c\\tr\\left(\\Sigma^{3}\\right) \\\\\n &-\\alpha \\left(\\bar{H} H\\right)^{2}-\\beta\\left(\\bar{h}h\\right)^{2} -\\beta '\\left(\\bar{H}H\\right)\\left(\\bar{h}h\\right)- \\kappa \\tr\\left(\\Sigma^{4}\\right)-\\lambda \\bar{H} \\tr(\\Sigma^{2})H\n \\end{split}\n \\end{eqnarray}\n\n \n \n\\noindent where from now on we set the reduced Planck mass to unity, $M_{Pl}=1$. We focus on the dynamics of inflation during the first symmetry breaking stages at high energy scales. 
For this reason we ignore all the terms involving the bi-doublet, since this state contributes mostly at low energies by giving mass to the MSSM particles and does not play an important r\\^ole during inflation. In addition we impose a $Z_{2}$ symmetry, under which $\\Sigma$ is odd and all the other fields are even. As a result the trilinear terms $\\bar{H}\\Sigma H$ and $\\tr\\left(\\Sigma^{3}\\right)$ are eliminated from the superpotential in (\\ref{wscalar}). The elimination of these trilinear terms of the superpotential is important: if we use $\\bar{H}\\Sigma H$ and $\\tr\\left(\\Sigma^{3}\\right)$ instead of $\\bar{H} \\tr(\\Sigma^{2})H$ and $\\tr\\left(\\Sigma^{4}\\right)$, the shape of the resulting potential is not appropriate, leading to results inconsistent with the cosmological bounds while at the same time returning a low value for the superpotential parameter $M$, which is usually expected to be close to the GUT scale. Then, using (\\ref{Adj}) and (\\ref{Hbreaking}), the superpotential takes the following form: \n \n \\begin{eqnarray}\n \\begin{split}\\label{superpotential2}\n W &\\supset \\left(M-\\frac{\\tilde{\\lambda}}{9}S^{2}\\right)\\ov{Q}_{H}Q_{H}+\\left(M-\\tilde{\\lambda}S^{2}\\right)\\ov{L}_{H}L_{H}-\\alpha (\\ov{Q}_{H}Q_{H}+\\ov{L}_{H}L_{H})^{2}+mS^{2}-\\tilde{\\kappa}S^{4}\n \\end{split}\n \\end{eqnarray}\n \n \\noindent where $\\tilde{\\lambda}=\\frac{3\\lambda}{4}$ and $\\tilde{\\kappa}=\\frac{7\\kappa}{12}$. From the phenomenological point of view we expect $\\langle{S}\\rangle=v$ to be at the GUT scale. 
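As a consistency sketch (our own check, not from the text): with the parametrisation (\\ref{Adj}) one can verify $\\tr\\Sigma=0$, $\\tr(\\Sigma^{2})=S^{2}$ and $\\tr(\\Sigma^{4})=\\frac{7}{12}S^{4}$, i.e. $\\tilde{\\kappa}=\\frac{7\\kappa}{12}$; the split couplings in (\\ref{superpotential2}) follow if the quartic term acts componentwise as $\\bar{H}\\Sigma^{2}H$, giving $S^{2}\/12$ on $Q_{H}$ and $3S^{2}\/4$ on $L_{H}$. The same snippet estimates $m$ from stationarity of $W\\supset mS^{2}-\\tilde{\\kappa}S^{4}$ at $\\langle S\\rangle=v$.

```python
from fractions import Fraction

# Sigma = (S / (2*sqrt(3))) * diag(1, 1, 1, -3); track the diagonal entries
diag = [1, 1, 1, -3]

tr_sigma  = sum(diag)                               # tr(Sigma) vanishes
tr_sigma2 = Fraction(sum(x**2 for x in diag), 12)   # tr(Sigma^2) in units of S^2
tr_sigma4 = Fraction(sum(x**4 for x in diag), 144)  # tr(Sigma^4) in units of S^4

# Sigma^2 acts with S^2/12 on the colour-triplet entries and 9 S^2/12 on the
# fourth entry: this is the origin of the (lambda_tilde/9, lambda_tilde) split
L_coupling = Fraction(9, 12)   # lepton-doublet entry -> lambda_tilde = (3/4) lambda
Q_coupling = Fraction(1, 12)   # colour-triplet entry = L_coupling / 9

# dW/dS = 0 for W ⊃ m S^2 - kappa_t S^4 gives m = 2 kappa_t v^2
M_PL = 2.435e18                # reduced Planck mass in GeV
v = 3e16 / M_PL                # <S> ≈ 3e16 GeV in M_Pl = 1 units
kappa_t = 0.5
m_GeV = 2.0 * kappa_t * v**2 * M_PL
```

With $\\tilde{\\kappa}=1\/2$ this gives $m\\approx 3.7\\times 10^{14}$ GeV, i.e. of order $10^{14}$ GeV.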
By assuming $v\\simeq{3\\times{10^{16}}}$ GeV and using the minimization condition $\\partial{W}\/\\partial{S}=0$, we estimate that $m\\simeq{2\\tilde{\\kappa}v^{2}}$ which, for $\\tilde{\\kappa}=1\/2$, gives $m\\sim{10^{14}}$ GeV.\n \n In the two-step breaking pattern that we consider here, $\\ov{L}_{H}$ and $L_H$ must remain massless at this scale in order to break the $SU(2)_R$ symmetry at a lower scale. The $SU(2)_{R}$ breaking scale should not be much lower than the GUT scale in order to have a realistic heavy Majorana neutrino scenario. In addition we have to ensure that the coloured triplets $\\ov{Q}_{H}$ and $Q_{H}$ will be heavy. In order to keep the $\\ov{L}_{H}$, $L_H$ doublets light and, at the same time, the coloured fields $\\ov{Q}_{H}$ and $Q_{H}$ heavy, we assume that $M\\thickapprox{\\tilde{\\lambda}\\langle{S}\\rangle^{2}}=\\tilde{\\lambda}\\upsilon^{2}$. In this case $\\ov{Q}_{H}$, $Q_{H}$ acquire GUT-scale masses $M_{Q_{H}}\\thickapprox{\\frac{8\\tilde{\\lambda}}{9}\\langle{S}\\rangle^2}$.\n \n \n During inflation the coloured triplets $\\ov{Q}_{H}$, $Q_{H}$ and the charged components of the RH doublets, $\\ov{L}_{H}$ and $L_H$, do not play an important r\\^ole. The $SU(2)_R$ symmetry breaks via the neutral components\\footnote{Here and for the rest of the paper, for shorthand we remove the superscript \"c\" on the fields, i.e.\\ $\\ov{\\nu}^{c}_{H}$, $\\nu^{c}_{H}\\rightarrow{\\ov{\\nu}_{H}, \\nu_{H}}$.} $\\ov{\\nu}_{H}$ and $\\nu_{H}$. In terms of these states the superpotential reads:\n \n \\begin{eqnarray}\n \\begin{split}\n W= \\tilde{\\lambda}\\left(\\upsilon^{2} -S^{2}\\right)\\ov{\\nu}_{H}\\nu_{H}-\\alpha (\\ov{\\nu}_{H}\\nu_{H})^{2}+mS^{2}-\\tilde{\\kappa}S^{4}\n \\end{split}\n \\end{eqnarray}\n \n\\noindent where we have made use of the relation $M\\simeq\\tilde{\\lambda}\\upsilon^{2}$.\n\n\n\\noindent The K\\\"{a}hler potential has a no-scale structure and is a Hermitian function of the fields and their conjugates. 
For the present \n analysis, we will consider the dependence on the Higgs fields of the 4-2-2 gauge group and on the `volume' modulus $T$. \nTherefore, considering the fields $\\phi_i=(S, T, H, h)$ and their complex conjugates, we write\n\\begin{equation}\\label{kahler1}\n\\begin{split}\nK = -3 \\log \\left[T + T^{\\ast}- \\frac{1}{3}\\left(H H^{\\ast} + \\bar{H} \\bar{H}^{\\ast} +\\tr\\Sigma^{\\dagger}\\Sigma\\right) +\\frac{\\xi}{3}\\left(H \\bar{H} + H^{\\ast} \\bar{H}^{\\ast}\\right)+ \\frac{\\zeta}{3}\\left(h h^{\\ast}+\\bar{h} \\bar{h}^{\\ast}\\right)\\right]\n\\end{split}\n\\end{equation}\n\n\\noindent where $\\xi$ and $\\zeta$ are dimensionless parameters. In the expression (\\ref{kahler1}), we can ignore the last term, which involves the bidoublet, and in terms of $\\nu_{H}$, $\\ov{\\nu}_{H}$ and $S$, the K\\\"{a}hler potential reads: \n\\begin{equation}\\label{kahler2}\n\\begin{split}\nK = -3 \\log \\left[T + T^{\\ast}- \\frac{1}{3}\\left(|\\nu_{H}|^{2} + |\\ov{\\nu}_{H}|^{2} +S^{2}\\right) +\\frac{\\xi}{3}\\left(\\ov{\\nu}_{H}\\nu_{H} +(\\ov{\\nu}_{H})^{\\ast}(\\nu_{H})^{\\ast} \\right)\\right].\n\\end{split}\n\\end{equation}\nIn order to determine the effective potential we define the function\n\\[ G= K+\\log|W|^2\\equiv K+\\log W+\\log W^*.\n\\]\nThen the effective potential is given by \n\\ba \nV=e^G\\left(G_iG_{i j^*}^{-1}G_{j^*}-3\\right)+V_D\\label{VGK}\n\\ea \nwhere $G_i$ ($G_{j^*}$) denotes the derivative with respect to the field $\\phi_i$\n($\\phi^*_j$) \nand the indices $i,j$ run over the various fields. $V_D$ stands for the D-term contribution. 
\n\n\\noindent Computing the derivatives and substituting\tin (\\ref{VGK}) the potential takes the form \n\t\t\n\\begin{eqnarray}\\label{fullpotential}\n\\begin{split}\nV[\\ov{\\nu}_{H},\\nu_{H},S]&=\\frac{9}{(-3+\\nu_{H}^{2}+\\ov{\\nu}_{H}^{2}+S^{2}-2\\xi\\ov{\\nu}_{H}\\nu_{H})^{2}}\\left[(\\tilde{\\lambda}\\upsilon^{2}-2\\alpha \\nu_{H}\\ov{\\nu}_{H})^{2}(\\nu_{H}^{2}+\\ov{\\nu}_{H}^{2})-8\\tilde{\\lambda}mS^{2}\\ov{\\nu}_{H}\\nu_{H}\\right.\\\\\n&-2\\tilde{\\lambda} S^{2}(\\tilde{\\lambda}\\upsilon^{2}-2\\alpha \\nu_{H}\\ov{\\nu}_{H})(\\nu_{H}^{2}+\\ov{\\nu}_{H}^{2})+4\\tilde{\\lambda}^{2}S^{2}(\\ov{\\nu}_{H}\\nu_{H})^{2}\\\\\n&\\left. +4m^{2}S^{2}-16\\tilde{\\kappa}S^{4}(m-\\tilde{\\lambda}\\ov{\\nu}_{H}\\nu_{H})+\\tilde{\\lambda}^{2}S^{4}(\\nu_{H}^{2}+\\ov{\\nu}_{H}^{2})+16\\tilde{\\kappa}^{2}S^{6}\\right]\n\\end{split}\n\\end{eqnarray}\n\n\\noindent where we have ignored the D-term contribution and we have assumed that the value of the $T$ modulus field is stabilized at $\\langle{T}\\rangle=\\langle{T^{*}}\\rangle=1\/2$, see \\cite{Cicoli:2013rwa, Ellis:2013nxa}. Notice that in the absence of the Higgs contributions in the K\\\"ahler \npotential, the effective potential is exactly zero, $V=0$, due to the well-known property of the no-scale structure. \n\n\n We now investigate two different inflationary cases: first along the $H$-direction and then along the $S$-direction.\n\n\\subsection{INFLATION ALONG $H$-DIRECTION}\n\n We proceed by parametrizing the neutral components of the $L_{H}$ and $\\ov{L}_{H}$ fields as $\\nu_H=\\dfrac{1}{2}\\left(X+Y\\right)e^{i\\theta}$ and $\\ov{\\nu}_H=\\dfrac{1}{2}\\left(X-Y \\right) e^{i\\varphi}$, respectively. 
These yield\n\n\\begin{equation}\nX = \\mid \\nu_H \\mid + \\mid \\bar\\nu_{H} \\mid,\n \\qquad Y = \\mid \\nu_H \\mid - \\mid \\bar\\nu_{H} \\mid~\\cdot\n\\end{equation}\n\n\n\\noindent \nAssuming $\\theta=0$ and $\\varphi=0$ along the D-flat direction, $Y=0$ and the combination $X$ is identified with the inflaton. The shape of the potential, as a function of the fields $S$ and $X$, is presented in Figure \\ref{3Dplots}. In order to avoid singularities from the denominator we have assumed a condition which is described in the following. \n\n\n\\begin{figure}[t!]\n \t\\begin{subfigure}{.5\\textwidth}\n \t\t\\centering\n \t\t\\includegraphics[width=.95\\linewidth]{3Dplot_1.pdf}\n \t\n \t\t\\label{3d1}\n \t\\end{subfigure}%\n \t\\begin{subfigure}{.5\\textwidth}\n \t\t\\centering\n \t\t\\includegraphics[width=.95\\linewidth]{3Dplot_2.pdf}\n \t\n \t\t\\label{3d2}\n \t\\end{subfigure}\t\n \t\\caption{\\small{Plots of the potential as a function of $S$ and $X$ and for appropriate values of the other parameters. The plot on the right displays a close-up view of the region with small values for $X$ and $S$. }}\n \t\\label{3Dplots}\n \\end{figure}\n\n\n\n\n The potential along the $S=0$ direction is:\n\\begin{equation}\\label{potential_3_6}\nV\\left(X\\right) =\\frac{\\tilde{\\lambda}^{2}\\upsilon^{4}X^{2}\\left(1-\\frac{\\alpha X^{2}}{2 \\tilde{\\lambda}\\upsilon^{2}}\\right)^{2}}{2\\left(1-\\left(\\frac{1-\\xi}{6}\\right)X^{2}\\right)^{2}} .\n\\end{equation}\n\nThe shape of the $V(X,S)$ scalar potential presented in Figure \\ref{3Dplots}, along with the inflaton trajectory description and the simplified form in (\\ref{potential_3_6}), is similar to the one presented in \\cite{Ellis:2014dxa,Ellis:2016spb}. As is usually the case in no-scale supergravity, the effective potential displays a singularity when the denominator vanishes. 
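The reduction from (\\ref{fullpotential}) to (\\ref{potential_3_6}) can be spot-checked numerically: along the D-flat direction $Y=0$ with vanishing phases and $S=0$, the two expressions should agree identically. This is a sketch with arbitrarily chosen parameter values, not fits from the analysis.

```python
# illustrative (assumed) parameter values, all in M_Pl = 1 units
lam_t, v, alpha, xi, X = 0.05, 0.0123, 1e-6, 0.9, 2.0

nu = nubar = X / 2.0    # D-flat direction: Y = 0, phases theta = phi = 0
S = 0.0

# full potential (fullpotential) evaluated at S = 0
pref = 9.0 / (-3.0 + nu**2 + nubar**2 + S**2 - 2.0 * xi * nu * nubar) ** 2
V_full = pref * (lam_t * v**2 - 2.0 * alpha * nu * nubar) ** 2 * (nu**2 + nubar**2)

# reduced form (potential_3_6) along S = 0
V_red = (lam_t**2 * v**4 * X**2
         * (1.0 - alpha * X**2 / (2.0 * lam_t * v**2)) ** 2
         / (2.0 * (1.0 - (1.0 - xi) * X**2 / 6.0) ** 2))

rel_err = abs(V_full - V_red) / V_red   # agrees to machine precision
```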
The presence of these singularities leads to an exponentially steep potential which can cause violation of the basic slow-roll conditions (i.e. $\\varepsilon\\ll{1}$, $|\\eta|\\ll{1}$). Consequently, these singularities must be removed. In our specific model described by the potential \\eqref{potential_3_6},\n we first notice that for the special value $\\xi=1$ the potential is free from singularities. For generic values of $\\xi$ however, i.e. $\\xi\\ne 1$, the potential displays a singularity for $X=\\sqrt{\\frac{6}{1-\\xi}}$. In order to remove the zeros of the denominator in \\eqref{potential_3_6}, we assume the following condition \\cite{Ellis:2014dxa},\n\\ba\n\\alpha=\\frac{\\left(1-\\xi\\right)\\tilde{\\lambda}\\upsilon^{2}}{3}~\\cdot\\label{singularitycondition}\n\\ea \n\n\n\\noindent This is a strong assumption which relates parameters of different origins. Indeed, $\\alpha$ is a superpotential parameter while $\\xi$ descends from the K\\\"ahler potential. Since in our specific model the condition \\eqref{singularitycondition} lacks an explanation from first principles, it will be reasonable in the subsequent analysis to study the effects of a slightly relaxed version of \\eqref{singularitycondition}. This can be achieved by introducing a small parameter $\\delta$ (with $\\delta\\ll{1}$) and modifying the condition as follows,\n\n\\ba\n\\label{singularitycondition2}\n\\alpha=\\frac{\\left(1-\\xi+\\delta\\right)\\tilde{\\lambda}\\upsilon^{2}}{3}~\\cdot \\label{dsingularitycondition}\n\\ea\n\n\n\n\\noindent In the remainder of this section, we study the potential for special $\\xi$ values using the conditions~(\\ref{singularitycondition}) and (\\ref{dsingularitycondition}). \n\n\n\nWe start by analysing some special cases. 
By imposing (\\ref{singularitycondition}), which means $\\delta=0$, \nthe scalar potential simplifies to a quadratic monomial,\n\n\\begin{equation}\\label{quadraticform}\nV\\left(X\\right) = \\frac{\\tilde{\\lambda}^{2}\\upsilon^{4}}{2}X^{2}\n\\end{equation}\n\\noindent as can also be seen from the plots in Figure \\ref{3Dplots}, where for small values of $S$ (along the $S=0$ direction) the potential takes a quadratic form. Equation (\\ref{quadraticform}) is the potential of a chaotic inflation scenario. However, at this stage,\n the inflaton field $X$ is not canonically normalized since its kinetic energy terms take the following form\n\\begin{equation}\n\\begin{split}\n\\mathcal{L}\\left(X\\right)= \\frac{ 1-\\frac{\\xi}{6}\\left(1-\\xi\\right) X^{2}}{2\\left(1-\\frac{1}{6}\\left(1-\\xi\\right) X^{2}\\right)^{2}} \\left(\\partial X \\right)^{2} -\\frac{\\tilde{\\lambda}^{2}\\upsilon^{4}}{2}X^{2} .\n\\end{split}\n\\end{equation}\nWe introduce a canonically normalized field $\\chi$ satisfying \n\\begin{equation}\n\\begin{split}\n\\left(\\frac{d\\chi}{dX}\\right)^{2} = \\frac{ 1-\\frac{\\xi}{6}\\left(1-\\xi\\right) X^{2}}{\\left(1-\\frac{1}{6}\\left(1-\\xi\\right) X^{2}\\right)^{2}}.\n\\end{split}\n\\end{equation}\nAfter integrating, we obtain the canonically normalized field $\\chi$ as a function of $X$\n\n\\begin{equation}\\label{hfield}\n\\chi =\\sqrt{6}\\tanh^{-1}\\left(\\frac{\\left(1 - \\xi\\right)X}{\\sqrt{6\\left(1-\\frac{\\xi\\left(1-\\xi\\right)X^{2}}{6} \\right)}}\\right)\n+\\sqrt{\\frac{6 \\xi}{1-\\xi}}\\sin^{-1}\\left(\\sqrt{\\xi \\left(\\frac{1-\\xi}{6}\\right)}X\\right).\n\\end{equation}\n\n \\noindent Next, we investigate the implications of equation (\\ref{hfield}) by considering two different cases, for $\\xi=0$ and $\\xi\\neq{0}$. 
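The field redefinition can be spot-checked numerically before treating the two cases (a sketch of our own; here the $\\sin^{-1}$ term is taken with a positive sign, which is what the $\\xi\\to 1$ limit $\\chi\\to X$ requires):

```python
import math

def dchi_dX(X, xi):
    # (dchi/dX)^2 = (1 - (xi/6)(1-xi) X^2) / (1 - (1-xi) X^2/6)^2
    num = 1.0 - (xi / 6.0) * (1.0 - xi) * X**2
    den = 1.0 - (1.0 - xi) * X**2 / 6.0
    return math.sqrt(num) / den

def chi_closed(X, xi):
    # closed-form integral, with the arcsin term entering with a + sign
    c = xi * (1.0 - xi) / 6.0
    t1 = math.sqrt(6.0) * math.atanh(
        (1.0 - xi) * X / math.sqrt(6.0 * (1.0 - c * X**2)))
    t2 = math.sqrt(6.0 * xi / (1.0 - xi)) * math.asin(math.sqrt(c) * X)
    return t1 + t2

def chi_numeric(X_max, xi, n=100000):
    # midpoint-rule integration of dchi/dX from 0 to X_max
    h = X_max / n
    return sum(dchi_dX((i + 0.5) * h, xi) * h for i in range(n))

err0 = abs(chi_numeric(2.0, 0.0) - chi_closed(2.0, 0.0))   # xi = 0 case
err5 = abs(chi_numeric(2.0, 0.5) - chi_closed(2.0, 0.5))   # generic xi
```

Both residuals vanish to the accuracy of the quadrature, confirming the normalisation; for $\\xi=0$ the closed form collapses to $\\chi=\\sqrt{6}\\tanh^{-1}(X\/\\sqrt{6})$.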
\n\n$\\bullet$ For $\\xi=0$ we have $X=\\sqrt{6}\\tanh\\left(\\frac{\\chi}{\\sqrt{6}}\\right)$ and the potential becomes,\n\n\\begin{equation}\\label{Tpotential}\nV= 3 \\tilde{\\lambda}^{2}\\upsilon^{4} \\tanh^{2}\\left(\\frac{\\chi}{\\sqrt{6}}\\right),\n\\end{equation}\n\\noindent which is analogous to the conformal chaotic inflation model (or T-Model) \\cite{Kallosh:2013xya}.\n In this particular type of model the potential has the general form:\n\n\n\n\n\\be\\label{Tmodels}\n V(\\chi)=\\uplambda^{n}\\tanh^{2n}\\left(\\frac{\\chi}{\\sqrt{6}}\\right) \\quad\\text{where}\\quad n=1,2,3,...\n \\ee\n As we can see, for $n=1$ we recover our result in (\\ref{Tpotential}) with $\\uplambda=3\\tilde{\\lambda}^{2}\\upsilon^{4}$. This potential can be further reduced to subcases depending upon the value of $\\chi$. For $\\chi\\gg1$ the potential in equation (\\ref{Tpotential}) reduces to the Starobinsky model \\cite{Starobinsky:1980te}. In this case the inflationary observables have values $\\left(n_{s},r\\right)\\approx \\left(0.967,0.003\\right)$ and the tree-level prediction for $\\xi=0$ is consistent with the latest {Planck} bounds \\cite{Ade:2015lrj}. This type of model will be further analysed in the next section, where inflation along the $S$-direction is discussed. \\\\\n \n\n$\\bullet$ The particular case of $\\xi=1$ implies a quadratic chaotic inflation and the tree-level inflationary prediction $\\left(n_{s},r\\right)\\approx \\left(0.967,0.130\\right)$ is ruled out according to the latest \\emph{Planck} $2015$ results. For $0<\\xi<1$, the prediction for $\\left(n_{s},r\\right)$ can be\n worked out numerically. \n\nAfter this analysis we turn our attention to the numerical calculation, in which we employ the modified condition (\\ref{singularitycondition2}) where, as mentioned previously, a small varying parameter $\\delta$ has been introduced in order to soften the strict assumption \\eqref{singularitycondition}. 
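The two benchmark predictions quoted in the bullets above can be reproduced with a small slow-roll computation (a sketch of our own, not the paper's code: derivatives are taken by finite differences, the end of inflation is set by $\\epsilon=1$, and $N=60$ e-folds are accumulated numerically):

```python
import math

SQRT6 = math.sqrt(6.0)

def d(f, x, h=1e-5):
    # central finite difference
    return (f(x + h) - f(x - h)) / (2.0 * h)

def eps(V, chi):
    return 0.5 * (d(V, chi) / V(chi)) ** 2

def eta(V, chi):
    return d(lambda y: d(V, y), chi) / V(chi)

def predictions(V, N_target=60.0):
    """Return (n_s, r) for a canonically normalised potential V(chi)."""
    # end of inflation: eps = 1 (bisection; eps decreases with chi here)
    lo, hi = 0.1, 5.0
    while hi - lo > 1e-10:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if eps(V, mid) > 1.0 else (lo, mid)
    chi = 0.5 * (lo + hi)
    # accumulate N = integral of (V / V') dchi until N_target e-folds
    N, dchi = 0.0, 1e-3
    while N < N_target:
        chi += dchi
        N += V(chi) / d(V, chi) * dchi
    e = eps(V, chi)
    return 1.0 + 2.0 * eta(V, chi) - 6.0 * e, 16.0 * e

# xi = 0: T-model potential, V proportional to tanh^2(chi/sqrt(6))
ns_T, r_T = predictions(lambda c: math.tanh(c / SQRT6) ** 2)

# xi = 1: the field is already canonical and V is purely quadratic
ns_q, r_q = predictions(lambda c: c * c)
```

The T-model case gives $(n_{s},r)\\approx(0.966,0.003)$ and the quadratic case $(n_{s},r)\\approx(0.967,0.132)$, in line with the values quoted above.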
Substituting the relaxed condition \\eqref{singularitycondition2} into \\eqref{potential_3_6} and neglecting terms of $\\mathcal{O}(\\delta^{2})$, the potential takes the following form:\n\n\n\n\\begin{equation}\\label{potentiladelta}\nV(X)\\simeq{\\frac{\\tilde{\\lambda}^{2}\\upsilon^{4}}{2}X^{2}}\\left(1-\\frac{2\\delta X^{2}}{6+(\\xi-1)X^{2}}\\right).\n\\end{equation}\n\n\\noindent As we observe, the first term in the above relation is the quadratic potential \\eqref{quadraticform}, while the second term encodes the effects of the small parameter $\\delta$. In addition, we note that the order of the singularity has been reduced in comparison with the initial potential \\eqref{potential_3_6}. Next we present our numerical results, where the r\\^ole of the parameter $\\delta$ is also discussed.\n\n\n\n\n\n\\subsection{NUMERICAL ANALYSIS}\n\nBefore presenting the numerical predictions of the model, it is useful to briefly review the basic results of the slow-roll approximation. The inflationary slow-roll parameters are given by \\cite{DeSimone:2008ei, Okada:2010jf}:\n\\begin{equation}\n\\epsilon=\\dfrac{1}{2}\\left(\\frac{V^{\\prime}\\left(X\\right)}{V(X)\\chi^{\\prime}\\left(X\\right)}\\right)^{2} \\quad{,}\\quad \\eta=\\left(\\frac{V^{\\prime\\prime}\\left(X\\right)}{V(X)\\left(\\chi^{\\prime}\\left(X\\right)\\right)^{2}}-\\frac{V^{\\prime}\\left(X\\right)\\chi^{\\prime\\prime}\\left(X\\right)}{V(X)\\left(\\chi^{\\prime}\\left(X\\right)\\right)^{3}}\\right).\n\\end{equation}\nThe third slow-roll parameter is\n\\begin{equation}\n 
\\varsigma^{2}=\\left(\\frac{V^{\\prime}\\left(X\\right)}{V(X)\\chi^{\\prime}\\left(X\\right)}\\right)\\left(\\frac{V^{\\prime\\prime\\prime}\\left(X\\right)}{V(X)\\left(\\chi^{\\prime}\\left(X\\right)\\right)^{3}}-3\\frac{V^{\\prime\\prime}\\left(X\\right)\\chi^{\\prime\\prime}\\left(X\\right)}{V(X)\\left(\\chi^{\\prime}\\left(X\\right)\\right)^{4}}+3\\frac{V^{\\prime}\\left(X\\right)\\left(\\chi^{\\prime\\prime}\\left(X\\right)\\right)^{2}}{V(X)\\left(\\chi^{\\prime}\\left(X\\right)\\right)^{5}}-\\frac{V^{\\prime}\\left(X\\right)\\chi^{\\prime\\prime\\prime}\\left(X\\right)}{V(X)\\left(\\chi^{\\prime}\\left(X\\right)\\right)^{4}}\\right)\n\\end{equation}\n\\noindent where a prime denotes a derivative with respect to $X$. The slow-roll approximation is valid as long as the conditions $\\epsilon\\ll1$, $|\\eta|\\ll1$ and $\\varsigma^{2}\\ll1$\n hold true. In this scenario the tensor-to-scalar ratio $r$, the scalar spectral index $n_{s}$ and the running of the spectral index $\\frac{dn_{s}}{d\\ln k}$ are given by\n \\begin{equation}\nr\\simeq16 \\epsilon \\quad{,}\\quad n_{s}\\simeq 1+2\\eta-6\\epsilon \\quad{,}\\quad \\frac{dn_{s}}{d\\ln k}\\simeq 16\\epsilon\\eta-24\\epsilon^{2}+2\\varsigma^{2}.\n \\end{equation}\n The number of e-folds is given by\n \\begin{equation}\n N_{l}=\\int_{X_{e}}^{X_{l}}\\left(\\frac{V\\left(X\\right)\\left(\\chi^{\\prime}\\left(X\\right)\\right)^{2}}{V^{\\prime}(X)}\\right) dX,\n \\end{equation}\n\\noindent where $l$ is the comoving scale after crossing the horizon, $X_{l}$ is the field value at the comoving scale and $X_{e}$ is the field value when inflation ends, i.e.\\ $\\max\\left(\\epsilon\\left(X_{e}\\right),\\eta\\left(X_{e}\\right),\\varsigma\\left(X_{e}\\right)\\right)=1.$\\\\\nFinally, the amplitude of the curvature perturbation $\\Delta_{R}$\n is given by:\n \\begin{equation}\n\\Delta_{R}^{2}=\\frac{V\\left(X\\right)}{24 \\pi^{2} \\epsilon\\left(X\\right)}.\n \\end{equation}\n \n \n Focusing now on the numerical analysis, we see that we have to 
deal with three parameters: $\\xi, \\delta$ and $\\tilde{\\lambda}$. We took the number of e-folds ($N$) to be 60, and in Figure \\ref{ns_vs_r_plots} we present two different cases in the $n_{s}-r$ plane, along with the Planck measurements (\\emph{Planck} TT,TE,EE+lowP) \\cite{Ade:2015lrj}. Specifically, in Figure $1(a)$, we fix $\\xi$ and vary $\\tilde{\\lambda}$ and $\\delta$. The various coloured (dashed) lines correspond to different fixed $\\xi$-values. The green line corresponds to the limiting case with $\\xi=1$ and, as we observe, the results are more consistent with the Planck bounds (black solid contours) as the value of $\\xi$ decreases. Similarly, in Figure $1(b)$ we treat $\\delta$ as a fixed parameter while we vary $\\xi$ and $\\tilde{\\lambda}$. Also in this case, we observe that for a significant region of the parameter space the solutions are in good agreement with the observed cosmological bounds. The green curve here corresponds to $\\delta=10^{-6}$. The special case with $\\delta=10^{-6}\\sim 0$ and $\\xi=1$ is represented by the black dot and, as we discussed earlier, is ruled out by the recent cosmological bounds. We observe from the plot that, as $\\xi$ approaches unity, the splitting between the curves due to different values of $\\delta$ is small and the solution converges to the $\\delta\\sim{0}$ case. However, as we decrease the values of $\\xi$, the curves split and the agreement with the cosmological bounds improves. Finally, in plots 1(c) and 1(d) we present values of the running of the spectral index with respect to $n_{s}$. 
We observe that the running of the spectral index takes values approximately in the range $-5\\times{10^{-4}}<\\frac{dn_{S}}{d\\ln{k}} <5\\times{10^{-4}}$.\n\n \n \n \n \\begin{figure}[t!]\n \t\\begin{subfigure}{.5\\textwidth}\n \t\t\\centering\n \t\t\\includegraphics[width=.9\\linewidth]{ns_r_fixed_xi.pdf}\n \t\t\\caption{\\small{$r$ vs $n_{s}$ for fixed values of $\\xi$}}\n \t\t\\label{ns_r_fixed_xi}\n \t\\end{subfigure}%\n \t\\begin{subfigure}{.5\\textwidth}\n \t\t\\centering\n \t\t\\includegraphics[width=.9\\linewidth]{ns_r_fixed_delta.pdf}\n \t\t\\caption{\\small{$n_{s}$ vs $r$ for fixed values of $\\delta$}}\n \t\t\\label{ns_r_fixed_delta}\n \t\\end{subfigure}\t\n \t\\begin{subfigure}{.5\\textwidth}\n \t\t\\centering\n \t\t\\includegraphics[width=.9\\linewidth]{ns_dns_fixed_xi.pdf}\n \t\t\\caption{\\small{$\\frac{dn_{S}}{d\\ln{k}}$ vs $n_{s}$ for fixed values of $\\xi$}}\n \t\t\\label{ns_dns_fixed_xi}\n \t\\end{subfigure}%\n \t\\begin{subfigure}{.5\\textwidth}\n \t\t\\centering\n \t\t\\includegraphics[width=.9\\linewidth]{ns_dns_fixed_delta.pdf}\n \t\t\\caption{\\small{$\\frac{dn_{S}}{d\\ln{k}}$ vs $n_{s}$ for fixed values of $\\delta$}}\n \t\t\\label{ns_dns_fixed_delta}\n \t\\end{subfigure}%\n \t\\caption{\\small{The inflationary predictions ($r$-$n_{s}$) and ($\\frac{dn_{s}}{d\\ln{k}}-n_{s}$) of the model, obtained by varying the various parameters involved in the analysis. In all cases we took the number of e-folds $N=60$. In plots (a) and (b) the black solid contours represent the Planck constraints (\\emph{Planck} TT,TE,EE+lowP) at $68\\%$ (inner) and $95\\%$ (outer) confidence level \\cite{Ade:2015lrj}. In plots (a) and (c) we keep $\\xi$ constant for each curve and vary $\\tilde{\\lambda}$ and $\\delta$, while in plots (b) and (d) for each curve we fix $\\delta$ and vary $\\tilde{\\lambda}$ and $\\xi$. The black dot solution corresponds to $\\xi=1$. 
}}\n \t\\label{ns_vs_r_plots}\n \\end{figure}\n \n \n \n Next we present additional plots to better clarify the r\\^ole of the various parameters involved in the analysis.\n \n Firstly, we study the spectral index $n_{s}$ as a function of the various parameters. The results are presented in Figure \\ref{ns_plots}. In plots (a) and (b) we consider the cases with fixed values for $\\xi$ and $\\delta$ respectively, and vary $\\tilde{\\lambda}$. We vary the parameter $\\xi$ \n in the range $\\xi\\sim{[0.92,1]}$, with the most preferred solutions for $\\xi\\simeq[0.96, 1]$. In addition, the two plots suggest that acceptable solutions \n are found in the range $\\tilde{\\lambda}\\sim[10^{-2},10^{-1}]$. In plots (c) and (d) $n_s$ is depicted in terms of $\\delta$ and $\\xi$ respectively. As expected, the dependence on $\\delta$ is negligible when it takes very small values: we observe from plot 3(c) that the various curves are almost constant for very small $\\delta$ values. The results become more sensitive to $\\delta$ as we decrease the value of $\\xi$. This behaviour can also be confirmed from the potential \\eqref{potentiladelta}. As we can see, for $\\xi\\sim{1}$ the second term simplifies and the potential takes a chaotic-like form. In this case the effects of small $\\delta$ on the observables are almost negligible (green line). 
However, as we decrease the value of $\\xi$ and increase the values of $\\delta$, the second term becomes important and contributes to the results.\n\n\\begin{figure}[t!]\n\t\\begin{subfigure}{.5\\textwidth}\n\t\t\\centering\n\t\\includegraphics[width=.9\\linewidth]{ns_loglam_fixed_xi.pdf}\n\t\t\\caption{\\small{$n_{S}$ vs $\\log{\\tilde{\\lambda}}$}}\n\t\t\\label{ns_lam_fixed_xi}\n\t\\end{subfigure}%\n\t\\begin{subfigure}{.5\\textwidth}\n\t\t\\centering\n\t\\includegraphics[width=.9\\linewidth]{ns_loglam_fixed_delta.pdf}\n\t\t\\caption{\\small{$n_{S}$ vs $\\log{\\tilde{\\lambda}}$}}\n\t\t\\label{ns_lam_fixed_delta}\n\t\\end{subfigure}\n\t\t\\medspace\\\\\n\t\\begin{subfigure}{.5\\textwidth}\n\t\t\\centering\n\t\\includegraphics[width=.9\\linewidth]{ns_logdelta.pdf}\n\t\t\\caption{\\small{$n_{S}$ vs $\\log\\delta$}}\n\t\t\\label{ns_delta}\n\t\\end{subfigure}%\n\t\\begin{subfigure}{.5\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=.9\\linewidth]{ns_xi.pdf}\n\t\t\\caption{\\small{$n_{S}$ vs $\\xi$}}\n\t\t\\label{ns_xi}\n\t\\end{subfigure}%\n\t\\caption{\\small{Plots (a) and (c) show how $n_{S}$ depends on $\\log{\\tilde{\\lambda}}$ and $\\log{\\delta}$ respectively. For each curve in plots (a) and (c) we fix the value of $\\xi$ and vary $\\tilde{\\lambda}$ and $\\delta$. Similarly, plots (b) and (d) show $n_{s}$ vs $\\log{(\\tilde{\\lambda})}$ and $n_{S}$ vs $\\xi$ respectively. In plots (b) and (d) the value of $\\delta$ is fixed while we vary the other parameters.}}\n\t\\label{ns_plots}\n\\end{figure}\n\n\n\\noindent\n\n\n\n\nNext, in Figure \\ref{r_plots} we consider various cases for the tensor-to-scalar ratio, $r$. The description of the plots follows the spirit of those presented in Figure \\ref{ns_plots} for the spectral index $n_{S}$. In particular, by comparing plots 4(c) and 3(c) we notice that the dependence of $r$ on $\\delta$ is weaker in comparison with $n_{S}$. 
Thus the relaxation parameter $\\delta$ strongly affects the spectral index $n_{S}$, while for $\\delta<10^{-4}$ and fixed $\\xi$ the tensor-to-scalar ratio $r$ remains almost constant. In summary, from the various figures presented so far we observe that consistent solutions can be found in a wide range of the parameter space. We also note that the model predicts solutions with $r\\leq{0.02}$, a prediction that can be tested through searches for primordial gravitational waves and the bounds of future experiments. \n\n \\begin{figure}[t!]\n \t\\begin{subfigure}{.5\\textwidth}\n \t\t\\centering\n \t\\includegraphics[width=.95\\linewidth]{r_loglam_fixed_xi.pdf}\n \t\\caption{ $r$ vs $\\log{\\tilde{\\lambda}}$}\n \t\\label{r_lam_fixed_xi}\n \t\\end{subfigure}%\n \t\\begin{subfigure}{.5\\textwidth}\n \t\t\\centering\n \t\\includegraphics[width=.95\\linewidth]{r_loglam_fixed_delta.pdf}\n \t\\caption{ $r$ vs $\\log{\\tilde{\\lambda}}$}\n \t\\label{r_lam_fixed_delta}\n \t\\end{subfigure}%\n \t\\medspace\\\\\t\n \t \t\\begin{subfigure}{.5\\textwidth}\n \t \t\t\\centering\n \t \t\t\\includegraphics[width=.95\\linewidth]{r_logdelta.pdf}\n \t \t\t\\caption{ $r$ vs $\\log\\delta$}\n \t \t\t\\label{r_delta}\n \t \t\\end{subfigure}%\n \t \t\\begin{subfigure}{.5\\textwidth}\n \t \t\t\\centering\n \t \t\t\\includegraphics[width=.95\\linewidth]{r_xi.pdf}\n \t \t\t\\caption{ $r$ vs $\\xi$}\n \t \t\t\\label{r_xi}\n \t \t\\end{subfigure}\n \t\\caption{\\small{Plots (a) and (c) show $r$ vs $\\log{\\tilde{\\lambda}}$ and $r$ vs $\\log{\\delta}$ respectively. For each curve in plots (a) and (c) we fix $\\xi$ and vary $\\tilde{\\lambda}$ and $\\delta$. Similarly, in plots (b) and (d) we present $r$ vs $\\log{\\tilde{\\lambda}}$ and $r$ vs $\\xi$. 
For each curve in these plots we fix the value of $\\delta$ and vary $\\tilde{\\lambda}$ and $\\xi$.}}\n \t\\label{r_plots}\n \\end{figure}\n\n\nRegarding the superpotential parameter $\\tilde{\\lambda}$, we can see from the various plots that its value must lie in the range $\\tilde{\\lambda}\\sim{[10^{-2}, 10^{-1}]}$. Using this range of values for $\\tilde{\\lambda}$ and the fact that $M_{Q_{H}}\\approx{\\frac{8\\tilde{\\lambda}}{9}\\upsilon^{2}}$, with $\\upsilon\\simeq{10^{-2}}$ in $M_{Pl}=1$ units, we conclude that $M_{Q_{H}}\\sim{[0.217, 2.17]\\times{10^{13}}}$ GeV. The fact that this mass is small compared to the $\\mathcal{O}(M_{GUT})$ scale can create tension with other phenomenological predictions of the model, such as the unification of gauge couplings. On the other hand, as already mentioned, the triplet fields $Q_{H},\\ov{Q}_{H}$ can mix with the triplets $D_{3},\\ov{D}_{3}$ contained in the sextet $D_{6}$, which can lead to a significant lift of the mass of the extra triplet fields. \n\n\nIt is also interesting to investigate the values of the Hubble parameter during inflation, $H_{inf}$, in the model. In the slow-roll limit the Hubble parameter depends on the value of $X$:\n\n\n\\begin{equation}\nH_{inf}^{2}=\\frac{V(X)}{3M_{Pl}^{2}}\n\\end{equation}\n\n\\noindent and we evaluate it at the pivot scale. In Figure \\ref{Hinf_ns_plots} we show the values of the Hubble parameter in the ($H_{inf}-n_{s}$) plane. 
We observe that, for $n_{s}$ within the observational bounds, the Hubble parameter is of order $10^{13}-10^{14}$ GeV.\n\n\n\\begin{figure}[t!]\n \t\\begin{subfigure}{.5\\textwidth}\n \t\t\\centering\n \t\t\\includegraphics[width=.9\\linewidth]{Hinf_ns_fixed_xi.pdf}\n \t\t\\caption{\\small{$H_{inf}$ vs $n_{s}$ for fixed values of $\\xi$}}\n \t\t\\label{Hinf_ns_fixed_xi}\n \t\\end{subfigure}%\n \t\\begin{subfigure}{.5\\textwidth}\n \t\t\\centering\n \t\t\\includegraphics[width=.9\\linewidth]{Hinf_ns_fixed_delta.pdf}\n \t\t\\caption{\\small{$H_{inf}$ vs $n_{s}$ for fixed values of $\\delta$}}\n \t\t\\label{ns_r_fixed_delta}\n \t\\end{subfigure}\t\n \t\\caption{\\small{Plots showing the values (in GeV) of the Hubble parameter with respect to the scalar spectral index $n_{s}$. For acceptable $n_{s}$ values we see that the Hubble parameter takes values of order~$10^{13}-10^{14}$ GeV.}}\n \t\\label{Hinf_ns_plots}\n \\end{figure}\n\n\n\n\n\n\\subsection{REHEATING}\n\nAs already discussed in Section 2, the quarks and leptons in the 4-2-2 model are unified under the representations $F_{i}=(4,2,1)$ and $\\bar{F}_{i}=(\\bar{4},1,2)$, where $i=1,2,3$ denote the families and the RH-neutrinos are contained in the $\\bar{F}$ representation. A heavy Majorana mass for the RH-neutrinos can be realized from the following non-renormalisable term \n\n\\be \\label{majorana}\nM_{\\nu^c} \\nu^c\\nu^c\\approx \n \\gamma\\frac{\\bar{F}\\bar{F}\\bar{H}\\bar{H}}{M_{*}}\n \\ee\n \n\\noindent where we have suppressed generation indices for simplicity, $\\gamma$ is a coupling constant and $M_{*}$ represents a high cut-off scale (for example the compactification scale in a string model or the Planck scale $M_{Pl}$). 
In terms of $SO(10)$ GUTs this operator descends from the following invariant operator \n\n\\[ 16_{F}16_{F}\\bar{16}_{H}\\bar{16}_{H}\\] \n\n \\noindent and, as described in \\cite{Leontaris:2016jty}, can be used to explain the reheating process of the universe after the end of inflation. In our case the 4-2-2 symmetry breaking occurs in two steps: first $G_{PS}\\xrightarrow{\\langle{S}\\rangle}G_{L-R}$ and then $G_{L-R}\\xrightarrow{\\langle{\\nu_{H}}\\rangle,\\langle{\\bar{\\nu}_{H}}\\rangle}G_{SM}$. The first breaking is achieved via the adjoint of the PS group at the GUT scale, while the second breaking occurs at an intermediate scale $M_{R}$. After the breaking of the L-R symmetry, the higher-order term in (\\ref{majorana}) gives the following Majorana mass term for the RH neutrinos\n \n\\be \n \\gamma\\frac{\\langle{\\nu_{H}}\\rangle^{2}}{M_{Pl}}\\nu^{c}\\nu^{c}.\n \\ee\n \n\\begin{figure}[t!]\n\t\\begin{subfigure}{.5\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=.9\\linewidth]{ns_TRH_fixed_xi.pdf}\n\t\t\\caption{}\n\t\t\\label{ns_Trh_fixed_xi01}\n\t\\end{subfigure}%\n\t\\begin{subfigure}{.5\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=.9\\linewidth]{ns_TRH_fixed_delta.pdf}\n\t\t\\caption{ }\n\t\t\\label{ns_Trh_fixed_xi05}\n\t\\end{subfigure}%\n\t\\medspace\\\\\t\n\t\\begin{subfigure}{.5\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=.9\\linewidth]{r_TRH_fixed_xi.pdf}\n\t\t\\caption{}\n\t\t\\label{r_Trh_fixed_xi}\n\t\\end{subfigure}%\n\t\\begin{subfigure}{.5\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=.9\\linewidth]{r_TRH_fixed_delta.pdf}\n\t\t\\caption{ }\n\t\t\\label{r_Trh_fixed_delta}\n\t\\end{subfigure}\n\t\\caption{\\small{Plots (a) and (b) show solutions in the $n_{s}-T_{RH}$ plane obtained by varying the various parameters of the model, while plots (c) and (d) present solutions in the $r-T_{RH}$ plane. 
In all cases the coupling constant $\\gamma$ takes the values $\\gamma=0.1$ (solid), $\\gamma=0.5$ (dashed) and $\\gamma=1$ (dotted).}}\n\t\\label{ns_Trh_plots}\n\\end{figure} \n \n\\noindent We can see that a heavy Majorana scale scenario implies that the $SU(2)_{R}$ breaking scale should not be much lower than the $SU(4)$ scale, and also that $\\gamma$ should not be too small. Another important role of the higher dimensional operators is that after inflation the inflaton $X$ decays into RH neutrinos through them to reheat the Universe. In addition, the subsequent decay of these neutrinos can explain the baryon asymmetry via leptogenesis \\cite{Fukugita:1986hr, Lazarides:1991wu}. For the reheating temperature, we estimate \\cite{Leontaris:2016jty} (see also \\cite{Lazarides:2001zd}):\n\n\\begin{equation}\nT_{RH}\\sim \\sqrt{\\Gamma_{X} M_{Pl}}\n\\end{equation}\n\n \\noindent where the total decay width of the inflaton is given by\n \n \\be\n\\Gamma_{X}\\simeq{\\frac{1}{16\\pi}\\left(\\frac{M_{\\nu^{c}}}{M}\\right)^{2}M_{X}} \n\\ee \n\n\\noindent with $M_{\\nu^{c}}= \\gamma\\frac{\\langle{\\nu_{H}}\\rangle^{2}}{M_{Pl}}$ the mass of the RH neutrinos and $M_{X}$ the mass of the inflaton. The latter is calculated from the effective mass matrix at the local minimum and is approximately $M_{X}=2M\\simeq{2\\tilde{\\lambda }\\upsilon^{2}}$. Since $M\\simeq{10^{13}}$ GeV, the decay condition $M_{X}>M_{\\nu^{c}}$ is satisfied for appropriate choices of the parameters $\\langle\\nu_{H}\\rangle$ and $\\gamma$. In Figure \\ref{ns_Trh_plots} we present solutions in the $n_{s}-T_{RH}$ and $r-T_{RH}$ planes with respect to the various parameters of the model. For the computation of $T_{RH}$ we assume that $\\langle\\nu_{H}\\rangle =M\\simeq{\\tilde{\\lambda}v^{2}}$ and we present the results for $\\gamma=0.1$ (solid), $\\gamma=0.5$ (dashed) and $\\gamma=1$ (dotted). 
In this range of $\\gamma$ values we have a Majorana mass $M_{\\nu^{c}}\\sim{10^{6}-10^{7}}$ GeV, which decreases as we decrease the value of $\\gamma$. In addition, gravitino constraints imply a bound on the reheating temperature, $T_{RH}<10^{6}-10^{9}$ GeV (depending on the gravitino mass), and as we observe from the plots there are acceptable solutions in this range of values. More precisely, from plots (a) and (c) we see that for $\\xi>0.97$ and $\\gamma>0.5$ most of the results predict $T_{RH}>10^{9}$ GeV. However, it is clear that the consistency with the gravitino constraints improves markedly as we decrease $\\gamma$, since all the curves with $\\gamma=0.1$ (solid lines) predict $T_{RH}\\lesssim{10^{9}}$ GeV. Similar conclusions can be derived from plots (b) and (d). In addition, from the $r-T_{RH}$ plots (c) and (d) we observe that for $T_{RH}<10^{6}-10^{9}$ GeV there are regions in the parameter space with $r\\sim{10^{-2}-10^{-3}}$. Furthermore, we observe from plot \\ref{r_Trh_fixed_xi} that the tensor-to-scalar ratio and the reheating temperature decrease as we decrease the value of $\\xi$, since the curves shift towards the lower-left region of the plot.\n\n\nA sample of the results discussed so far is presented in Table \\ref{mastertable}. The table is organized in horizontal blocks, and each block contains three sets of values. For each set in a block we change only the coupling constant $\\gamma$ ($\\gamma=1,0.5,0.1$) while we keep $\\tilde\\lambda$, $\\xi$ and $\\delta$ constant. We observe that as we decrease the values of $\\tilde{\\lambda}$ and $\\xi$, the values of the tensor-to-scalar ratio ($r$) and the reheating temperature ($T_{RH}$) also decrease. 
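As a quick numerical cross-check of the reheating formulae, the short script below reproduces the reheating temperature of the first row of Table \ref{mastertable}. This is a sketch only: the reduced Planck mass value $M_{Pl}\simeq 2.4\times 10^{18}$ GeV and the identification $M=M_{X}/2$ are assumptions made here, with the inflaton mass taken from the tabulated $M_{Inf}$ and $\gamma=1$.

```python
from math import pi, sqrt, log10

# All masses in reduced Planck units (M_Pl = 1); inputs from the first
# table row: M_Inf = M_X = 1.16e-5, gamma = 1.
M_PL_GEV = 2.4e18          # assumed reduced Planck mass in GeV
M_X = 1.16e-5              # inflaton mass, M_X = 2M
M = M_X / 2
gamma = 1.0
M_nu = gamma * M**2        # M_{nu^c} = gamma <nu_H>^2 / M_Pl with <nu_H> = M

# Total inflaton decay width and the resulting reheating temperature
Gamma_X = (M_nu / M)**2 * M_X / (16 * pi)
T_RH = sqrt(Gamma_X) * M_PL_GEV   # T_RH ~ sqrt(Gamma_X M_Pl), in GeV

print(log10(T_RH))   # close to the tabulated log10(T_RH/GeV) = 9.83
```

The same two-line estimate, rerun with the other rows of the table, tracks the tabulated $\log(T_{RH}/\text{GeV})$ values to within rounding.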
\n\n\n\n\n\n\n\n\\begin{table}[t]\n\t\\resizebox{\\textwidth}{!}{\n\t\\begin{tabular}{|lc|cccc|cc|ccc|c|}\n\t\t\\hline\n\t\n\t\t\n\t\t$\\frac{X_{0}}{M_{Pl}}$&$\\frac{X_{e}}{M_{Pl}}$&$\\gamma$&$\\tilde{\\lambda}$ & $\\xi$ &$ \\delta$ & $\\frac{M_{Inf}}{M_{Pl}}$ & $\\frac{M_{\\nu^{c}}}{M_{Pl}}$ & $n_{s}$&$r$&$\\frac{dn_{s}}{dln\\kappa}$&$\\log{(T_{RH}\/ GeV)}$\\\\\n\t\t\\hline\n\t\t\t15.04 &1.41& 1&0.0384&0.9936& $10^{-6}$& $1.16\\times10^{-5}$& $3.4\\times10^{-11}$& 0.968& 0.1070& $-4.7\\times{10^{-4}}$&9.83\\\\\n\t\t\t15.04 &1.41& 0.5&0.0384&0.9936& $10^{-6}$& $1.16\\times10^{-5}$& $1.7\\times10^{-11}$& 0.968& 0.1070& $-4.7\\times{10^{-4}}$&9.53\\\\\n\t\t\t15.04 &1.41& 0.1&0.0384&0.9936&$10^{-6}$& $1.16\\times10^{-5}$& $3.4\\times10^{-12}$& 0.968& 0.1070& $-4.7\\times{10^{-4}}$&8.84\\\\\n\t\t\\hline\n13.848 &1.41& 1& 0.0304&0.98& $10^{-4.61}$& $9.25\\times10^{-6}$& $2.139\\times10^{-11}$& 0.971& 0.057& $-2.87\\times{10^{-4}}$&9.683\\\\\n13.848 &1.41& 0.5& 0.0304&0.98& $10^{-4.61}$& $9.25\\times10^{-6}$& $1.07\\times10^{-11}$& 0.971& 0.057& $-2.87\\times{10^{-4}}$&9.382\\\\\n13.848&1.41& 0.1& 0.0304&0.98& $10^{-4.61}$& $9.25\\times10^{-6}$& $2.139\\times10^{-12}$& 0.971& 0.057& $-2.87\\times{10^{-4}}$&8.683\\\\\n\t\\hline\n\n\t\t\t\t\n\t\t\n\t\t\n\t\t\n\t\n\n\t\n\t\n\t\t\n\t\t\n\n\t\n\t\t12.83& 1.40&1& 0.02141& 0.97& $10^{-4.22}$& $6.5\\times10^{-6}$&\n\t\t\t$1.05\\times10^{-11}$& 0.967& 0.0238& $1.5\\times{10^{-6}}$& 9.45\\\\\n\t\t\t\n\t\t\t12.83& 1.40&0.5& 0.02141& 0.97& $10^{-4.22}$& $6.5\\times10^{-6}$&\n\t\t\t$5.29\\times10^{-12}$& 0.967& 0.0238& $1.5\\times{10^{-6}}$& 9.15\\\\\n\t\t\t\n\t\t\t\t12.83& 1.40&0.1& 0.02141& 0.97& $10^{-4.22}$& $6.5\\times10^{-6}$&\n\t\t\t\t$1.05\\times10^{-12}$& 0.967& 0.0238& $1.5\\times{10^{-6}}$& 8.45\\\\\n\t\t\t\t\\hline\n\t\t\t12.69& 1.40&1& 0.019& 0.97& $10^{-3.72}$& $5.8\\times10^{-6}$&\n\t\t\t$8.4\\times10^{-12}$& 0.958& 0.018& $2.3\\times{10^{-4}}$& 9.38\\\\\n\t\t\t\n\t\t\t12.69& 1.40&0.5& 0.019& 0.97& $10^{-3.72}$& 
$5.8\\times10^{-6}$&\n\t\t\t$4.2\\times10^{-12}$& 0.958& 0.018& $2.3\\times{10^{-4}}$& 9.08\\\\\n\t\t\t\n\t\t\t12.69& 1.40&0.1& 0.019& 0.97& $10^{-3.72}$& $5.8\\times10^{-6}$&\n\t\t\t$8.4\\times10^{-13}$& 0.958& 0.018& $2.3\\times{10^{-4}}$& 8.3\\\\\n\t\t\t\\hline\n\t\t\t11.85& 1.40&1& 0.0118&0.96& $10^{-4.82}$& $3.57\\times10^{-6}$& \n\t\t\t$3.2\\times10^{-12}$& 0.966& 0.0061& $5.1\\times{10^{-5}}$& 9.065\\\\\n\t\t\t\n\t\t\t\t11.85& 1.40&0.5& 0.0118&0.96& $10^{-4.82}$& $3.57\\times10^{-6}$& \n\t\t\t\t$1.6\\times10^{-12}$& 0.966& 0.0061& $5.1\\times{10^{-5}}$& 8.76\\\\\n\t\t\t\t\n\t\t\t\t\t11.85& 1.40&0.1& 0.0118&0.96& $10^{-4.82}$& $3.57\\times10^{-6}$& \n\t\t\t\t\t$3.2\\times10^{-13}$& 0.966& 0.0061& $5.1\\times{10^{-5}}$& 8.065\\\\\n\t\t\t\t\\hline\n\t\t\t11.79& 1.40& 1&0.010& 0.96 & $10^{-4.397}$& $3.13\\times10^{-6}$ &\n\t\t\t$2.5\\times10^{-12}$& 0.957& 0.0050& $2.1\\times{10^{-4}}$& 8.98\\\\\n\t\t\t\n\t\t\t\t11.79& 1.40&0.5& 0.010& 0.96 & $10^{-4.397}$& $3.13\\times10^{-6}$ &\n\t\t\t\t$1.2\\times10^{-12}$& 0.957& 0.0050& $2.1\\times{10^{-4}}$& 8.67\\\\\n\t\t\t\n\t\t\t\t11.79& 1.40&0.1& 0.010& 0.96 & $10^{-4.397}$& $3.13\\times10^{-6}$ &\n\t\t\t\t$2.5\\times10^{-13}$& 0.957& 0.0050& $2.1\\times{10^{-4}}$& 7.97\\\\\n\t\t\t\n\t\t\\hline\n\t\t\t11.64& 1.404&1&0.00891&0.958& ${10^{-4.5}}$& $2.71\\times10^{-6}$& \n\t\t\t$1.85\\times10^{-12}$& 0.957& 0.0034& $1.8\\times{10^{-4}}$& 8.89\\\\\n\t\t\t\n\t\t\t11.64& 1.404&0.5&0.00891&0.958& ${10^{-4.5}}$& $2.71\\times10^{-6}$& \n\t\t\t$9.24\\times10^{-13}$& 0.957& 0.0034& $1.8\\times{10^{-4}}$& 8.59\\\\\n\t\t\t\t11.64& 1.404&0.1&0.00891&0.958& ${10^{-4.5}}$& $2.71\\times10^{-6}$& \n\t\t\t\t$1.84\\times10^{-13}$& 0.957& 0.0034& $1.8\\times{10^{-4}}$& 7.89\\\\\n\t\t\t\\hline\n\t\t\t11.59& 1.40&1&0.0084&0.958& ${10^{-4.5}}$& $2.6\\times10^{-6}$& \n\t\t\t$1.64\\times10^{-12}$& 0.956& 0.00299& $1.9\\times{10^{-4}}$& 8.84\\\\\n\t\t\t\n\t\t\t11.59& 1.40&0.5&0.0084&0.958& ${10^{-4.5}}$& $2.6\\times10^{-6}$& 
\n\t\t\t$8.2\\times10^{-13}$& 0.956& 0.00299& $1.9\\times{10^{-4}}$& 8.54\\\\\n\t\t\t\t11.59& 1.40&0.1&0.0084&0.958& ${10^{-4.5}}$& $2.6\\times10^{-6}$& \n\t\t\t\t$1.64\\times10^{-13}$& 0.956& 0.00299& $1.9\\times{10^{-4}}$& 7.84\\\\\n\t\t\t\\hline\n\t\\end{tabular}}\n\t\n\t\\caption{ \\small{Inflationary predictions of the model for various values of $\\tilde{\\lambda}$, $\\xi$, $\\delta$ and $\\gamma$. The number of e-folds is taken to be $N=60$.}}\n\t\\label{mastertable}\n\\end{table}\n\n\n\\subsection{INFLATION ALONG S DIRECTION}\n\nHere we briefly discuss the case where the $S$ field has the r\\^ole of the inflaton. In the potential (\\ref{fullpotential}) we put $\\langle\\nu_H\\rangle=0$ and $\\langle \\ov{\\nu}_{H}\\rangle=0$, so that we have:\n\\begin{equation}\n\\begin{split}\nV= \\frac{144 \\tilde{\\kappa}^{2} S^{2}\\left( \\frac{m}{2 \\tilde{\\kappa}} - S^{2}\\right)^{2}}{\\left(3 - S^{2}\\right)^{2}}.\n\\end{split}\n\\end{equation}\nIn order to remove the singularity of the denominator, we take $m=6 \\tilde{\\kappa}$. 
In this case we get the following simple form\n\n\\begin{equation}\\label{S_chaotic}\nV= 144 \\tilde{\\kappa}^{2}S^{2}\n\\end{equation}\n\n\\noindent which has the form of a chaotic potential.\n\nThe kinetic term is given by\n\\begin{equation}\n\\begin{split}\n\\mathcal{L}=\\frac{1}{2} K^{j}_{i} \\left(\\partial S\\right)^{2} -144 \\tilde{\\kappa}^{2} S^{2} \\quad \\text{where}\n \\quad K^{j}_{i}=\\frac{\\partial^{2} K}{\\partial S \\partial S^{*}}=\\frac{9}{\\left(3-SS^{*}\\right)^{2}}~\\cdot \n\\end{split}\n\\end{equation}\nLet $S=\\dfrac{X}{\\sqrt{2}}$; then the potential in (\\ref{S_chaotic}) becomes $V= 72 \\tilde{\\kappa}^{2}X^{2}$, and from the coefficient of the kinetic term we can express $X$ in terms of a canonically normalized field $\\chi$:\n\\begin{equation}\n\\begin{split}\n X =\\sqrt{6} \\tanh\\left(\\frac{\\chi}{\\sqrt{6}}\\right).\n\\end{split}\n\\end{equation}\nThe potential in terms of the canonically normalized field reads\n\\begin{equation}\\label{ValongS}\nV= 432 \\tilde{\\kappa}^{2} \\tanh^{2}\\left(\\frac{\\chi}{\\sqrt{6}}\\right),\n\\end{equation}\n\n\\noindent which is analogous to the conformal chaotic inflation model, or T-Model inflation, already mentioned before. Potentials for T-Model inflation are given in Equation (\\ref{Tmodels}). For $n=1$ the potential becomes $V(\\chi)=\\uplambda \\tanh^{2}\\left(\\frac{\\chi}{\\sqrt{6}}\\right)$, which coincides with our potential in (\\ref{ValongS}) for $\\uplambda=432 \\tilde{\\kappa}^{2}$. We can understand the inflationary behaviour in this type of model by considering two cases. 
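The algebraic steps above, namely the cancellation produced by the choice $m=6\tilde{\kappa}$ and the passage to the canonically normalized field, can be checked numerically. Below is a minimal sketch; the sample values of $\tilde{\kappa}$, $S$ and $\chi$ are arbitrary test points, not model inputs.

```python
from math import sqrt, tanh, isclose

kappa = 0.7          # arbitrary test value for kappa-tilde
m = 6 * kappa        # the choice that removes the singular denominator

def V_full(S):
    # V = 144 k^2 S^2 (m/(2k) - S^2)^2 / (3 - S^2)^2
    return 144 * kappa**2 * S**2 * (m / (2 * kappa) - S**2)**2 / (3 - S**2)**2

# With m = 6*kappa the factor (3 - S^2)^2 cancels and V = 144 k^2 S^2
for S in (0.3, 0.9, 1.5):
    assert isclose(V_full(S), 144 * kappa**2 * S**2, rel_tol=1e-12)

# In terms of the canonically normalized field chi, with S = X/sqrt(2)
# and X = sqrt(6) tanh(chi/sqrt(6)), the potential takes the T-model form
for chi in (0.5, 2.0, 5.0):
    X = sqrt(6) * tanh(chi / sqrt(6))
    assert isclose(72 * kappa**2 * X**2,
                   432 * kappa**2 * tanh(chi / sqrt(6))**2, rel_tol=1e-12)
print("ok")
```

Both assertions pass identically in $\tilde{\kappa}$, confirming that the quadratic form and the $\tanh^{2}$ form describe the same potential.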
\n\n First, for $\\chi\\geqslant1$, writing the potential in exponential form we have\n\n\\begin{equation}\nV= \\uplambda \\left(\\frac{1-e^{-\\sqrt{\\frac{2}{3}} \\chi}}{1+e^{-\\sqrt{\\frac{2}{3}} \\chi}}\\right)^{2}=\\uplambda \\left(1-\\frac{2e^{-\\sqrt{\\frac{2}{3}} \\chi}}{1+e^{-\\sqrt{\\frac{2}{3}} \\chi}}\\right)^{2}\\simeq\\uplambda\\left(1-2e^{-\\sqrt{\\frac{2}{3}} \\chi}\\right)^{2}\n\\end{equation}\n\n\\noindent and for large values of $\\chi$ we can write\n\n\\begin{equation}\nV\\simeq\\uplambda\\left(1-4e^{-\\sqrt{\\frac{2}{3}} \\chi}\\right)~,\n\\end{equation}\n\n\\noindent where $\\uplambda=432 \\tilde{\\kappa}^{2}$. In the slow-roll approximation, and for a large number of e-folds ($N$), the field evolves according to\n\n\\begin{equation} \\label{hN}\n\\frac{d\\chi}{dN}=\\frac{V^{\\prime}}{V} =4\\sqrt{\\frac{2}{3}}e^{-\\sqrt{\\frac{2}{3}} \\chi}.\n\\end{equation} \n\n\\noindent Integrating~(\\ref{hN}) we have $\\int{e^{\\chi\\sqrt{2\/3}}d\\chi}=\\int{4\\sqrt{\\frac{2}{3}}dN}$, which gives the relation\n\\begin{equation}\\label{nF}\n\\begin{split}\ne^{-\\sqrt{\\frac{2}{3}}\\chi} =\\frac{3}{8 N}.\n\\end{split}\n\\end{equation}\n\n\\noindent Using the relation above, we find for the slow-roll parameter $\\epsilon$\n\\begin{equation}\n\\begin{split}\n\\epsilon=\\dfrac{1}{2}\\left(\\frac{V^{\\prime}}{V}\\right)^{2}=\\dfrac{1}{2}\\left(4\\sqrt{\\frac{2}{3}}e^{-\\sqrt{\\frac{2}{3}} \\chi}\\right)^{2}=\\frac{3}{4 N^{2}}.\n\\end{split}\n\\end{equation}\nSimilarly, the second slow-roll parameter $\\eta$ is found to be\n\\begin{equation}\n\\eta=\\left(\\frac{V^{\\prime\\prime}}{V}\\right)=-\\frac{1}{N}.\n\\end{equation}\nFinally, the predictions for the tensor-to-scalar ratio $r$ and the scalar spectral index $n_{s}$ are\n\\begin{equation}\n\\begin{split}\nr=\\frac{12}{N^{2}}\\quad,\\quad n_{s}=1+2\\eta-6\\epsilon=1-\\frac{2}{N}-\\frac{9}{2 N^{2}}\n\\end{split}\n\\end{equation}\n\\\\\n\\noindent and for $N=60$ e-foldings we get $n_{s} \\simeq 0.9673$ 
and $r \\simeq 0.0032$.\n\nRegarding the case with $\\chi \\eqslantless 1$, we can see from the expression (\\ref{ValongS}) that the potential reduces to a quadratic chaotic form. The tree-level inflationary predictions in this case are $\\left(n_{s},r\\right)\\approx \\left(0.967,0.130\\right)$, which are ruled out by the latest \\emph{Planck} $2015$ results. \n\nThe discussion above strongly depends on the assumption $m=6\\tilde{\\kappa}$ that we imposed on the potential in order to simplify it. If we consider small variations of this assumption, similar to \\eqref{singularitycondition2}, and modify the condition to $m=6\\tilde{\\kappa}+\\delta$, we find that the parameter $\\delta$ contributes only to $n_{S}$ while the tensor-to-scalar ratio $r$ remains constant. \n\n\n\n\\section{CONCLUSIONS}\n\n\nIn the present work we have studied ways to realise the inflationary scenario in a no-scale supersymmetric model based on the Pati-Salam gauge group $SU(4)\\times SU(2)_L\\times SU(2)_R$, supplemented with a $Z_2$ discrete symmetry. The spontaneous breaking of the group factor $SU(4)\\to SU(3)\\times U(1)_{B-L}$ is realised via the $SU(4)$ adjoint $\\Sigma=(15,1,1)$, and the breaking of the $SU(2)_{R}$ symmetry is achieved by non-zero vevs of the neutral components $\\nu_{H}, \\ov{\\nu}_{H}$ of the Higgs fields $(4,1,2)_H$ and $(\\bar 4,1,2)_{\\bar H}$. \n\n We have considered a K\\\"ahler potential of no-scale structure and assumed that the inflaton field is a combination of $\\nu_{H}, \\ov{\\nu}_{H}$, and found that the resulting potential is similar to the one presented in \\cite{Ellis:2014dxa, Ellis:2016spb}, although our parameter space differs substantially. Consequently, there are qualitatively different solutions, which are presented and analysed in the present work. The results strongly depend on the parameter $\\xi$, and for various characteristic values of the latter we obtain different types of inflation models. 
In particular, for $\\xi=0$ and a canonically normalized field $\\chi\\geq{1}$, the potential reduces to that of the Starobinsky model, while for $\\xi=1$ the model acquires a chaotic-inflation profile. The results for $0<\\xi<1$ have been analysed in detail, and reheating via the decay of the inflaton into right-handed neutrinos has been discussed.\n\n We also briefly discussed the alternative possibility where the $S$ field has the r\\^ole of the inflaton. In this case the potential is exponentially flat for $\\chi\\geq{1}$, as in the Starobinsky model, and similar conclusions can be drawn. On the other hand, for small $\\chi$ it reduces to a quadratic potential.\n\nIn conclusion, the $SU(4)\\times SU(2)_L\\times SU(2)_R$ model described in this paper can provide inflationary predictions consistent with the observations. Performing a detailed analysis, we have shown that solutions consistent with the Planck data are found for a wide range of the parameter space of the model. In addition, the inflaton can provide masses to the right-handed neutrinos and, depending on the value of the reheating temperature and the right-handed neutrino mass spectrum, thermal or non-thermal leptogenesis is a natural outcome. Finally, we mention that in several cases the tensor-to-scalar ratio $r$, a canonical measure of primordial gravitational waves, is close to $10^{-2}-10^{-3}$ and can be tested in future experiments.\n\n\n\n\\vspace{1cm}\n{\\bf \\large Acknowledgements}\\quad\\quad\n\n\\noindent The authors are thankful to George K. Leontaris, Qaisar Shafi, Tianjun Li and Mansoor Ur Rehman for helpful discussions and useful comments. WA would like to thank the Physics Department at the University of Ioannina for hospitality and for providing a conducive atmosphere for research, where part of this work has been carried out. 
WA was supported by the CAS-TWAS Presidents Fellowship Programme.\n\n\n\n\n\n\n \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\n\n\\par Let $g_1$ and $g_2$ be eigencuspforms of the same weight $k\\geq 2$ and level $M_1$ and \n$M_2$ respectively. Throughout, we fix a prime $p\\geq 5$, and $\\mathfrak{p}|p$ a prime in the ring of integers $\\mathcal{O}_L$ of a suitably large number field $L$ containing the field of Fourier coefficients generated by $g_1$ and $g_2$. Assume that the $g_i$ have trivial nebentype character\nand that all but finitely many of the Hecke eigenvalues of the $g_i$ are\ncongruent modulo $\\mathfrak{p}$. Then, we say that the newforms\n$g_i$ are $\\mathfrak{p}$-congruent. In the situation of $\\mathfrak{p}$-congruent newforms, there is a general \nphilosophy that the critical values of any L-function functorially associated\nto $g_1$ and $g_2$ will also be $\\mathfrak{p}$-congruent. More generally, one \nexpects the $\\mathfrak{p}$-adic L-functions of $g_1$ and $g_2$, if they exist, to also be $\\mathfrak{p}$-congruent. Furthermore, the corresponding $p$-primary Selmer groups defined over the cyclotomic $\\mathbb{Z}_p$-extension of $\\mathbb{Q}$ should also be related.\n\n\\par Before we state our results, we require more precise notation. Let $g_1=\\sum_{n\\geq 1} a(n, g_1) q^n$ and \n$g_2 =\\sum_{n\\geq 1} a(n, g_2) q^n$ be the Fourier expansions\nof the $g_i$. 
We make the following assumptions for $i=1,2$:\n\\begin{itemize}\n \\item $g_i$ is not of CM-type, and has trivial central character;\n \\item $g_i$ is $\\mathfrak{p}$-ordinary, i.e., $a(p, g_i)$ is a $\\mathfrak{p}$-adic unit;\n \\item $g_i$ is a $p$-stabilized newform, meaning that the level $M_i$ of $g_i$ is divisible by precisely the first power of $p$, that each $g_i$ is an eigenvector for the $p$-th Hecke operator $U_p$ (with unit eigenvalue), and that the newform associated to $g_i$ has level either $M_i$ or $M_i\/p$; and\n\\item the Galois representation into $\\op{GL}_2(\\bar{\\mathbb{F}}_p)$ associated to $g_i$ is irreducible and $\\mathfrak{p}$-distinguished.\n\\end{itemize}\n\n\\par Let $\\mathcal{O}$ denote the completion of $\\mathcal{O}_L$ at $\\mathfrak{p}$ and write $K$ for the fraction field of $\\mathcal{O}$. The newforms $g_1$ and $g_2$ are said to be \\emph{$\\mathfrak{p}$-congruent} if $a(q, g_1)\\equiv a(q, g_2)\\mod{\\mathfrak{p}}$ for all primes $q\\nmid M_1 M_2 p$. We simply say that $g_1$ and $g_2$ are $p$-congruent if they are $\\mathfrak{p}$-congruent for some prime $\\mathfrak{p}|p$. \n\n\n\nIn \\cite{GV00}, Greenberg and the third named author of the present work studied the main conjecture of Iwasawa theory for standard 2-dimensional representations $\\rho_i$ attached to $p$-congruent modular forms $g_i$ as above. In this case, the $p$-adic L-functions of the $g_i$ are the well-known $p$-adic L-functions arising from modular symbols, and the Selmer groups are Greenberg's $p$-ordinary Selmer groups, arising from Galois cohomology. 
In this situation, the main results of \\cite{GV00} show that the Selmer groups and $p$-adic L-functions inherit congruence properties from the congruent modular forms, just as predicted by the general philosophy. More specifically, the authors of \\cite{GV00} studied the relationship between the Iwasawa invariants associated to Galois representations arising from Hecke eigencuspforms that are residually isomorphic, and proved certain explicit formulae relating the values of these invariants. These results were used to deduce certain cases of the Main Conjecture of Iwasawa theory, by combining the formulae for the invariants with deep results of Kato.\n\nThe primary goal of the present paper is to generalize the explicit relationship between the Iwasawa invariants of the congruent forms $g_1, g_2$ from the case of the standard representation of dimension 2 to the case of the symmetric square representation, which has dimension 3. A secondary accomplishment in this paper is the complete proof of the integrality of the $p$-adic L-functions of degree 3, since this result (although seemingly known to experts) seems not to be found in the literature. Implicit in this discussion of integrality is a careful normalization of the periods appearing in the definition of the $p$-adic L-function. This is a subtle point: it turns out to be quite difficult to show that the normalization which gives rise to congruences coincides with the canonical normalization given by Hida. We show that Hida's period gives the correct congruences if a certain variant of Ihara's lemma holds. This lemma is unknown in weight $k >2$ in the generality which we require, although it holds unconditionally in weight 2. We remark also that all the results in this paper are much easier to prove in the case of weight $2$, and the main novelty lies in the results for higher weight. \n\n\n\nTo continue, we require more notation. 
Let the $g_i$ be as above. Let $\\mathbb{Q}_{\\op{cyc}}$ denote the cyclotomic $\\mathbb{Z}_p$-extension of $\\mathbb{Q}$, i.e., the unique $\\mathbb{Z}_p$-extension of $\\mathbb{Q}$ contained in $\\mathbb{Q}(\\mu_{p^\\infty})$. Let \n$\\Lambda=\\mathcal{O}[[\\text{Gal}(\\mathbb{Q}_{\\op{cyc}}\/\\mathbb{Q})]]$ denote the usual Iwasawa algebra. For $i=1,2$, let\n\\[\\rho_{g_i}:\\op{Gal}(\\bar{\\mathbb{Q}}\/\\mathbb{Q})\\rightarrow \\op{GL}_2(\\mathcal{O})\\]be the associated Galois representation. If we fix a rank-$2$ Galois-stable $\\mathcal{O}$-lattice $T_{g_i}$ in the $p$-adic representation associated to $g_i$, normalized as in \\cite{lz16}, we may view the representations $\\rho_{g_i}$ as taking values in $\\mathcal{O}$. For $i=1,2$, the residual representation is denoted $\\bar{\\rho}_{g_i}:=\\rho_{g_i}\\mod{\\mathfrak{p}}$. Since the modular forms $g_1$ and $g_2$ are $\\mathfrak{p}$-congruent, and since we will assume throughout this paper that $\\bar{\\rho}_{g_1}$ and $\\bar{\\rho}_{g_2}$ are \\emph{absolutely irreducible}, we find that the semisimplifications $\\bar{\\rho}_{g_1}^{\\op{ss}}$ and $\\bar{\\rho}_{g_2}^{\\op{ss}}$ are isomorphic.\n\n\nLet $\\psi$ denote a Dirichlet character of conductor $c_\\psi$, where $(c_\\psi, 2pM_1M_2)=1$. In this paper we will assume that\n\\begin{itemize}\n \\item $\\psi$ is even and non-quadratic, and\n \\item the coefficient field $K$ contains the values of $\\psi$.\n\\end{itemize}\nLet $r_{g_i}=\\op{Sym}^2(\\rho_{g_i})$ denote the symmetric square representation for $g_i$, with $i=1,2$, viewed as taking values in the symmetric square of the lattice $T_{g_i}$. In this setting the representations $r_{g_i}\\otimes\\psi:\\op{Gal}(\\bar{\\mathbb{Q}}\/\\mathbb{Q})\\rightarrow \\op{GL}_3(\\mathcal{O})$ are residually isomorphic. Let $\\mathbf{A}_{i, \\psi}$ be the $p$-primary representation associated to the underlying Galois-stable $\\mathcal{O}$-lattice for $r_{g_i}\\otimes\\psi$. 
Note that since $g_i$ is \n$p$-ordinary, \nso is $r_{g_i}\\otimes\\psi$. We work with the primitive Selmer group $\\op{Sel}_{p^\\infty}(\\mathbf{A}_{i,\\psi}\/\\mathbb{Q}_{\\op{cyc}})$ as defined by Greenberg in \n\\cite{Gre89}. It is shown by Loeffler and Zerbes in \\cite{lz16} that $\\op{Sel}_{p^\\infty}(\\mathbf{A}_{i,\\psi}\/\\mathbb{Q}_{\\op{cyc}})$ \nis $\\Lambda$-cotorsion and this allows us to define a nonzero algebraic $p$-adic L-function \n$L^\\text{alg}(r_{g_i}\\otimes\\psi)\\in \\Lambda$. We also point out that the existence of $L^\\text{alg}(r_{g_i}\\otimes\\psi)\\in \\Lambda$ and part of the main conjecture is \nproven, in some cases, in unpublished\nwork of Urban \\cite{urban06}. The arguments in this paper do not assume the main conjecture. By the Weierstrass preparation theorem, \n\\[L^\\text{alg}(r_{g_i}\\otimes\\psi)=p^{\\mu} a(T) u(T),\\] where $a(T)$ is a distinguished polynomial and $u(T)$ is a unit in $\\Lambda$. The $\\mu$-invariant $\\mu^{\\op{alg}}(r_{g_i}\\otimes\\psi)$ is the number $\\mu$ in the above factorization, and the $\\lambda$-invariant $\\lambda^{\\op{alg}}(r_{g_i}\\otimes\\psi)$ is the degree of $a(T)$.\n\n\\par Next, we define a primitive $p$-adic L-function $L^\\text{an}(r_{g_i}\\otimes\\psi)\\in \\Lambda$, for $i=1,2$.\nThis is essentially done in old work of Schmidt and others; see \\cite{schmidt88} for the basic source,\nand the discussion in \\cite{lz16} for an account of the various refinements. For the most part, we shall adopt the notation of \\cite{lz16}, which is different from \nthat of \\cite{schmidt88}. The reason for this is\nthat the authors of \\cite{lz16} discuss and define the \n Selmer groups corresponding to their L-functions,\n while there is no convenient reference for the\nSelmer groups corresponding correctly to the L-functions as normalized in \\cite{schmidt88}. 
Under the present hypotheses, Schmidt proves the existence of an element $L^\\text{an}(r_{g_i}\\otimes\\psi)\\in \\Lambda\\otimes\\mathbb{Q}$ satisfying\na certain interpolation property with respect to special values of the complex symmetric square L-function. The authors of \\cite{lz16}\nprovide a convenient summary of the rather complicated history of this result. The interpolation property defining\nthe $p$-adic L-function\nis given at the end of Section 2 below. \n\nIt is important to remark that the definition of $L^\\text{an}(r_{g_i}\\otimes\\psi)$\npresupposes the choice of certain transcendental period, and that the convention that we use differs in one important respect\nfrom that of \\cite[section 2]{lz16}. As a result of our convention, the $p$-adic L-function we work with is actually contained in $\\Lambda$. We discuss this normalization in further detail in Section 2 of this paper, and summarize the key points later in this introduction.\n According to our normalization, the main conjecture then predicts the equality\n$$L^\\text{alg}(r_{g_i}\\otimes\\psi)= u(T)\\cdot L^\\text{an}(r_{g_i}\\otimes\\psi)\\in \\Lambda$$\nwhere $u(T)$ is a unit in $\\Lambda$. We remark that the congruence ideal which appears in the statement of the Main\nConjecture in \\cite{lz16} does not play a role here, since our definition of the period incorporates this factor. We remark also that in the convention of \\cite{lz16}, $\\Lambda$ refers to the completed\ngroup algebra of ${\\operatorname{Gal}}(\\mathbb{Q}(\\mu_{p^\\infty})\/\\mathbb{Q})\\cong\\mathbb{Z}_p^\\times$, whereas we have taken $\\Lambda$ to be the group algebra\nof ${\\operatorname{Gal}}(\\mathbb{Q}_{\\op{cyc}}\/\\mathbb{Q})\\cong \\mathbb{Z}_p\\cong 1+p\\mathbb{Z}_p$. In other words, we do not cover the case of nontrivial tame cyclotomic twists. The methods\nof this paper apply equally well in this excluded case, but we have chosen to avoid it, mostly for simplicity. 
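Before relating the invariants of the two forms, it may help to recall concretely how they are read off. For a nonzero polynomial approximation to an element of $\Lambda\cong\mathcal{O}[[T]]$, $\mu$ is the minimal $\mathfrak{p}$-adic valuation among the coefficients, and $\lambda$ is the smallest index at which that minimum is attained; this is equivalent to the Weierstrass factorization $p^{\mu}a(T)u(T)$ described above. The following sketch works over $\mathbb{Z}_p$ with a hypothetical example polynomial.

```python
def valuation(n, p):
    """p-adic valuation of a nonzero integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def iwasawa_invariants(coeffs, p):
    """mu and lambda of a nonzero polynomial sum(coeffs[i] * T**i) over Z_p.

    By the Weierstrass preparation theorem f = p^mu * a(T) * u(T), with a(T)
    distinguished of degree lambda: mu is the minimal valuation among the
    coefficients, and lambda is the smallest index attaining that minimum.
    """
    vals = [valuation(c, p) for c in coeffs if c != 0]
    mu = min(vals)
    lam = next(i for i, c in enumerate(coeffs)
               if c != 0 and valuation(c, p) == mu)
    return mu, lam

# Example with p = 3: f = 9 + 18 T + 3 T^2 + 27 T^3 + 81 T^4
print(iwasawa_invariants([9, 18, 3, 27, 81], 3))  # -> (1, 2)
```

In the example, $f=3\,(3+6T+T^{2}+9T^{3}+27T^{4})$, and the inner factor is a unit multiple of a distinguished polynomial of degree $2$, so $\mu=1$ and $\lambda=2$.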
\n\nAssuming the integrality of the $p$-adic L-function, we have well-defined Iwasawa invariants $\\lambda^\\text{an}(r_{g_i}\\otimes\\psi), \\mu^\\text{an}(r_{g_i}\\otimes\\psi)\\in \\mathbb{Z}_{\\geq 0}$ on the analytic side as well. The main conjecture implies that \n$$\\lambda^\\text{an}(r_{g_i}\\otimes\\psi)=\\lambda^{\\op{alg}}(r_{g_i}\\otimes\\psi)$$ and\n$$\\mu^\\text{an}(r_{g_i}\\otimes\\psi)=\\mu^{\\op{alg}}(r_{g_i}\\otimes\\psi)$$ for $i=1,2$. \n\nOur goal is to relate the invariants for the congruent forms $g_1$ and $g_2$. However, the primitive invariants as defined above are not related at all; it is quite possible for the primitive invariants to be trivial in one case yet highly nontrivial in the other. The correct relationship, as discovered in \\cite{GV00}, is between \\emph{imprimitive} Iwasawa invariants, which we now proceed to define. A key to our construction is the specification of certain sets $\\Sigma$ of primes: we let $\\Sigma$ denote a finite set of prime numbers $q\\neq p$ such that\n\\begin{itemize}\n\\item $2\\in\\Sigma$;\n\\item if $q\\in \\Sigma$ is odd, then $\\op{min}(\\op{ord}_q(M_1),\\op{ord}_q(M_2))< 2$; and \n\\item if $q\\vert M_1M_2$ is such that $\\op{min}(\\op{ord}_q(M_1),\\op{ord}_q(M_2))< 2 $, then $q\\in\\Sigma$.\n \\end{itemize}\nIn other words, if we write $a_i=\\op{ord}_q(M_i)$, then $\\Sigma$ includes $2$, and all odd $q$ for which $\\{a_1, a_2\\} = \\{0, n\\}$ or $\\{1, n\\}$ as unordered sets, for any $n>0$, as well as some other odd primes for which $a_1=a_2=0$. Then one has the following basic lemma, which seems to be due to Livn\\'e \\cite{livne}; it is stated in the form we need by Carayol \\cite{carayol}, page 789. \n\\begin{Lemma}\n\\label{dtlemma}\nFor an odd prime $q\\in \\Sigma$, we have that $\\op{max}(\\op{ord}_q(M_1),\\op{ord}_q(M_2))\\leq 2$.\nFurthermore, if $q\\notin\\Sigma$, then $\\op{ord}_q(M_1)= \\op{ord}_q(M_2)$. 
For $q=2$, we either\nhave $\\op{max}(\\op{ord}_2(M_1),\\op{ord}_2(M_2))\\leq 2$, or else $\\op{ord}_2(M_1)=\\op{ord}_2(M_2)$.\n\\end{Lemma}\nThe essential point is that the integers $M_i$ are not too far from the Artin conductor\nof their common residual representation, and hence from each other. Otherwise stated, the \\emph{only} way the exponents\n$a_1, a_2$ can be different at \\emph{any} prime $q$, including $q=2$, is if $\\{a_1, a_2\\}$ is one of $\\{0, 1\\}$ or $\\{1, 2\\}$ or $\\{0, 2\\}$, \nas unordered sets.\nAn examination of Livn\u00e9's proof shows that this follows from the fact\nthat the Swan conductor of the restriction of $\\rho_i$ to $\\op{Gal}(\\bar{\\mathbb{Q}}_q\/\\mathbb{Q}_q)$ coincides\nwith the Swan conductor of its reduction, since \nthe wild ramification is a pro-$q$ group. The importance of this lemma in our work cannot\nbe overstated -- our arguments for $p$-adic L-functions\nwould fail if there were\nsome prime $q$ at which the levels of $g_1$ and $g_2$ differed by a high\npower. \n\nLet $N$ denote the least common multiple of $M_1$, $M_2$, and $\\prod_{q\\in\\Sigma} q^2$. In view of the lemma above, the \nprime factorization of $N\/M_i$ is of the form $\\prod_{q\\in\\Sigma}q^{e_i},$ with $e_i\\leq 2$. Let $S$\nbe the set of primes dividing $N$, together with the primes dividing the conductor of $\\psi$ (which is coprime to $2N$). \nSet $S_0:=S\\backslash \\{p\\}$. For each $i$,\nwe have an imprimitive Selmer group, obtained by relaxing the local\nconditions at all the primes $q\\in S_0$. It is shown\nin \\cite{GV00} that the imprimitive\nSelmer group is cotorsion if and only if the primitive Selmer group\nis so. Thus, once again we have imprimitive invariants\n$\\lambda^{\\op{alg}}_{S_0}(r_{g_i}\\otimes\\psi)$ and $\\mu^{\\op{alg}}_{S_0}(r_{g_i}\\otimes\\psi)$. 
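\n\nTo illustrate the definitions with a purely hypothetical example (these levels do not arise elsewhere in this paper): if $M_1 = 3\\cdot 11\\cdot p$ and $M_2 = 3\\cdot 11^2\\cdot p$, then the minimal admissible choice is $\\Sigma = \\{2, 3, 11\\}$, since $\\op{min}(\\op{ord}_q(M_1), \\op{ord}_q(M_2)) < 2$ for $q = 3, 11$, and then\n$$N = \\op{lcm}\\big(M_1,\\ M_2,\\ 2^2\\cdot 3^2\\cdot 11^2\\big) = 2^2\\cdot 3^2\\cdot 11^2\\cdot p,$$\nso that $N\/M_1 = 2^2\\cdot 3\\cdot 11$ and $N\/M_2 = 2^2\\cdot 3$, in accordance with the bound $e_i\\leq 2$.\n\n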
The basic result is the\nfollowing:\n\n\\begin{Proposition}\n\\label{algebraic-invariants-intro}\nThe following statements hold:\n\\begin{itemize}\n \\item $\\mu_{S_0}^{\\op{alg}}(r_{g_i}\\otimes\\psi)=\n \\mu^{\\op{alg}}(r_{g_i}\\otimes\\psi)$, and\n \\item $\\lambda_{S_0}^{\\op{alg}}(r_{g_i}\\otimes\\psi)=\n \\lambda^{\\op{alg}}(r_{g_i}\\otimes\\psi) + \\sum_{q\\in S_0} \\sigma_i^{(q)}(\\psi).$\n\\end{itemize}\n\\end{Proposition}\n\nHere the integers $\\sigma_i^{(q)}(\\psi)$ are the degrees of certain polynomials\ncoming from applying the Weierstrass preparation theorem to the \nannihilators of certain local cohomology groups, which are known\nunconditionally to be torsion, and whose annihilators can be \ndescribed explicitly in terms of Euler factors. \nMost of this is carried over from \\cite{GV00}, where an analysis of the local conditions\nis made under very mild hypotheses. \n\nSimilar considerations apply to the $p$-adic L-function. Following\na construction due originally to Coates, Hida, and Schmidt, there\nexists an imprimitive $p$-adic L-function $L^{\\op{an}}_\n\\Sigma(r_{g_i}\\otimes\\psi)\\in \\Lambda$, with corresponding\ninvariants $\\lambda_{\\Sigma}^{\\op{an}}(r_{g_i}\\otimes\\psi)$ and $\\mu^{\\op{an}}_{\\Sigma}(r_{g_i}\\otimes\\psi)$. This L-function\nis produced by interpolation of special values of a degree three Euler product given by Shimura (see (\\ref{shimura-euler-product}) below),\nafter deleting the Euler factors\nat the primes in $\\Sigma$. An important observation is that, at each prime $q\\in S_0\\backslash \\Sigma$, the levels\n$M_1$ and $M_2$ are divisible by $q^2$. This is by our choice of $\\Sigma$. It follows from this, and \nour assumption that the forms have trivial central character, that the Euler factor at $q\\in S_0\\backslash \\Sigma$ in \nShimura's Euler product \n is trivial. 
We will discuss this point further in the sketch of proof.\n Therefore, we have $L^{\\op{an}}_\\Sigma(r_{g_i}\\otimes\\psi)= L^{\\op{an}}_{S_0}(r_{g_i}\\otimes\\psi)\\in \\Lambda$ \nand \\[\\lambda_{\\Sigma}^{\\op{an}}(r_{g_i}\\otimes\\psi)=\\lambda_{S_0}^{\\op{an}}(r_{g_i}\\otimes\\psi)\\text{ and } \\mu^{\\op{an}}_{\\Sigma}(r_{g_i}\\otimes\\psi)=\\mu^{\\op{an}}_{S_0}(r_{g_i}\\otimes\\psi).\\] \n\nWe remark that\nthe logical structure of the argument to construct $p$-adic L-functions\nis the opposite of what one might imagine -- one starts with \nthe imprimitive L-function and then passes to the primitive object\nby dividing out by explicit Euler factors. The fact that the \nresulting primitive L-function, \\emph{a priori} meromorphic,\nis actually analytic, is proven in \\cite{schmidt88}, under our\nhypotheses that the $g_i$ are not CM forms.\n\nOur main result in Section 2 is the following. As stated here, the result is dependent\non the validity of a certain variant of Ihara's lemma (see Hypothesis \\ref{hypothesis ihara}). The theorem\nholds unconditionally for $k=2$, and even for $k>2$, one can make an unconditional statement at the cost\nof introducing a certain ambiguity in the choice of periods. The only role played by Ihara's lemma\nis to remove the dependence of the period of the imprimitive L-function on the choice of the auxiliary\nprimes in the level, in the sense that the periods appearing in the imprimitive L-function may be different\nfrom those appearing in the primitive one. \n\\begin{Proposition} Assume that Hypothesis \\ref{hypothesis ihara} holds. 
The following statements hold.\n\\label{analytic-invariants-intro}\n\\begin{itemize}\n \\item $\\mu_{S_0}^\\text{an}(r_{g_i}\\otimes\\psi)=\n \\mu^{\\op{an}}(r_{g_i}\\otimes\\psi)$, and\n \\item $\\lambda_{S_0}^{\\op{an}}(r_{g_i}\\otimes\\psi)=\n \\lambda^{\\op{an}}(r_{g_i}\\otimes\\psi) + \\displaystyle\\sum_{q\\in S_0} \\sigma_i^{(q)}(\\psi).$\n\\end{itemize}\n\\end{Proposition}\n\n\\noindent Here the integers $\\sigma_i^{(q)}(\\psi)$ are the \\emph{same} as the ones occurring\nin the algebraic case, since the Euler factors in the algebraic\nand analytic sides are exactly the same. This is a deep fact: \nthe equality between the Galois-theoretic Euler factors \nin the algebraic case and the complex Euler factors in the analytic\nL-function is the local Langlands correspondence for the \n3-dimensional representations $r_{g_i}\\otimes\\psi$; see\n\\cite{GJ78}, or \\cite{schmidt88}. \n\nWith this preparation, we may now state our remaining results. Recall\nthat the character $\\psi$ is assumed to be \neven and non-quadratic, and to have conductor $c_\\psi$ coprime to $M_1M_2$,\nas well as to the primes in $\\Sigma$. \n\n\\par Our first task is to deal with the integrality of the $p$-adic L-functions, since\nthis integrality property is a folklore result for which a complete proof seems \nnot to have ever been written down. It is stated as Proposition 2.3.5\nin \\cite{lz16}, where it is attributed to Hida, but no reference is given. \nDespite searching many papers of Hida, we were only able to find a discussion for the case of weight $2$, in some notes from an instructional\nconference in India. Therefore, we provide a thorough discussion of integrality, and give an integral construction valid for all weights. The proof turns out to be somewhat delicate when the weight is large compared to\n$p$. 
For the convenience of the reader, we will give the definition of the canonical period (associated to a choice of level) as part of the sketch of the proof later on in this introduction. \n\n\\begin{Th}\n\\label{integrality-thm-intro} \nLet the notation be as above.\nThen the primitive L-functions $L^\\text{an}(r_{g_i}\\otimes\\psi)\\in \\Lambda$ are integral, normalized with Hida's canonical period, as in \\cite{lz16}. \nThe imprimitive $p$-adic L-functions \n$L^\\text{an}_\\Sigma(r_{g_i}\\otimes\\psi)= L^\\text{an}_{S_0}(r_{g_i}\\otimes\\psi) \\in \\Lambda$ are integral as well, for the same choice\nof periods as in the primitive case. \n\\end{Th}\n\n\nWith regard to the Iwasawa invariants, our result is as follows. Once again, we include the Ihara Lemma\nas a hypothesis, since formulating a general result without it would give a clumsy statement. Consider the pair\n$g_1, g_2$ of $p$-congruent forms, of level $M_1, M_2$ respectively. Let $S$ denote any set of primes $q$ containing\n$q=2$ and all primes dividing $M_1M_2$. Let $S_0=S\\backslash \\{p\\}$. 
\n\\begin{Th}\n\\label{intro-thm}\nLet the notation be as above, and assume Hypothesis \\ref{hypothesis ihara}.\nThen the following statements hold.\n\\begin{enumerate}\n\\item If $\\mu_{S_0}^\\text{an}(r_{g_1}\\otimes\\psi) = 0$, we have\n$\\mu_{S_0}^\\text{an}(r_{g_2}\\otimes\\psi) = 0$, and \n$\\lambda_{S_0}^\\text{an}(r_{g_1}\\otimes\\psi) =\n\\lambda_{S_0}^\\text{an}(r_{g_2}\\otimes\\psi)$.\n\\item If $\\mu_{S_0}^{\\op{alg}}(r_{g_1}\\otimes\\psi) = 0$, we have\n$\\mu_{S_0}^{\\op{alg}}(r_{g_2}\\otimes\\psi) = 0$, and \n$\\lambda_{S_0}^{\\op{alg}}(r_{g_1}\\otimes\\psi) =\n\\lambda_{S_0}^{\\op{alg}}(r_{g_2}\\otimes\\psi)$.\n\\item If $\\mu_{S_0}^\\text{an}(r_{g_1}\\otimes\\psi) = \\mu_{S_0}^{\\op{alg}}(r_{g_1}\\otimes\\psi) = 0$, and\n$\\lambda_{S_0}^\\text{an}(r_{g_1}\\otimes\\psi)=\n\\lambda_{S_0}^{\\op{alg}}(r_{g_1}\\otimes\\psi)$, then\n$\\mu_{S_0}^\\text{an}(r_{g_2}\\otimes\\psi) = \\mu_{S_0}^{\\op{alg}}(r_{g_2}\\otimes\\psi) = 0$, and\n$\\lambda_{S_0}^\\text{an}(r_{g_2}\\otimes\\psi)=\n\\lambda_{S_0}^{\\op{alg}}(r_{g_2}\\otimes\\psi)$.\n\\end{enumerate}\n\\end{Th}\n\n\nThe theorem holds unconditionally for $k=2$. As we have remarked, it is possible to give an unconditional statement for all weights, \nif one is willing to let the period depend on the level, or if $p >k$. The essential point is that there exists\n\\emph{some} natural choice of periods \nthat gives rise to a congruence\nof $p$-adic L-functions, but it is not clear, without\nthe additional Hypothesis, \nthat the periods coincide with those specified by\nHida as in the previous theorem.\n\nIt is clear from the relationships given in Propositions \\ref{algebraic-invariants-intro} and \\ref{analytic-invariants-intro}, that the third statement follows from\nthe first two. Furthermore, it follows from the third statement\nthat if one knows the main conjecture and vanishing of the $\\mu$-invariants for $g_1$, then the same conclusions follow for\n$g_2$. 
Examples where the main conjecture is known for a particular\nform may be found in \\cite{lz16}. \n\nTo end this introduction, we give a sketch of the arguments and indicate the various difficulties\nand novelties needed to overcome them.\n\n\nThe main difficulties occur on the analytic side, starting with the integrality property described above.\nFurthermore, the proof of the required congruences of $p$-adic L-functions turns out to be \nsomewhat more delicate than in the case of \\cite{GV00}, and relies crucially on Lemma \\ref{dtlemma}\nand the very specific set $\\Sigma$ we have chosen. \nTo get the required results, one has to redo the Coates-Hida-Schmidt construction of the $p$-adic \nL-function, which goes back almost 25 years, \nand apply various subtle refinements that were not available at that time. \n\nWe briefly recall the steps in the construction. For notational simplicity, assume that $g=g_i$ is a $p$-stabilized\nnewform for some \nfixed value $i\\in \\{1,2\\}$ and set $M:=M_i$. Then $M$ is divisible by precisely the first power of $p$. \nIf $g=\\sum a(n ,g)q^n$, then the Dirichlet series $\\sum_n a(n, g)n^{-s}$ has an Euler product of the form\n$$\\prod_q(1-\\alpha_qq^{-s})^{-1}(1-\\beta_qq^{-s})^{-1}$$\nwith certain parameters $\\alpha_q, \\beta_q$ at each prime $q$ (including $p$). If $q$ divides $M$, then\n one or both of these parameters may be zero. \n \nNow let $\\chi=\\psi \\eta$ be an even Dirichlet character of conductor $c_\\psi p^r$. \nHere, $\\eta$ is a finite order character of conductor $c_\\eta=p^r$. We assume that\n$\\chi$ is not quadratic. Let $T$ denote any set of prime numbers such that $p\\notin T$. 
\nThen the $T$-imprimitive naive symmetric square $L$-function of $g$ is defined as follows:\n\n\\begin{equation}\n\\label{shimura-euler-product}\n\\mathscr{L}_T(r_g\\otimes\\psi, s) = \n\\prod_{q\\notin T} \\left( (1-\\chi(q)\\alpha_q\\beta_q q^{-s})(1-\\chi(q)\\beta_q^2q^{-s})(1-\\chi(q)\\alpha_q^2q^{-s})\\right)^{-1},\n\\end{equation}\nwhere $\\alpha_q$ and $\\beta_q$ are determined from the degree 2 Euler product as above. \nObserve that since $g$ is assumed to have trivial central character, the Euler factor above is trivial as soon as $q^2$ divides the level $M$. \nThus we can enlarge the set $T$ by including all primes $q$ with $q^2\\vert M$ without changing the L-function. When $T$ is the empty\nset, we denote the resulting function by $D_g(\\chi, s)$. This is nothing but the function denoted by $D(s, g, \\chi)$ by Shimura in \\cite{shimura-holo}. We remark that Shimura never uses the fact that $g$ is a $p$-stabilized newform; rather he uses only that the standard degree 2 L-function\nassociated to $g$ admits an Euler product of the above shape, to get the parameters $\\alpha_q, \\beta_q$. This will be important below in dealing\nwith the imprimitive situation. \n\nNow fix a set $\\Sigma$ satisfying the conditions above. In practice $\\Sigma$ will depend on $g_{i+1}$, where we read the indices modulo\n$2$, but we do not need that here. If $g=\\sum a(n, g)q^n$, let $f=\\sum_n a(n, f)q^n$, where $a(n, f)=0$ if $n$ is divisible by\nany prime in $\\Sigma$, and $a(n, f) = a(n, g)$ if not. Let $N$ denote the level of $f$. Under our conditions on $\\Sigma$, \nwe have that if $q\\vert N\/M$, then $\\text{ord}_q(N)= 2$. The form $f$ is an eigenvector for all the Hecke operators of level $N$, and \nthe eigenvalue of $U_q$ on $f$ is zero, for every prime $q\\neq p$ dividing $N$. 
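\n\nFor instance, with the purely illustrative choice $\\Sigma = \\{2, 3, 11\\}$, the passage from $g$ to $f$ deletes exactly the Fourier coefficients at indices divisible by $2$, $3$, or $11$:\n$$f = \\sum_{\\gcd(n,\\ 2\\cdot 3\\cdot 11)=1} a(n, g)\\, q^n,$$\nwhich has the effect of removing the Euler factors of $g$ at the primes of $\\Sigma$ from the standard $L$-function.\n\n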
\nThe standard $L$-function of $f$ admits an Euler product, and we get parameters $\\alpha_q', \\beta_q'$ associated to $f$ just as before,\nso that $\\alpha_q=\\alpha'_q, \\beta_q=\\beta'_q$ if $q\\notin\\Sigma$, and $\\alpha'_q=\\beta'_q=0$ if not. \n\n\nThen we can follow Shimura and \ndefine a degree three Euler product for $f$ just as above, with $\\alpha_q',\\beta_q'$ instead of $\\alpha_q, \\beta_q$. This time \nwe take $T$ to be the empty set, so we get Shimura's $D_f(\\chi, s)$. \nIt is easy to see that $D_f(\\chi, s)$ is nothing but $\\mathscr{L}_\\Sigma(r_g\\otimes\\psi, s) = \\mathscr{L}_{S_0}(r_g\\otimes\\psi, s)$. \nWith these notations, the imprimitive $p$-adic L-function $L^\\text{an}_\\Sigma(r_{g}\\otimes\\psi)=L^\\text{an}_{S_0}(r_{g}\\otimes\\psi)$ \nis defined via interpolation\nof $\\mathscr{L}_\\Sigma(r_{g}\\otimes\\psi, s) = \\mathscr{L}_{S_0}(r_{g}\\otimes\\psi, s) = D_f(\\chi, s)$ \nat the critical values of $s$, namely, for $s=n\\in\\mathbb{Z}$ with\n$n$ odd and $0< n < k$, and for $\\eta$ varying over cyclotomic characters of $p$-power\norder.\n\n\\begin{Remark} \nSince it is a somewhat confusing point, we remark that the $L$-function $D_g(\\chi, s)$\nis \\emph{not} the primitive $L$-function $\\mathscr{L}(r_g\\otimes\\psi, s)$ of degree three attached to the symmetric square lift of $g$ to $\\op{GL}_3$.\n This is because $\\mathscr{L}(r_g\\otimes\\psi, s)$ may have nontrivial Euler factors at the primes dividing the level $M_0$, even at primes $q$ with\n$\\alpha_q=\\beta_q=0$, so that Shimura's Euler\nproduct has the factor $1$. \nThus\nthe term ``imprimitive'' is used in \\cite{lz16} to refer \nto $D_g(\\chi, s)=\\mathscr{L}_T(r_g\\otimes\\psi, s)$ when $T$ is the empty set. Our $D_f(\\chi, s)$ is even less primitive than the already defective\n $D_g(\\chi, s)$. The function $D_f(\\chi, s)$ does not appear in \\cite{lz16}. \n\\end{Remark}\n\nFor the present, we focus on $D_f(\\chi, s)$. 
The starting point is Shimura's formula expressing \n $D_f(\\chi, s)$ in terms of the Petersson inner product of a theta series and an Eisenstein series:\n \\[(4\\pi)^{-s\/2}\\Gamma(s\/2) D_f(\\chi, s) = \\langle f, \\theta_{\\overline\\chi}(z)\\Phi(z,\\overline\\chi, s)\\rangle_{N_\\chi},\\]see \\eqref{sturm relation} and the discussion preceding it. \n \n\\par For an odd integer $n$ in the range\n$1\\leq n\\leq k-1$, set $H_{\\overline\\chi}(n) := \\theta_{\\overline\\chi}(z)\\Phi(z,\\overline\\chi, n)$. Shimura in \\cite{shimura-holo} has shown that $H_{\\overline\\chi}(n)$ is a nearly holomorphic modular form of level\n$N_\\chi$, weight $k$, and trivial character. Our first job is to prove that the values in question\nare algebraic and integral, when divided by Hida's canonical period. The algebraicity is well-known, and was basically \nproven by Shimura himself:\nit follows from the fact that the Fourier coefficients of $H_{\\overline\\chi}(n)$\ncan be calculated explicitly, and turn out to be algebraic (and even integral). Then one \nhas to replace the nearly holomorphic form with its holomorphic projection. This projection will\nhave algebraic Fourier coefficients, so Shimura's method shows\nthat the transcendental part of the inner product $\\langle f, H_{\\overline\\chi}(n)\\rangle$\nis just the Petersson inner product of $f$ with itself. This procedure is carried out in \\cite{schmidt86},\nand various other works. However, a certain amount of work beyond that of Shimura is required to deal with possible vanishing of the \nspecial values owing to zeroes of the Euler factors. \n\n\\begin{Remark} We note that Schmidt works with $D_g(\\chi, s)$ and not $D_f(\\chi, s)$. \nHowever, it is clear that once one has a $p$-adic L-function interpolating values of $D_g(\\chi, s)$,\none can get an interpolation of $D_f(\\chi, s)$: one has only to multiply by the finitely many Euler factors by which these objects\ndiffer. 
Alternatively, one can simply verify that Schmidt's construction works for $D_f(\\chi, s)$; this is essentially\ncarried out in Section 2 below. The hard part of Schmidt's work is to go from $D_g(\\chi, s)$ to the primitive $\\mathscr{L}(r_g\\otimes\\psi, s)$,\nwhich requires \\emph{division} by certain Euler factors, and it is highly nontrivial to show that the quotient is analytic. \nIn this paper, we shall take for granted the existence of the imprimitive\n$p$-adic L-function $L^{\\op{an}}_{S_0}(r_g\\otimes\\psi)$ interpolating the special values of $D_f(\\chi, s)$. \n\\end{Remark}\n\nWe now continue with the sketch, and give some idea of what is involved in normalizing the periods and verifying integrality.\nAn unfortunate feature of Schmidt's construction is that his holomorphic projection destroys integrality, since the holomorphic projection \nin general introduces\ndenominators dividing $k!$. This is the reason why Schmidt and subsequent authors are unable to\nprove integrality. To get around this, we have to change tactics, and use methods from $p$-adic\nmodular forms -- we replace the holomorphic projection with the ordinary projection, which is denominator-free. This is enough for our purposes, since the form $f$ we deal with is ordinary. \n\nIn consideration of integrality, one of course has to be careful about the exact periods which\nappear. As is well-known (see the statement of Proposition 2.3.5 of \\cite{lz16}) one cannot\nsimply use the Petersson norm of $f$ or $g$ -- one has to scale by a certain congruence number. We now explain the definition of the periods, and show how the congruence number manifests itself. \n\nThe key idea (due to Hida) is that the Petersson inner product is related to a certain algebraic inner\nproduct, up to scalar multiple. 
Let $S_k(N,\\mathcal{O})$ be the space of cusp forms of weight $k$ and level $N$ with coefficients in $\\mathcal{O}$, \nand $\\textbf{T}$ the ring generated by Hecke operators acting on $S_k(N,\\mathcal{O})$. Let $\\mathcal{P}_f$ be the kernel of the map $\\textbf{T}\\rightarrow \\mathcal{O}$ associated to $f$ and $\\mathfrak{m}$ the unique maximal ideal of $\\textbf{T}$ generated by $\\mathcal{P}_f$ and $\\mathfrak{p}$. We have assumed that the residual representation associated to $f$ is absolutely irreducible, ordinary, and $p$-distinguished, so\nit follows that $\\textbf{T}_{\\mathfrak{m}}$ is Gorenstein. This induces an algebraic duality pairing \n\\[\n(\\;\\cdot, \\cdot)_N: S_k(N, \\mathcal{O})_\\mathfrak{m}\\times S_k(N,\\mathcal{O})_\\mathfrak{m} \\rightarrow \\mathcal{O},\\]\nsee \\eqref{integral-pairing} and the discussion preceding it for more details. We need to compare the algebraic pairing defined above to the usual Petersson inner product. We define a modified Petersson product on $S_k(N, \\mathbf{C})$ by setting\n\\begin{equation}\n\\label{modified-petersson-2}\n\\{v, w\\}_N = \\langle v, w^c\\vert W_N\\rangle_N\n\\end{equation}\nwhere the pairing on the right is the Petersson product. The superscript $c$ denotes complex conjugation\non the Fourier coefficients, and $W_N$ is the Atkin-Lehner involution. It is then shown that the two pairings are essentially scalar \nmultiples of each other, thus $\\{f , f\\}_N=\\Omega_N (f, f)_N$, where $\\Omega_N$ is the canonical period, \nwell-defined up to a unit in $\\mathbb{Z}_p$. Equivalently, $\\Omega_N = \\frac{\\{f , f\\}_N}{(f, f)_N}$. 
The quantity $(f, f)_N$ is the so-called congruence number for $f$.\n\nAt this point, the reader will note that the entire analysis of special values as described above \nis carried out in the context of the imprimitive form $f$. This raises red flags, since Hida's construction\nrelies crucially on some kind of multiplicity one result, for instance, \nto show that the pairings are scalar multiples of each other. Furthermore,\nit is not clear that the pairings $\\{f, f\\}_N$ or $(f, f)_N$ are non-zero, without some kind \nof semisimplicity in the Hecke algebra. We are able to circumvent\nthis problem because we are restricting attention to maximal ideals $\\mathfrak{m}$ where\n$U_q\\in \\mathfrak{m}$ at all bad primes $q\\neq p$, and because by Lemma \\ref{dtlemma} the auxiliary level $N\/M$ is cube-free,\nand divisible only by primes for which $N$ itself is cube-free. The point is that if $q\\nmid M$, where $M$ is the level\nof the $p$-stabilized newform $g$, then $U_q$ at level $N$ can have at most 3 different\neigenvalues: $\\alpha_q, \\beta_q, 0$, with $\\alpha_q\\beta_q =q^{k-1}$, on any form obtained from $f$ by degeneracy maps.\nThus the nonzero eigenvalues of $U_q$ \nare $p$-adic units. The cube-free condition allows us to rule out any failure of semisimplicity in the \ngeneralized eigenspace for $U_q=0$, and the imprimitive form $f$ is chosen to lie in exactly this eigenspace. \nWe remark that this argument would fail in the presence \nof an auxiliary level divisible by a cube.\n\nA further -- and more stubborn -- point arises from comparison of the periods of $f$ at level $N$ (which may vary depending\non $\\Sigma$, which in turn depends on $g_2$) with the canonical periods of the $p$-stabilized newform $g$ at level $M$. \nRelating these periods\nrequires us to assume that a certain version of Ihara's lemma is satisfied, see Hypothesis \n\\ref{hypothesis ihara}. 
The result is known unconditionally in the case when $k=2$, but is not fully resolved in all\nthe higher weight cases. We are therefore required to carry around Hypothesis \\ref{hypothesis ihara}.\nIf the reader is willing to allow the periods to depend on the level, then our results become unconditional.\n\n Finally, for the purposes of relating the Iwasawa invariants of $g_1$ and $g_2$, we must\n show that if $g_1$ and \n $g_2$ are $\\mathfrak{p}$-congruent modular forms, then we may simultaneously add primes to the level to obtain \n imprimitive modular forms $f_1$ and $f_2$ of the same level for which \\emph{all} Fourier coefficients are \n $\\mathfrak{p}$-congruent, and for which all $U_q$ eigenvalues, for $q\\neq p$, are zero. The fact that the level so obtained \n satisfies our cube-free condition is an application of \nLemma \\ref{dtlemma}, whose significance cannot be overstated. Once a level is determined, we use properties \nof the algebraic pairing to show\n that normalized special values of $p$-adic $L$-functions associated to $r_{g_1}\\otimes\\psi$ and $r_{g_2}\\otimes\\psi$ \n are $\\mathfrak{p}$-congruent, see Theorem \\ref{special values congruence}. \nAs a result, we obtain a relationship for the analytic Iwasawa invariants associated with \n$r_{g_1}\\otimes\\psi$ and $r_{g_2}\\otimes\\psi$. \n\n\n\\par \nOn the algebraic side, there is little difficulty. The results on Galois cohomology\nin \\cite{GV00} are quite general,\nand apply to the situation treated here, and so require little more than translation. In Section \\ref{s 4}, we introduce the \\emph{fine Selmer group}. The residual Selmer group is seen to be finite precisely when the $\\mu$-invariant vanishes. We relate the finiteness of the residual Selmer group to the vanishing of the $\\mu$-invariant of the fine Selmer group and establish a natural criterion for the finiteness of the residual Selmer group, see Theorem \\ref{muzeroconditions}. 
This foreshadows a residual Iwasawa theory purely associated with the residual representation.\n\n\\section*{Acknowledgments}\nThe authors would like to thank Haruzo Hida, Antonio Lei, Giovanni Rosso and Eric Urban for helpful comments.\n \n\n\n\\section{Congruences for symmetric square L-functions}\n\n\\subsection{Definitions and normalizations}\n\\label{assumptions-and-definitions} Let $p\\geq 5$ denote a prime, and fix a prime $\\mathfrak{p}$ of $\\bar{\\mathbb{Q}}$ with residue characteristic $p$. Let $M\\geq 1$ be an integer such that $M=M_0p$, where $p\\nmid M_0$.\nLet $g$ denote a $\\mathfrak{p}$-stabilized newform of even weight $k$ for the group $\\Gamma_0(M)$. Denote the newform associated to $g$ by $g_0$. Note that $g_0$ has level $M_0$ or $M_0p$. Assume throughout that $g_0$ is not of CM type, that is, $g_0\\otimes\\chi\\neq g_0$ for any nontrivial Dirichlet character $\\chi$. Furthermore, assume that the nebentypus character of $g$\nis trivial.\n\\par If $z$ denotes a variable in the upper\nhalf plane, and $q=e^{2\\pi i z}$, write the Fourier expansion of $g$ as $g(z)=\\sum a(n,g)q^n$. Then the $L$-function $L(s, g) = \\sum a(n,g) n^{-s}$ of $g$ has the formal Euler product expansion\n\\[L(s, g) = \\prod_q(1-\\alpha_q q^{-s})^{-1}(1-\\beta_q q^{-s})^{-1},\\]where the product is taken over all prime numbers $q$, and $\\alpha_q, \\beta_q$ are certain complex numbers. For $q \\nmid M$, we have $\\alpha_q\\beta_q=q^{k-1}$, but if $q\\mid M$, then one or both of \n$\\alpha_q, \\beta_q$ are zero. In fact, since the character of $g$ is trivial, the formulae of Miyake \\cite[Theorem 4.6.17]{miy89} for $q\\neq p$ show\nthat if $M$ is divisible by precisely the first power of $q$, then $\\alpha_q^2= q^{k-2}$ and $\\beta_q=0$, while if $q^2\\vert M$, \nthen both are zero. In the special case that $q=p$, we have $\\alpha_p\\neq 0$, $\\beta_p=0$, \nand $\\alpha_p$ is a $\\mathfrak{p}$-adic unit. 
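\n\nFor later reference, the discussion above may be summarized as follows: for $q\\neq p$,\n$$(\\alpha_q, \\beta_q) \\text{ satisfies }\\begin{cases} \\alpha_q\\beta_q = q^{k-1}, & q\\nmid M,\\\\ \\alpha_q^2 = q^{k-2},\\ \\beta_q = 0, & q\\Vert M,\\\\ \\alpha_q = \\beta_q = 0, & q^2\\mid M,\\end{cases}$$\nwhile at $q=p$ we have $\\beta_p=0$ and $\\alpha_p$ a $\\mathfrak{p}$-adic unit.\n\n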
\n\n\\par We follow the classical\nconventions when speaking of Hecke operators acting on modular forms. Let $m$ be any positive integer. For $q\\nmid m$ (resp. $q\\mid m$) the Hecke operator $T_q$ (resp. $U_q$) of level $m$ corresponds\nto the \\emph{right} action of the double coset \n$\\Gamma_0(m)\\begin{pmatrix} q & 0 \\\\ 0 & 1\\end{pmatrix}\\Gamma_0(m)$, acting via the usual slash operator. With these normalizations, the eigenvalue\nof $T_q$ on $g$ is $a_q = \\alpha_q + \\beta_q$, for $q\\nmid M$, and we have $\\alpha_q\\beta_q= q^{k-1}$. If $M$ is divisible by precisely the first power of $q$,\nthen the eigenvalue of $U_q$ is $\\alpha_q=\\pm \\sqrt{q^{k-2}}$, and if $q^2\\vert M$, then the eigenvalue of $U_q$ is zero.\n\\par We set the Petersson product of modular forms $v, w$ of weight $k\\geq 2$\non $\\Gamma_0(m)$ (at least one of which is cuspidal) to be as follows:\n$$\\langle v, w\\rangle_m = \\int_{B(m)}v(z)\\overline{w(z)} y^{k-2} dx dy,$$ where the integral is taken over a fundamental domain $B(m)$ for $\\Gamma_0(m)$. The Hecke operators $T_q$ are self-adjoint with respect to the Petersson inner-product on $\\Gamma_0(m)$. Write $W_m$ for\nthe operator on modular forms of level $m$ induced\nby the action of the matrix $\\begin{pmatrix} 0 & 1 \\\\ -m & 0\\end{pmatrix}$. Recall that\nthe adjoint of $U_q$ acting on cuspforms of level $m$ is the operator $U_q^*$ given by $W_m U_q W_m$. Note that the matrix $W_m$\nnormalizes $\\Gamma_0(m)$, and that the adjoint of $U_q$ depends on the level, although $U_q$ itself does not.\n\n\\par Let $\\Sigma$ be a finite set of primes, and define the modular form \n \\[f(z) :=\\sum_{(n,\\Sigma)=1}a(n,g) q^n,\\] where the sum is restricted to indices $n$ that \n are not divisible by any prime in $\\Sigma$. 
Then one has the following formula for the L-function\n \\begin{equation}\n \\label{deg-two-product}\n L(f,s) =\\sum_{(n,\\Sigma)=1}a(n,g) n^{-s} = \\prod_{(q,\\Sigma)=1}(1-\\alpha_q q^{-s})^{-1}(1-\\beta_q q^{-s})^{-1},\n \\end{equation}\nwhere the product is taken over primes away from $\\Sigma$. The modular form $f$ has level\ndividing $M\\prod_{q\\in \\Sigma} q^2$.\n\\begin{assumption}\\label{assumptions on Sigma}\nWe make the following assumptions on $\\Sigma$:\n\\begin{itemize}\n\\item $2\\in \\Sigma$;\n\\item $p\\notin \\Sigma$;\n\\item if $q\\in\\Sigma$ is odd, then $q^2\\nmid M$; and \n\\item if $q$ exactly divides $M$, then $q\\in \\Sigma$.\n\\end{itemize}\n\\end{assumption}\n\nAs in the introduction, $\\Sigma$ contains the prime $2$, together with all primes $q$ that divide $M$ to precisely the first power, together with\nother primes that do not divide $M$. If $q^2$ divides $M$, then $q\\notin\\Sigma$, unless $q=2$.\n In our applications, we will have to\nchoose $\\Sigma=\\Sigma_i$ for $g_i$ in a way that depends on the congruent form $g_{i+1}$, where we read the indices modulo 2.\nIn particular, $\\Sigma_1$ and $\\Sigma_2$ may be different. For now, we simply fix one form $g=g_i$. \nThroughout, $\\psi$ is an even and non-quadratic character of conductor $c_\\psi$ coprime to $N$. It follows from the definition of $\\Sigma$\nand the description of $\\alpha_q, \\beta_q$ given above \nthat we have:\n\\begin{Lemma} Under the assumptions above, the modular form $f$ has level $N$ given by $N = M \\cdot \\prod_{q\\in\\Sigma} q^{e(q)}$, where\nfor $q$ odd we have\n$e(q)=2$ if $q\\nmid M$, and $e(q) =1$ if $M$ is divisible by precisely the first power of $q$. For $q=2$, we have $e(2)=2$ if $M$ is odd,\n$e(2)=1$ if $2$ exactly divides $M$, and $e(2)=0$ if $4 \\vert M$. 
\n\\end{Lemma}\n\nAssumption \\ref{assumptions on Sigma} leads to the following elementary consequences:\n\\begin{enumerate}\n\\item $N=pN_0$, where $(N_0, p)=1$;\n\\item $4\\vert N$;\n\\item $f$ is an eigenvector\nof the Hecke operators $T_q$ for $q\\nmid N$ and for $U_q$ when $q\\vert N$;\n\\item the eigenvalue $\\alpha_p$ of the $U_p$ operator on $f$ is a $\\mathfrak{p}$-adic unit;\n\\item for $q\\vert N_0, q\\neq p$, the eigenvalue of $U_q$ on $f$ is zero;\n and\n\\item the product in (\\ref{deg-two-product}) may be taken over $q$ away from $N_0$.\n\\end{enumerate}\nThe final statement follows from the fact that if $q\\neq p$ divides $N_0$ and $q\\notin\\Sigma$, then $q^2\\vert M$, so that $\\alpha_q=\\beta_q=0$.\nIf necessary, we enlarge $K$ to \ncontain the Fourier coefficients of $g$ as well as the number $\\alpha_p$. As before, we write\n ${\\mathcal O}$ to denote the completion of the ring of integers of $K$ at the prime ${\\mathfrak p}$. \n\n\\subsection{Dirichlet series, the naive symmetric square, and the Petersson product formula}\n\nLet $\\chi$ be a Dirichlet character of the form $\\psi\\eta$, where $\\psi$ has conductor away from $N$, and $\\eta$ has $p$-power conductor.\nLet $f$ be as defined previously. Then the naive $\\chi$-twisted symmetric square $L$-function of $f$ is as follows:\n\\begin{equation}\n\\label{naive-product}\nD_f(\\chi, s) = \\prod_{q\\nmid N_0} \\left( (1-\\chi(q)\\alpha_q\\beta_q q^{-s})(1-\\chi(q)\\beta_q^2q^{-s})(1-\\chi(q)\\alpha_q^2q^{-s})\\right)^{-1}.\n\\end{equation}\n\nIn the language of the introduction,\n$D_f(\\chi, s) $ is Shimura's $D(s, f, \\chi) = \\mathscr{L}_\\Sigma(r_g\\otimes\\chi, s) = \\mathscr{L}_{S_0}(r_g\\otimes\\chi, s)$, where\n$S_0=S\\backslash\\{p\\}$, and $S$ is the set of primes dividing $N$. 
Here we are repeatedly using the fact that the degree\n2 Euler product associated to $f$ is trivial at all $q\\neq p$ such that $q\\vert N$, and that the same is true for $g$ at primes $q\\in S_0\\backslash\\Sigma$.\nWe have mentioned this fact many times, but it is absolutely crucial and bears repeating: without this fact, we would not get any equality between\n $\\mathscr{L}_\\Sigma(r_g\\otimes\\chi, s)$ and $\\mathscr{L}_{S_0}(r_g\\otimes\\chi, s)$ and\n Shimura's $D(s, f, \\chi)$, and so Shimura's formulae would not apply. Indeed, the functions $\\mathscr{L}_\\Sigma(r_g\\otimes\\chi, s)$ and $\\mathscr{L}_{S_0}(r_g\\otimes\\chi, s)$ have trivial Euler factors at primes in $\\Sigma$ and $S_0$ respectively, but Shimura's L-function has trivial Euler factor at $q\\vert N_0$ if and only\n if $\\alpha_q=\\beta_q=0$ (recall that $\\chi$ is unramified at primes dividing $N_0$). If the Euler factors of $g$ at $q\\in S_0\\backslash\\Sigma$\nwere nontrivial, then we would not have $\\mathscr{L}_\\Sigma(r_g\\otimes\\chi, s) = \\mathscr{L}_{S_0}(r_g\\otimes\\chi, s)$, and if the Euler factors of \n$f$ at the primes in $S$ were nontrivial, we would not have $D(s, f, \\chi) = \\mathscr{L}_{S_0}(r_g\\otimes\\chi, s)$. \n\n\nFor the present, we concentrate on the naive L-function $D_f(\\chi, s)$, and show\nthat its special values at critical points behave well with respect to congruences. \n\n\\par Let \n$G({\\chi})$ denote the Gauss sum of $\\chi$.\nThen the quantity\n\\begin{equation}\nD_f(\\chi, s)^\\text{alg} = \\frac{D_f(\\chi, s)}{\\pi^{k-1}\\langle f, f\\rangle_N}\\cdot \\frac{G(\\overline{\\chi})}{(2\\pi i)^{s-k+1}}\n\\end{equation} is algebraic when $s=n$ is an integer in the range\n$1\\leq n\\leq k-1$\nsatisfying $(-1)^n=-\\chi(-1)$. 
This is well known; see \cite[Theorem 2.2.3]{lz16} for the present formulation. The result goes back to\nShimura, whose method was elaborated by Schmidt \cite{schmidt86}, \cite{schmidt88}, and Sturm \cite{sturm}.\n If $(-1)^n=-\chi(-1)$ and $1\leq n\leq k-1$, then we say that $n$ is critical.\nWe remark that the functional equation for the primitive symmetric square L-function leads to similar algebraicity results\nfor $D_f(\chi, s)^{\op{alg}}$ for integer values of $s$ in the range $k\leq s\leq 2k-2$; we will not need these results here.\nNote that the quantity $D_f(\chi, s)^\text{alg}$ may be zero, even at critical values, since we are dealing with imprimitive \n$L$-functions. Furthermore, the algebraic quantities above are not necessarily integral. \nWe make the following assumptions, to keep the notation and book-keeping simple:\n\n\begin{itemize}\n\item The character $\chi=\psi \eta$, where $\eta$ is a \emph{nontrivial} even character of $p$-power conductor.\n\item $s=n$ is an odd integer with $1\leq n \leq k-1$ in the algebraicity formula above. \n\end{itemize}\n\nThe explicit formulae we will need for interpolation and congruences originate in \cite{shimura-holo}. They\nare cited in many different forms in the references \cite{schmidt86}, \cite{schmidt88}, and \cite{ros16}, but \neach of these references adopts slightly different normalizations and conventions, so we are forced to make a choice.\nWe have chosen to follow \cite{schmidt86}, since it is relatively easy to compare the formulae given there to those\noriginally given by Shimura, whose work remains the basic reference. \n\nThus, define\n\begin{equation}\\theta_\chi(z) = \sum_{j\in\mathbb{Z}} \chi(j) \text{exp}(2\pi i j^2 z), \n\end{equation} which is a modular form of weight $1\/2$ and level $4c_\chi^2$. 
\nSetting $\omega:=\chi\left(\frac{-1}{\cdot}\right)^k$, we let $$\Phi(z, \chi, s) = L_{N}(\chi^2, 2s+2-2k)E(z, s+2-2k, 1-2k, \omega)$$ denote the \nEisenstein series as defined in \cite[p.210]{schmidt86}. Note that we have already imposed $4\vert N$ as part of our assumption on the level of $f$. \n\n\n\n\nIt turns out that $\theta_\chi(z)\Phi(z, \chi, s)$ is a (non-holomorphic) modular form of weight $k$, trivial character, and level \n $N_\chi:=\text{lcm}(N, c_\chi^2)$. Here we use the fact that $4\vert N$, by construction. Recall that the level $N$ of $f$ is divisible by precisely the first power of $p$, since $f$ is assumed to be $\mathfrak{p}$-stabilized.\nWe shall write $c_\eta=p^{m_\eta}$ for the $p$-part of the conductor of $\chi=\psi\eta$, and set $m_\chi:=m_\eta$. Since $\eta\neq 1$,\nwe have $m_\chi\neq 0$ and $N_\chi=N_0c_\psi^2p^{2m_\chi}$, where we recall that $N_0$ is the prime-to-$p$ part of $N$.\n\nConsider an odd integer $n$ in the range\n$1\leq n\leq k-1$, and set $H_{\overline\chi}(n) := \theta_{\overline\chi}(z)\Phi(z,\overline\chi, n)$. Shimura has\n shown that $H_{\overline\chi}(n)$ is a nearly holomorphic modular form of level\n$N_\chi$, weight $k$, and trivial character. In fact, one has the following formula (see equation (1.5) in \cite{shimura-holo}):\n\begin{equation}\label{sturm relation}\n(4\pi)^{-s\/2}\Gamma(s\/2) D_f(\chi, s) = \langle f, \theta_{\overline\chi}(z)\Phi(z,\overline\chi, s)\rangle_{N_\chi}.\n\end{equation}\n\n\nObserve\nnow that $f$ has level $N$, while $H_{\overline\chi}(n)$ satisfies a transformation property with respect to the group \n$\Gamma_0(N_\chi)$. According to our assumption, $\chi$ has conductor $c_\psi p^r$, for some $r$, and $c_\psi$ is relatively\nprime to $N$. Thus the level of $f$ and the level of $H_{\overline\chi}(n)$ differ only by a power of $p$ and by the primes that\ndivide the conductor $c_\psi$ of $\psi$. 
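To make the last remark concrete, the discrepancy between the two levels is pure bookkeeping from the definitions $N=N_0p$ and $N_\chi=N_0c_\psi^2p^{2m_\chi}$ above:

```latex
% Ratio of the level of H_{\bar\chi}(n) to the level of f:
\[
\frac{N_\chi}{N} \;=\; \frac{N_0\, c_\psi^2\, p^{2m_\chi}}{N_0\, p}
\;=\; c_\psi^2\, p^{\,2m_\chi - 1},
\]
% so only the prime p and the primes dividing c_\psi intervene.
```

In particular, since $m_\chi\geq 1$, the exponent $2m_\chi-1$ is positive, which is the power of $p$ that the trace argument of the next paragraphs must remove.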
Our goal is to bring $H_{\overline\chi}(n)$ down to level $N$ by taking a trace, and to verify\nthat we retain\ncontrol of the integrality of the Fourier coefficients. \n\nWe start by dealing with the powers of $p$. Here we reproduce the method of Schmidt.\nLet $N_\psi=N_0pc_\psi^2$, and let $T_\eta$ denote the trace\noperator that takes modular forms on $\Gamma_0(N_\chi)$ down to $\Gamma_0(N_\psi)$, normalized as in Schmidt \cite{schmidt86}.\nWe remark here that the\ntrace operator is defined purely in terms of matrices, and can be applied to the non-holomorphic form $H_{\overline\chi}(n)$. \n\nIt follows from \cite[Lemma 3.10]{CS} (which is done for weight $2$), or the calculation in the middle \nof \cite[p.217]{schmidt86}, that the following relation holds: $$p^{(2m_\chi-1)(k\/2-1)}T_\eta\circ W_{N_\psi}=\nW_{N_\chi}\circ U_p^{2m_\chi -1}.$$ As a result, we find that $H_{\overline\chi}(n)\circ W_{N_\chi}\circ U_p^{2m_\chi-1}$ is of level $N_\psi$. \nIt follows further that, for any $m\geq m_\chi$,\n$H_{\overline\chi}(n)\circ W_{N_\chi}\circ U_p^{2m -1}$ is also a modular form of level $N_\psi$, since the $U_p$ operator at level $N_\chi$ is \ngiven by the same matrices as $U_p$ at level $N_\psi$, and $U_p$\nstabilizes the space of forms of level $N_\psi$. 
Note that \\[H_{\\overline\\chi}(n)\\circ W_{N_\\chi}\\circ U_p^{2m -1}= \nH_{\\overline\\chi}(n)\\circ W_{N_\\chi}\\circ U_p^{2m_\\chi-1} \\circ U_p^{2(m-m_\\chi)-1},\\] \nand therefore from \\eqref{sturm relation} we obtain the relations\n\\begin{align*}\n(4\\pi)^{-n\/2}\\Gamma(n\/2) D_f(\\chi,n) = & \\langle f, H_{\\overline\\chi}(n) \\rangle _{N_\\chi} \\\\\n= &\\langle f, H_{\\overline\\chi}(n)\\circ T_\\eta \\rangle _{N_\\psi} \\\\\n= & \\langle f\\circ W_{N_\\psi}, H_{\\overline\\chi}(n)\\circ T_\\eta\\circ W_{N_\\psi}\\rangle_{N_\\psi}\\\\\n=& p^{-(2m_\\chi-1)(k\/2-1)}\\langle f\\circ W_{N_\\psi}, H_{\\overline\\chi}(n) \\circ W_{N_\\chi}\\circ U_p^{2m_\\chi -1}\\rangle_{N_\\psi},\n\\end{align*}\n\nand\n\n\\begin{align*}\n\\langle f\\circ W_{N_\\psi}, H_{\\overline\\chi}(n)\\circ W_{N_\\chi}\\circ U_p^{2m-1}\\rangle_{N_\\psi} \n= &\\langle f\\circ W_{N_\\psi}, H_{\\overline\\chi}(n)\\circ W_{N_\\chi}\\circ U_p^{2m_\\chi-1} \\circ U_p^{2(m-m_\\chi)}\\rangle_{N_\\psi} \\\\ \n= & \\langle f \\circ W_{N_\\psi}\\circ (U_p^*)^{2(m-m_\\chi)}, H_{\\overline\\chi}(n)\\circ W_{N_\\chi}\\circ U_p^{2m_\\chi-1}\\rangle_{N_\\psi} \\\\\n= & \\langle f\\circ U_p^{2(m- m_\\chi)}\\circ W_{N_\\psi}, H_{\\overline\\chi}(n)\\circ W_{N_\\chi} \\circ U_p^{2m_\\chi-1}\\rangle _{N_\\psi}\\\\\n= & \\alpha_p^{2(m-m_\\chi)} \\langle f\\circ W_{N_\\psi}, H_{\\overline\\chi}(n)\\circ W_{N_\\chi}\\circ U_p^{2m_\\chi-1}\\rangle _{N_\\psi}.\n\\end{align*}\n\nIn the second calculation above, we have used the fact that adjoint of $U_p$ at level $N_\\psi$ is given by $U_p^* = W_{N_\\psi}\\circ U_p\\circ W_{N_\\psi}$. 
\nPutting together the strings of equalities above, we conclude that\n \begin{equation}\n \label{petersson-formula}\n \begin{split}\n&\frac{\Gamma(n\/2)}{(4\pi)^{n\/2}}p^{(2m_\chi-1)(k\/2-1)} D_f(\chi,n) \\\n= & \alpha_p^{2(m_\chi-m)} \langle f, H_{\overline{\chi}}(n)\circ W_{N_\chi}\circ U_p^{2m-1}\circ W_{N_\psi}\rangle_{N_\psi}\n\end{split}\n\end{equation}\nfor any $m\geq m_\chi$. \n\n\par We will see below that the nearly holomorphic modular form on the right hand side of the formula above may be replaced with\na \emph{holomorphic} form $\mathcal{H}_{\overline\chi}(n)$ of level $N_\psi$, without changing the value of the inner product. Furthermore, we shall\nsee that $\mathcal{H}_{\overline\chi}(n)$ has $\mathfrak{p}$-integral Fourier coefficients. Assuming this, we shall take the (twisted) \ntrace of $\mathcal{H}_{\overline\chi}(n)$ down to level $N$ to get our final result. This trace is much easier to deal with since $N_\psi$ and $N$ differ\nonly by primes away from $p$. Let $t_\psi$ denote the trace operator from level $N_\psi$ to level $N$. Let $W_{c_\psi^2}$ denote the Atkin-Lehner \noperator acting on modular forms of level $N_\psi = Nc_\psi^2$,\nas defined in \cite{al}, page 138, just before Lemma 8. \nWe note that $(N, c^2_\psi) =1$, so this operator is indeed defined. \nDefine\n$T_\psi: S_k(N_\psi) \rightarrow S_k(N)$ by \n$$T_\psi: h\mapsto h\circ W_{c_\psi^2}\circ t_\psi.$$ \n The key lemma is the following. It states\nthat the trace operator $T_\psi$ preserves integrality. \n\n\begin{Lemma}\n\label{tame-trace}\nSuppose $\phi$ is a holomorphic modular form of level $N_\psi$ and weight $k$. Suppose that $\phi$ has Fourier\nexpansion $\phi = \sum a(n, \phi)q^n$ with $\mathfrak{p}$-integral coefficients $a(n, \phi)$. Then $T_\psi(\phi)$ has $\mathfrak{p}$-integral Fourier\ncoefficients as well. 
\n\\end{Lemma}\n\n\\begin{proof} The fact that $W_{c_\\psi^2}$ preserves integrality is Theorem A.1 in the appendix by Conrad to \\cite{pras09}.\nAs for $t_\\psi$, the proof is more or less standard, so we merely sketch the argument. \nThe trace is given by $\\phi\\mapsto \\sum_\\gamma\\phi\\circ\\gamma$, where\n$\\gamma$ runs over a set of coset representatives of $\\Gamma_0(N_\\psi)\\backslash\\Gamma_0(N)$. By definition of $N$ we have \n$\\Gamma_0(N)\\subset \\Gamma_0(p)$. Thus the cusp $s=\\gamma(\\infty)$ is one of Hida's `unramified' cusps, and it is well-known that if $\\phi$ has\nintegral $q$-expansion at $\\infty$ then it has integral $q$-expansion at $s$. The key point is that both $\\infty$ and $s$ reduce modulo $\\mathfrak{p}$ \nto points on the same component of the special fibre of $X_0(N_\\psi)$, and thus if a modular form vanishes identically in a formal neighbourhood of one\ncusp, it must vanish on the whole component, and hence the expansion is zero about the other cusp as well.\nThus $t_\\psi$ preserves integrality as well. The reader may consult\n\\cite{hida86}, Section 1, or \n\\cite{hida88}, page 11, for a fuller discussion.\n\\end{proof}\n\nOur next goal is therefore to analyze the form $H_{\\overline{\\chi}}(n)\\circ W_{N_\\chi}\\circ U_p^{2m-1}\\circ W_{N_\\psi}$ \nand show how to replace it with something holomorphic and integral. This computation will occupy the next section. \n\\subsection{Holomorphic and ordinary projectors}\n\n The classical method of going from a nearly holomorphic form to something holomorphic, and which is adopted\nin \\cite{CS}, \\cite{schmidt86}, \\cite{schmidt88}, is to \npass from $H_\\chi$ to its so-called holomorphic projection. This is a bit complicated, since the formulae giving the holomorphic \nprojection of a nearly holomorphic form involve\nfactorials and binomial coefficients, and one cannot easily control the denominators. 
This is why the results of \n\cite{schmidt86}, \cite{schmidt88} are only stated up to some unspecified rational constant. \n\nOne of the main contributions of this paper is a solution to this problem, using $p$-adic methods. In the case we have at hand, the form $f$\nis \emph{ordinary}, and we can replace the nearly holomorphic form with a certain \emph{ordinary} projection, without losing\nany information. This has the significant advantage that the ordinary\nprojector is denominator-free. In view of Hida's control theorems for ordinary forms, the ordinary projection is automatically holomorphic.\nWe shall follow this alternative path, but to complete the journey, we have to make a computation of Fourier coefficients. \n\nWe begin by computing the Fourier expansion of $H_{{\chi}}(n)\circ W_{N_\chi}$. Since $H_{{\chi}}(n)$ is the product of a\nnearly holomorphic Eisenstein series $\Phi(z, \chi, n)$ of weight $k-1\/2$ and a theta series $\theta_\chi$ of weight $1\/2$, we\nwork out the expansions of these two first, starting with the Eisenstein series.\n\nFollowing \cite{schmidt86}, page 213, and Shimura, \cite{shimura-holo}, Section 3, page 86, let us write \n$$\Phi\left(\frac{-1}{N_{\chi} z}, \chi, n\right) \cdot (\sqrt{N_\chi}z)^{1\/2-k}= \sum_{j=0}^{(n-1)\/2}\sum_{\nu=0}^{\infty} (4\pi y)^{-j} d_{j, \nu} q^\nu.$$\n\n\begin{Lemma}\n\label{eisen-fourier} If $\chi$ is ramified at $p$, the quantities $\frac{\Gamma((n+1)\/2)}{\pi^{(1+n)\/2}} p^{m_\chi(3-2k+2n)\/2}d_{j,\nu}$ are algebraic\nand $\mathfrak{p}$-integral.\n\end{Lemma}\n\n\begin{proof} The formulae for the $d_{j,\nu}$ may be deduced from those on pages 212-213 of \cite{schmidt86}, whose $n$ is our $\nu$, and whose\n$m$ is our $n$. 
We remark that there appears to be a small mistake\n in the power of $i$ in the formula for $\tau_n$ on the top of page 213; by comparison with the formula on page 225 of \cite{sturm}, \n it should be $i^{-k+1\/2}$.\n\nFor $\nu>0$, one obtains\n$$d_{j, \nu} = (-2i)^{(k-1\/2)}\cdot\pi^{\frac{n+1}{2}}\cdot \nu^{\frac{n-1}{2}} \cdot B_j \cdot {\frac{n+1}{2}\choose j}\cdot N_\chi^{(2k-2n-3)\/4}\cdot L_{N_\chi}(n+1-k,\omega_\nu)\cdot\beta(\nu, n+2 -2k),$$\nwhere $B_j=\frac{\Gamma(n\/2+1-k+j)}{\Gamma(\frac{n+1}{2})\cdot\Gamma(n\/2+1-k)}\in\mathbb{Q}$. The definition of $\beta$ may be \nfound in \cite{schmidt86}, page 212. As mentioned above, our formula has a factor of $(-i)^{k-1\/2}$, while \cite{schmidt86}\nhas $(-1)^{k-1}$; our formula here agrees with the one in the later paper \cite{schmidt88}, page 614 (where the normalization and \nnotations are rather different). Note also that $\beta$ depends on $\chi$. The character $\omega_\nu$ is defined on page 212\nof \cite{schmidt86}, and $\omega$ on page 206. \n\nAs for the constant term, one has $d_{j,0}=0$ unless $j=(n-1)\/2$, in which case one has \n$$d_{(n-1)\/2, 0} = (-2i)^{(k-1\/2)}\cdot\pi^{\frac{n+1}{2}}\cdot B_j \cdot N_\chi^{(2k-2n-3)\/4}\cdot L_{N_\chi}(2n+2-2k,\omega^2).$$\n\nIt is clear from the formula\nfor $B_j$ that $\Gamma\left(\frac{n+1}{2}\right)B_j$ is a rational number whose denominator is a power of $2$, hence $\mathfrak{p}$-integral, and one knows from properties of Kubota-Leopoldt $p$-adic L-functions\nthat $L_{N_\chi}(n+1-k, \omega_\nu)$ is $\mathfrak{p}$-integral once the character $\omega_\nu$ has conductor divisible by $p$. The result\nfollows upon clearing the powers of $p$ coming from $N_\chi = N_0c_\psi^2p^{2m_\chi}$. \n \end{proof}\n\n\nOne has now to compute the Fourier expansion of the quantity $\theta_\chi(-1\/N_\chi z)\cdot (\sqrt{N_\chi} z)^{-1\/2}$. \nHere the exponent comes from the fact that $\theta_\chi$ is a form of weight $1\/2$. 
The requisite formula may be found\nin \cite{shimura-half}, Proposition 2.1, and we record the result here. \nRecall the notations: $N=N_0p$ is an integer divisible by precisely\nthe first power of $p$, and $\chi$ is a character of conductor $c_\psi p^{m_\chi}$. Recall that $c_\psi$ satisfies $(c_\psi, 2Mp)=1$. \nThe integer $N_\chi$ is given by $N_\chi= N_0c_\psi^2p^{2m_\chi}$ if $\chi$ is ramified. Furthermore,\n$N$ is divisible by $4$. We let $N'= N_\chi\/4c_\chi^2=N_0\/4$. \n\n\n\begin{Lemma} \n\label{theta-fourier}\nSuppose that $\chi$ is ramified. \nWe have $\theta_{\chi}(-1\/N_\chi z)\cdot (\sqrt{N_\chi} z)^{-1\/2} = \theta_{\overline{\chi}}(N'z)\cdot \n\frac{G(\chi)}{ \sqrt{c_\psi p^{m_\chi}}}\cdot i^{3\/2}\cdot (N^\prime)^{1\/4}$.\n\end{Lemma}\n\n\begin{proof} See \cite{shimura-half}, Proposition 2.2.\n \end{proof}\n\n\begin{Remark} It is clear from the formulae for the $d_{j,\nu}$ in terms of Dirichlet L-functions\n that the forms $\Phi$ occur in analytic families -- simply replace the Dirichlet L-functions with the Kubota-Leopoldt \n versions. Since this is obviously\nso for the theta functions $\theta_\chi$, one guesses immediately that some kind of $p$-adic L-function should exist, simply\nby taking a suitable pairing with $\theta_\chi\Phi(z, \chi, n)$. \n\end{Remark}\n\n\nRecall that we have put $\chi = \psi\eta$, where $\psi$ has conductor $c_\psi$, and $\eta$ has $p$-power \nconductor $p^{m_\eta}$, with $m_\chi=m_\eta$ as before. For the purposes of $p$-adic L-functions, we must regard the character\n$\psi$ as fixed, and let $\eta$ vary. \n\n\n\begin{Corollary} \n\label{first-itegrality-formula}\nSuppose that $\eta$ is ramified. 
Then \n$$\tilde{H}_\chi(n)= \frac{\Gamma((n+1)\/2)}{\pi^{(1+n)\/2}} p^{m_\chi(3-2k+2n)\/2} \cdot \frac{\sqrt{c_\psi p^{m_\chi}}}{G(\chi)} \cdot H_{\chi}(n)\circ W_{N_\chi}$$\nis a nearly holomorphic form of level $N_{\chi}$ with $\mathfrak{p}$-integral Fourier coefficients.\n\end{Corollary}\n\n\begin{proof} This is immediate from the formulae above. \n\end{proof}\n\n\begin{Remark} We have not given any formula for the case where $\eta$ is trivial, although it may be done just as above. The \nexact formulae are slightly different, and we will not need them. Since we are assuming the existence of imprimitive $p$-adic\nL-functions, it suffices, for the purpose of integrality, to show that almost all the values are integral.\n\end{Remark}\n\nNow we want to pass from $\tilde{H}_\chi(n)$ to something holomorphic, while preserving integrality. \nThe following proposition represents a key contribution of the present work.\n\n\begin{Proposition}\label{ordinary holomorphic proj} Let $e$ denote Hida's ordinary projection operator, acting on $M_k(N_\chi, {\mathcal O})\otimes\bar{\Q}$. Then \n$\tilde{H}_\chi(n)^{\text{hol}}\circ e\in M_k(N_\psi)\otimes\bar{\Q}$ has ${\mathfrak p}$-integral Fourier coefficients and level $N_\psi$. Here $\tilde{H}_\chi(n)^{\text{hol}}$ denotes the holomorphic\nprojection of $\tilde{H}_\chi(n)$. \n\end{Proposition}\n\n\n\n\begin{proof} The easiest way to prove the Proposition is to use the geometric theory of nearly ordinary \nand nearly holomorphic modular forms, as developed by Urban \cite{urban12}. A r\'esum\'e of Urban's work\nas adapted to this setting may be found \nin \cite{ros16}. However, we give a proof along classical lines, for the convenience of the reader. \nWrite $\tilde{H}_\chi(n) = \tilde{H}_\chi(n)^{\text{hol}} + \sum h_i$, where each $h_i$ is in the image of a Maass-Shimura\ndifferential operator. 
It follows from the explicit formulae for the differential operators and the formulae for the Fourier\ncoefficients of $\tilde{H}_\chi(n)$ that $\tilde{H}_\chi(n)^{\text{hol}}$ and each $h_i$ have algebraic Fourier coefficients. \nLet $m$ denote a positive integer, divisible by $p-1$, and consider \n$\tilde{H}_\chi(n)\circ U_p^{2m-1}$. Then\none has $$\tilde{H}_\chi(n) \circ U_p^{2m-1} = \tilde{H}_\chi(n)^\text{hol}\circ U_p^{2m-1} + \sum h_i\circ U_p^{2m-1}.$$ It is well known that $U_p$ multiplies\neach $h_i$ by a positive power of $p$ (see, for example, the formula above Lemma 2.3 in \cite{ros16}). Thus, the quantity on the right converges to $\tilde{H}_\chi(n)^{\text{hol}}\circ e$ as $m$\nincreases, the convergence being assured by the existence of the ordinary projector.\nOn the other hand, the quantity on the left is integral. This proves the proposition. The level comes down to $N_\psi$ because $U_p^m$ already brings the level down to $N_\psi$ for any $m\geq m_\chi$, as noted in the course of proving (\ref{petersson-formula}). \n\end{proof}\n\n\nFinally, we take the trace of the integral and holomorphic form $\tilde{H}_\chi(n)^{\text{hol}}\circ e$ all the way down to level $N$. Since $H_\chi(n)$ is not necessarily cuspidal,\nwe shall write $M_k$ to denote spaces of all modular forms. 
With this notation, we \ndefine\n\begin{equation}\n\label{hchi}\n\mathcal{H}_\chi(n) = T_\psi(\tilde{H}_\chi(n)^{\text{hol}}\circ e)\in M_k(N, {\mathcal O})\otimes\bar{\Q}\n\end{equation}\nand\n\begin{equation}\n\label{hchim}\n\mathcal{H}_\chi^m(n) = T_\psi(\tilde{H}_\chi(n)^{\text{hol}}\circ U_p^{2m-1})\in M_k(N, {\mathcal O})\otimes\bar{\Q}.\n\end{equation}\n\nThen Lemma \ref{tame-trace}, and the proof of Proposition \ref{ordinary holomorphic proj}, give\n\n\begin{Proposition} The modular form $\mathcal{H}_\chi(n)$ has $\mathfrak{p}$-integral Fourier coefficients.\nThe modular forms $\mathcal{H}_\chi^m(n)$ have $\mathfrak{p}$-integral Fourier coefficients for all $m$ sufficiently large.\n\end{Proposition}\n\nIn the next section, we shall see how to compute the inner product of $f$ with $\mathcal{H}_\chi(n)$ and derive integrality properties for the\nspecial values of $D_f(\chi, s)$. \n\n\subsection{Algebraic and analytic inner products}\n\nOur goal is to use (\ref{petersson-formula}) to describe the imprimitive $p$-adic L-function, and show that it behaves \nwell with respect to congruences. In\norder to do this, we need to normalize our L-functions so that the algebraic parts are $\mathfrak{p}$-integral. We accomplish this by combining\nthe inner product formula with a certain algebraic incarnation of the Petersson inner product formula due to Hida. \n\nLet $S_k(N, \mathbb{Z})$ denote the space of cusp forms of weight $k$ on $\Gamma_0(N)$ with integral Fourier coefficients. If $R$ is any ring, set $S_k(N, R)= S_k(N, \mathbb{Z})\otimes R$.\n Let ${\mathbf{T}}$ denote the ring generated by\nthe Hecke operators $T_q$ for $q\nmid N$ and $U_q$ for $q\vert N$ (including $U_p$) acting on $S_k(N, \mathcal{O})$, and set ${\mathbf{T}}(R)={\mathbf{T}}\otimes R$. 
\n Recall that our convention is that Hecke operators act on the \n\emph{right}.\n\nThen the eigenform $f$ determines a ring homomorphism $\mathbf{T}\rightarrow \mathcal{O}$, sending a Hecke operator\n$T\in \mathbf{T}$ to the $T$-eigenvalue of $f$. \nDefine $\mathcal{P}_f$ to denote the kernel of this homomorphism. There is a unique maximal ideal $\mathfrak{m}$ \nof $\mathbf{T}$ that contains $\mathcal{P}_f$ and the maximal ideal of $\mathcal{O}$. The maximal \nideal $\mathfrak{m}$ determines a residual Galois representation, which we shall assume throughout to be absolutely irreducible.\n\par There \nis a canonical duality of $\mathbf{T}_\mathfrak{m}$-modules \nbetween $S_k(N, \mathcal{O})_\mathfrak{m}$ and $\mathbf{T}_\mathfrak{m}$ defined by the form\n\[S_k(N, \mathcal{O})_\mathfrak{m}\times \mathbf{T}_\mathfrak{m}\rightarrow \mathcal{O}, \qquad (s, t)\mapsto a(1, s|t)\in \mathcal{O},\] which identifies \n$\mathbf{T}_\mathfrak{m}$ with $\text{Hom}_{\mathcal{O}}(S_k(N, \mathcal{O})_\mathfrak{m}, \mathcal{O})$. Here\n$s \in S_k(N,\mathcal{O})$ (being an $\mathcal{O}$-linear combination of elements in $S_k(N,\mathbb{Z})$) is given by the Fourier expansion $s = \sum a(n, s) q^n$, with $a(n,s)\in \mathcal{O}$.\n\nTo proceed further, we assume that the Galois representation associated to $f$ at $\mathfrak{m}$ is irreducible,\nordinary and $p$-distinguished. It is well known that, under these conditions, $\mathbf{T}_\mathfrak{m}$ is Gorenstein, and isomorphic to \n$\text{Hom}(\mathbf{T}_\mathfrak{m}, \mathcal{O})$ as a left $\mathbf{T}_\mathfrak{m}$-module.\nA proof may be found in \cite[Theorem 2.1 and Corollary 2 (p. 
482)]{wil95}.\n\n\\par Thus, the space $S_k(N, \\mathcal{O})_\\mathfrak{m}$ is equipped with both a left and right action of Hecke operators; one coming from the classically defined slash action of Hecke operators on modular forms, and the other the abstract left action obtained from the \nabstract Gorenstein isomorphism. In fact, the left action of ${\\mathbf{T}}_{\\mathfrak{m}}$ on $\\op{Hom}({\\mathbf{T}}_{\\mathfrak{m}}, \\mathcal{O})$ coincides with the usual right action of ${\\mathbf{T}}_{\\mathfrak{m}}$ on $S_k(N, \\mathcal{O})_{\\mathfrak{m}}$. Indeed, the isomorphism \n\\[{\\mathbf{T}}_{\\mathfrak{m}}\\xrightarrow{\\sim} \\op{Hom}({\\mathbf{T}}_{\\mathfrak{m}}, \\mathcal{O})\\] is an isomorphism of ${\\mathbf{T}}_{\\mathfrak{m}}$-modules, so the two actions are seen to coincide. One deduces that there is a duality pairing\n\\begin{equation}\n\\label{integral-pairing}\n(\\;\\cdot, \\cdot)_N: S_k(N, \\mathcal{O})_\\mathfrak{m}\\times S_k(N,\\mathcal{O})_\\mathfrak{m} \\rightarrow \\mathcal{O},\n\\end{equation} \nwhich satisfies the equivariance condition $(f_1\\vert t, f_2) = (f_1, f_2\\vert t)$.\nThis is Hida's algebraic inner product (see \\cite{hida_1993}, Chapters 7 and 8). \nUnlike the usual Petersson product, it is linear in both variables,\n and the Hecke operators are self-adjoint. \n\nNow let $f\\in S(N, \\mathcal{O})$ denote our fixed eigenform, and consider the function \n\\begin{equation}\n\\label{alg-pairing}\n\\phi_f: v \\mapsto (f, v)_N,\n\\end{equation}\n for $v\\in S(N, \\mathcal{O})_\\mathfrak{m}$.\n Let $f^{\\perp}\\subset S(N, \\mathcal{O})_\\mathfrak{m}$ denote the kernel of $\\phi_f$, and let $\\eta_f = (f, f)_N = \\phi_f(f)$. \n We would like to say \n that the number $\\eta_f$ is nonzero. This is not true in general, but it will be true under the assumptions we have made in Section \n \\ref{assumptions-and-definitions}. 
Let $K$ denote the fraction field of $\mathcal{O}$.\n \n \begin{Lemma}\label{Tm is a field} Let $\mathcal{P}=\mathcal{P}_f$ denote the kernel\n of the homomorphism $\mathbf{T}_\mathfrak{m}({\mathcal O})\rightarrow \mathcal{O}$ associated to $f$. Then, there is an isomorphism\n $\mathbf{T}_\mathfrak{m}({\mathcal O})_{\mathcal{P}}\simeq K$.\n \end{Lemma}\n \n \begin{proof} Setting $S:=\mathbf{T}_\mathfrak{m}({\mathcal O})\otimes\mathbb{Q}_p$, we note that $S$ is a finite-dimensional algebra over $K$. Hence, $S$ is\n the product of local rings $R_i$, each of which is finite-dimensional over $K$. The ring $R_i$ corresponds to a height one prime ideal \n of $\mathbf{T}_\mathfrak{m}({\mathcal O})$. The localization of $\mathbf{T}_\mathfrak{m}({\mathcal O})$ at $\mathcal{P}$ is equal to one of the rings $R=R_i$. Thus, we are to show that $R$ is a field. \n The subalgebra $\mathbf{T}'$ of $\mathbf{T}_\mathfrak{m}({\mathcal O})\otimes\mathbb{Q}_p$ generated by the Hecke\n operators prime to the level is semisimple, and hence a product of fields $K_i\simeq K$, each corresponding to a newform of level dividing $N$. The image\n of $\mathbf{T}'$ in $R$ is thus a field $K\subset R$. Note that the summands $K_i$ are not necessarily in bijection with the summands $R_j$, since several of the $R_j$ may lie over the same $K_i$. We have a homomorphism $\mathbf{T}'\rightarrow K\hookrightarrow R$ with kernel\n $\mathcal{P}'$. By construction, the ideal $\mathcal{P}$ lies above $\mathcal{P}'$.\n \n\par Consider the subspace of forms in $S(N, {\mathcal O})_\mathfrak{m}\otimes\mathbb{Q}_p$ annihilated by the ideal $\mathcal{P}'$. \n By duality, it suffices to show that this subspace is $1$-dimensional over $K$. To achieve this, recall that \n the newform associated to $f$ is the form $g_0$ at level $M_0$, which differs from $N$\n only at the prime $p$, and at primes $q\in\Sigma$. Note that if $q\in\Sigma$ and $q\neq p$, then $U_qf=0$. 
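The eigenvalue computations in the case analysis below all rest on the standard action of $U_q$ on oldforms, which we record as a sketch (with the classical weight-$k$ normalization and trivial character; signs depend on conventions):

```latex
% If g_0 is a newform of weight k, trivial character, and level prime to q, then
\[
g_0(z)\vert U_q \;=\; a_q\, g_0(z) \;-\; q^{\,k-1}\, g_0(qz),
\qquad
g_0(q^i z)\vert U_q \;=\; g_0(q^{i-1} z) \quad (i \geq 1),
\]
% so on the span of g_0(z), g_0(qz), g_0(q^2 z) the characteristic polynomial of U_q is
\[
X\left( X^2 - a_q X + q^{\,k-1} \right) \;=\; X\,(X - \alpha_q)(X - \beta_q).
\]
```

In particular, for $q\neq p$ the nonzero roots $\alpha_q, \beta_q$ are $\mathfrak{p}$-adic units, while $0$ is the unique non-unit eigenvalue.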
\n \n \\par The argument follows by adding one prime at a time to the level. Let $q\\in\\Sigma$, and let\n $N_q = M_0q^{e_q}$, where $q^{e_q}$ is the largest power of $q$ dividing $N$. If $q=p$, let $N_p=M=M_0p$.\n Then consider the space $S_q$ given as follows.\n \\begin{itemize}\n \\item If $q=p$, and $g_0$ has level divisible by $p$, then $S_q$ is generated by $g_0=g$. \n \\item If $q=p$, and $g_0$ has level prime to $p$,\n then $S_p$ is spanned by $g_0(z)$ and $g_0(pz)$. \n \\item If $q\\neq p$ and $e_q=1$, so $g_0$ has level exactly divisible by $q$, $S_q$ is spanned by the $g_0(z), g_0(qz)$.\n \\item If $q\\neq p$ and $e(q)=2$, so $g_0$ has level prime to $q$, $S_q$ is spanned by \\[\\{g_0(z), g_0(qz), g_0(q^2z)\\}.\\]\n \\end{itemize}\n Each of these spaces\n is stable under the Hecke operator $U_q$, and is annihilated by $\\mathcal{P}'$. The eigenvalues of $U_q$ \n are given as follows.\n \n \\begin{itemize}\n \\item In the first case, $U_p$ has the eigenvalue \n $\\alpha_p$, which is a $\\mathfrak{p}$-adic unit.\n \\item In the second, the eigenvalues are $\\alpha_p$, $\\beta_p$, and $\\beta_p$ is a non-unit.\n \\item In the third, the eigenvalues are $\\alpha_p=\\pm 1$ and $0$.\n \\item In the fourth, we have $\\alpha_p, \\beta_p, 0$, and \n $\\alpha_p\\beta_p=q$, so both these numbers are units.\n \\end{itemize} \n We claim that, in each case, the localization of $S_q$ at $\\mathfrak{m}$ has dimension $1$. In the case when $q=p$, the localization of $S_q$ at $\\mathfrak{m}$ is one-dimensional. This is because $U_p$ is not $\\mathfrak{m}$ and one of the two eigenvalues $\\alpha_p$ and $\\beta_p$ is a $p$-adic unit and the other is not. On the other hand, in the case when $q\\neq p$,\n $U_q\\in \\mathfrak{m}$. 
Since $U_q$ has exactly one non-unit eigenvalue (here $q\neq p$), it follows that the localization of $S_q$ at $\mathfrak{m}$ has dimension $1$.\n \par An iteration of this argument over the primes $q$, using the fact that the level raising operators commute with Hecke operators away from the level, and replacing\n the form $g_0$ with the $1$-dimensional space produced in the previous step, implies that our space is $1$-dimensional. In greater detail, express $\Sigma$ as $\{q_1,\dots, q_r\}$ and for $m\leq r$, set $\Sigma_m:=\{q_1, \dots, q_m\}$, and let $N_m$ be the largest divisor of $N$ whose prime factors divide $M$ or lie in $\Sigma_m$. Assume that the subspace of $S_k(N_m, \mathcal{O})_{\mathfrak{m}}\otimes \mathbb{Q}_p$ which is annihilated by $\mathcal{P}'$ is one-dimensional over $K$, and let $g_m(z)$ be a generator of this one-dimensional space. Then, apply the same argument as above to $g_m(z)$ in place of $g_0(z)$, to prove that the subspace of $S_k(N_{m+1}, \mathcal{O})_{\mathfrak{m}}\otimes \mathbb{Q}_p$ which is annihilated by $\mathcal{P}'$ is one-dimensional over $K$. This inductive argument shows that the subspace of $S_k(N, \mathcal{O})_{\mathfrak{m}}\otimes \mathbb{Q}_p$ which is annihilated by $\mathcal{P}'$ is one-dimensional over $K$, and the result follows.\n\end{proof}\n \n \n\begin{Corollary} The quantity $\eta_f = (f, f)_N$ is non-zero.\n\end{Corollary}\n\n\begin{proof} It suffices to prove the corollary upon extending the pairing to $S(N, {\mathcal O})_\mathfrak{m}\otimes\mathbb{Q}_p\cong\oplus_i R_i$. Let $(\cdot, \cdot)_{i,j}$ be the restriction of the pairing $(\cdot , \cdot)_N$ to $R_i\times R_j$. It follows from Hecke-equivariance that this pairing $(\cdot, \cdot)_{i,j}$ is $0$ if $i\neq j$. Since the pairing $(\cdot, \cdot)_N$ is non-degenerate, it follows that \n\[(\cdot, \cdot)_{i,i}: R_i\times R_i\rightarrow K\] is non-zero. 
On the other hand, since Lemma \\ref{Tm is a field} gives $R_i\\simeq K$, it follows from $K$-linearity that $(x,x)\\neq 0$ for all $x\\in R_i$ such that $x\\neq 0$. Note that since $f$ is an eigenform, $f\\in R_i$ for some $i$, and hence, we have that $(f,f)_N\\neq 0$.\n \\end{proof}\n \n To continue, let $e$ denote a fixed generator of the rank-$1$ $\\mathbf{T}_\\frak{m}$-module $S(N, \\mathcal{O})_\\mathfrak{m}$,\nand consider the number $\\phi_f(e)= (f, e) \\in\\mathcal{O}$. Then \nany element of $S(N, \\mathcal{O})_\\mathfrak{m}$ is of the form $t\\cdot e$ for $t\\in \\mathbf{T}_\\mathfrak{m}$, so the Hecke equivariance of the pairing shows that \\[(f, t\\cdot e)_N=(f\\vert t, e)_N = a(1, f\\vert t)(f, e)_N.\\] In particular, it follows that $f^\\perp$ is the submodule \n$\\mathcal{P}S(N, \\mathcal{O})_\\mathfrak{m}$, and \\[S(N, \\mathcal{O})_\\mathfrak{m}\/f^\\perp\\cong (\\mathbf{T}_\\mathfrak{m}\\otimes \n\\mathcal{O})\/\\mathcal{P}(\\mathbf{T}_\\mathfrak{m}\\otimes \n\\mathcal{O})\\cong \\mathcal{O},\\]\nwhere $\\mathcal{P}$ is the kernel of the canonical homomorphism \n$\\mathbf{T}_\\mathfrak{m}({\\mathcal O})\\rightarrow \\mathcal{O}$ associated to $f$. Since $(f, f)_N$ is nonzero, we find that \n$f\\notin \\mathcal{P}S(N, \\mathcal{O})_\\mathfrak{m}$, and that the function $\\phi_f$ is determined by the nonzero number \n$\\eta_f = (f, f)_N\\in \\mathcal{O}$.\n\nNext we need to compare the algebraic pairing defined above to the usual Petersson inner product. Thus given a modular form $v(z)=\\sum a_nq^n\\in S(N, \\mathbf{C})$,\ndefine $v^c(z)=\\sum \\overline{a}_n q^n$, where the bar denotes complex conjugation. We define a modified Petersson product on $S(N, \\mathbf{C})$ by setting\n\\begin{equation}\n\\label{modified-petersson}\n\\{v, w\\}_N = \\langle v, w^c\\vert W_N\\rangle_N\n\\end{equation}\nwhere the pairing on the right is the Petersson product. 
One sees from the definition that $\\{\\cdot, \\cdot\\}_N$ is $\\mathbf{C}$-linear in both \nvariables, and that it satisfies $\\{v\\vert t, w\\}_N= \\{v, w\\vert t\\}_N$, for any Hecke operator $t$, just like the algebraic pairing defined above. \n\nRecall that $\\mathcal{O}$ is the completion of the ring of integers of a number field at a prime $\\mathfrak{p}$ \ncorresponding to an embedding into\n$\\mathbf{C}_p$. Making an identification of $\\mathbf{C}$ with $\\mathbf{C}_p$, we \nfind that the space $S(N, \\mathcal{O})_{\\mathfrak m}$ is equipped with\ntwo $\\mathbf{C}_p$-valued pairings $(\\cdot, \\cdot)_N$ and $\\{\\cdot, \\cdot\\}_N$. Each pairing is bilinear, and renders the Hecke operators self-adjoint. \nJust as in the algebraic case, we have a function $\\phi_f^\\infty: S(N, \\mathcal{O})\\rightarrow \\mathbf{C}_p$ defined by $v \\mapsto \\{f, v\\}_N$,\nand the adjointness implies that the kernel of $\\phi_f^\\infty$ is the submodule $\\mathcal{P}S(N, \\mathcal{O})_\\mathfrak{m}$. Thus we have two different\n$\\mathbf{C}_p$-valued functions on the rank 1 $\\mathcal{O}$-module $S(N, \\mathcal{O})_\\mathfrak{m}\/\\mathcal{P}S(N, \\mathcal{O})_\\mathfrak{m}$,\nand to compare them, it suffices to evaluate on any given element, say on $f$ itself. One is therefore led to consider\n$\\{f, f\\}_N = \\langle f, f^c\\vert W_N\\rangle_N$, in terms of the usual Petersson product. It is not clear from the definition that this number\nis nonzero; that it is so follows from the same argument that was used in the algebraic case above.\n\n\\begin{Definition}\n\\label{invariant-period}\nDefine a period associated to $f$ and the level $N$ via $\\Omega_{N} = \\frac{\\{f, f\\}_N}{(f, f)_N}$. \n\\end{Definition}\n\nAs stated, this definition depends on the level $N$. We would like to claim\nthat in fact $\\Omega_{N}$ is independent of $N$, and depends only on the \n$p$-stabilized newform $g$, up to a ${\\mathfrak p}$-adic unit. 
More precisely, we would \nlike to assert that $$\\Omega_N = \\text{unit}\\cdot \\Omega_{M}$$ where $M=M_0p$.\nHere $\\Omega_M$ is defined by the same prescription as before:\n$$\\Omega_M =\\frac{\\{g, g\\}_M}{(g, g)_M}$$\nwhere the pairings at level $M$ are derived from the Gorenstein condition and the modified\nPetersson product at level $M$. This is Hida's canonical period. The construction is easier\nin this case, since multiplicity one at level $M$ is automatic.\n\nUnfortunately, we cannot quite prove this claim for general weight $k$. \nThe case of weight 2 is known -- this is due\nto Diamond, see \\cite[Theorem 4.2]{dpnas},\nand relies on Ihara's lemma. While there are various\nversions of Ihara's lemma known for weight $k> 2$, the specific variant\nneeded here does not seem to be available.\n\nThus, we will state the precise variant of Ihara's lemma that we need, and make some remarks\nabout what is known and what is required. \nWe will then prove the independence of the period from the \nauxiliary level under the assumption that a suitable Ihara-type lemma holds.\n\nTo set the framework, fix a prime \n$p\\geq 5$, and consider integers\n$A$, $B$ (the levels), together with an auxiliary odd prime $q\\neq p$. \nWe assume that $A\\vert B$, and that one of \nthe two following conditions holds:\n\\begin{enumerate}\n \\item $B =q^2A$, and $(A,q)=1$, or\n \\item $B=qA$ and $A$ is divisible by precisely the first power of $q$. \n\\end{enumerate}\nLet ${\\mathbf{T}}_A$ and ${\\mathbf{T}}_B$ denote the Hecke rings generated by all the Hecke\noperators, including $U_q$ or $T_q$, at levels $A$ and $B$ respectively. Recall that we assume\ntrivial nebentypus character, so the group in question is $\\Gamma_0$. \nLet $S(A), S(B)$ denote the lattices of cuspforms of levels $A, B$ respectively\nwhose Fourier coefficients are in ${\\mathcal O}$. Let $S(A, {\\mathbf{C}}), S(B, {\\mathbf{C}})$ denote\nthe corresponding complex vector spaces. 
We have Hecke-equivariant,\n${\\mathbf{C}}$-bilinear, perfect analytic pairings\n$\\{\\cdot, \\cdot\\}_A:S(A, {\\mathbf{C}})\\times S(A, {\\mathbf{C}})\\rightarrow{\\mathbf{C}}$ \nand $\\{\\cdot, \\cdot\\}_B: S(B, {\\mathbf{C}})\\times S(B, {\\mathbf{C}})\\rightarrow{\\mathbf{C}}$,\ndefined as above. \nThen let $L_A, L_B$ denote the lattices in $S(A, {\\mathbf{C}}), S(B, {\\mathbf{C}})$ that are\n${\\mathcal O}$-dual to $S(A), S(B)$ respectively. Namely, we have $x\\in L_A$ if and only\nif $\\{x, s\\}_A\\in{\\mathcal O}$, for all $s\\in S(A)$, and similarly for $S(B), L_B$. Then\nwe have $L_A\\cong {\\mathbf{T}}_A$ as a ${\\mathbf{T}}_A$-module, and similarly $L_B\\cong {\\mathbf{T}}_B$ over\n${\\mathbf{T}}_B$. \n\nNext, we define a map $\\tau:S(A, {\\mathbf{C}})\\rightarrow S(B,{\\mathbf{C}})$, as follows.\nLet $h = h(z)\\in S(A,{\\mathbf{C}})$, where $z$ denotes a variable in the upper half plane.\nWe define $\\tau$ via \n\\[\\tau\\left(h(z)\\right) = \\begin{cases} h(z) - (U_q h)(qz) &\\text{ if }B=Aq,\\\\\nh(z) - (T_q h)(qz) + q^{k-1}h(q^2z) &\\text{ if }B=Aq^2.\n\\end{cases}\\]\nIt is clear that $\\tau\\left(S(A)\\right)\\subset S(B)$, and that the image is stable\nunder ${\\mathbf{T}}_B$. To check the stability under $U_q$, one can calculate explicitly that\n$U_q=0$ on the image. Thus $\\tau$ is a map that removes the Euler factor at $q$.\n\nNow let $h_A\\in S(A)$ be a modular form that is an eigenvector for every \nelement $t\\in{\\mathbf{T}}_A$. Let $\\mathcal{P}_A$ denote the kernel of the homomorphism $\\phi_A:{\\mathbf{T}}_A\\rightarrow {\\mathcal O}$ associated to $h_A$. \nLet ${\\mathfrak m}_A$ be the maximal ideal of ${\\mathbf{T}}_A$ given by\nthe inverse image of the maximal ideal of ${\\mathcal O}$ under $\\phi_A$. \nThen, $h_B=\\tau(h_A)$ is an eigenvector for ${\\mathbf{T}}_B$ (in fact, with $U_qh_B=0$). 
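The vanishing of $U_q$ on the image of $\\tau$ can be seen directly on $q$-expansions; we sketch the computation, under the standing assumption of trivial nebentypus. Write $h(z)=\\sum_n a_n q^n$, so that $U_q h=\\sum_n a_{nq}q^n$, $T_q h=\\sum_n (a_{nq}+q^{k-1}a_{n\/q})q^n$ (with the convention that $a_{n\/q}=0$ when $q\\nmid n$), while $h(qz)=\\sum_n a_n q^{nq}$. Comparing coefficients in either case of the definition of $\\tau$, one finds\n\\[\\tau(h) = \\sum_{q\\nmid n} a_n q^n,\\]\nsince the correction terms cancel the coefficients at indices divisible by $q$. The $n$-th coefficient of $U_q\\tau(h)$ is the $nq$-th coefficient of $\\tau(h)$, which therefore vanishes. In particular, when $h$ is an eigenform, the Dirichlet series of $\\tau(h)$ is that of $h$ with the Euler factor at $q$ removed, as claimed. 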
\nWe may repeat the constructions above for $h_B$, and obtain a height\none prime $\\mathcal{P}_B$ and a maximal ideal ${\\mathfrak m}_B$ inside ${\\mathbf{T}}_B$.\nThe ideals $\\mathcal{P}_A, \\mathcal{P}_B, {\\mathfrak m}_A, {\\mathfrak m}_B$ are required to satisfy additional properties, which we record below:\n\\begin{itemize}\n \\item The localizations of ${\\mathbf{T}}_A, {\\mathbf{T}}_B$ at ${\\mathfrak m}_A, {\\mathfrak m}_B$ respectively are\n both Gorenstein, and \n \\item The localizations of ${\\mathbf{T}}_A, {\\mathbf{T}}_B$ at $\\mathcal{P}_A, \\mathcal{P}_B$ are fields isomorphic\n to the fraction field $K$ of ${\\mathcal O}$.\n\\end{itemize}\nIn our case, we have assumed that ${\\mathfrak m}_A$ and ${\\mathfrak m}_B$ are such that\nthe residual representations at ${\\mathfrak m}_A$ and ${\\mathfrak m}_B$ are absolutely irreducible and $p$-distinguished. Then, the first property is satisfied. It follows from Lemma \\ref{Tm is a field} that the second condition is also satisfied.\n\nWith these assumptions in place, we can now state the Ihara-type results that we need.\n\n\\begin{hyp}\\label{hypothesis ihara} With the conditions and notations above, and any choice \nof $A, B, q$ as above, we have\n\\begin{itemize}\n \\item (Ihara-1) We have $\\tau(L_A)\\subset L_B$, and\n \\item (Ihara-2) $L_B\/\\tau(L_A)$ is ${\\mathcal O}$-torsion-free.\n\\end{itemize}\n\\end{hyp}\n\n\\begin{Remark} As we have already said, \nwe cannot prove these hypotheses in full generality. Thus we limit ourselves\nto some general comments\nhere about their validity. \nThe map $\\tau: S(A,{\\mathbf{C}}) \\rightarrow S(B, {\\mathbf{C}})$ is completely\nexplicit in terms of the usual degeneracy maps of modular curves. To analyze \nthe map $L_A\\rightarrow S(B, {\\mathbf{C}})$ in Ihara-1, and to check that $\\tau$ carries $L_A$\nto $L_B$, one has to dualize everything. 
\nThis can be carried out without \nundue difficulty, although there are various compatibilities to check. \nFor the details,\nwe refer to the forthcoming thesis of Maletto.\n\n\\par However, Ihara-2 is much more delicate. The only known approach\nis to relate the lattices\n$L_A, L_B$ to the parabolic cohomology of the modular groups in question, and \nthen prove the corresponding results for group cohomology. This is standard\nfor weight 2 (see \\cite{wil95}). In the case of weight $k > 2$ and auxiliary level\n$q$, cohomological results due to Diamond for $p> k-2$, and to Manning-Shotton \\cite{manning2021ihara}\nfor general odd $p$, presumably imply the result, although in the case of small\n$p$, one has to be careful with the dualities, since the pairings on the coefficient\nmodules in the cohomology are not in general perfect. For all this, and the missing\ncase of auxiliary level $q^2$, which is not treated at all in the literature,\nwe refer once again to work in progress of Maletto.\n\\end{Remark}\n\nNow we return to the situation of Definition \\ref{invariant-period}. Consider\na $p$-stabilized newform $g$ of level $M$, and the oldform $f$ of level \n$N$ associated to a choice of $\\Sigma$ as before. We want to show that the periods\nat level $N$ and $M$ are equal up to a unit. This turns out to be a simple\ninductive argument, once Ihara's lemma is known. \n\n\\begin{Lemma} Suppose that the Hypotheses Ihara-1 and Ihara-2 hold, for all\n$A, B, q$ as above. Then $\\Omega_N = u\\Omega_M$ for some $p$-adic unit $u$.\n\\end{Lemma}\n\n\n\\begin{proof} We start at level $M$, and work our way upwards, adding one\nprime at a time. To spell out the induction, \nwe start with a modular form $h_A$ at level $A$, and we move up\nto level $B=Aq$ or $Aq^2$, and replace $h_A$ with the $q$-depleted form $h_B$.\nIn this situation, we are required to show that the periods of $h_A$ and $h_B$\nare equal up to a unit. 
\n\nIt is clear from the definition of the periods that $\\Omega_A$ at level $A$ is \ncharacterized up to a unit by the properties:\n\\begin{itemize}\n \\item $\\delta_A:=\\Omega_A^{-1}\\cdot h_A$ is contained in $L_A$,\n \\item $L_A\/{\\mathcal O}\\delta_A$\nis torsion-free.\n\\end{itemize} Similarly, the period $\\Omega_B$ at level $B$ is characterized\nby:\n\\begin{itemize}\n \\item $\\delta_B:=\\Omega_B^{-1}\\cdot h_B$ is contained in $L_B$,\n \\item $L_B\/{\\mathcal O}\\delta_B$\nis torsion-free.\n\\end{itemize} Since $\\tau(h_A)=h_B$ by definition, Ihara-1 \nshows that $\\delta_B':=\\tau(\\delta_A)=\\tau(\\Omega_A^{-1}h_A)$ is contained in $L_B$. Let $u$ be such that $\\delta_B=u \\delta_B'$. We show that $u$ is a $p$-adic unit. We have that $\\delta_B'\\in L_B$ and $L_B\/\\mathcal{O} \\delta_B$ is torsion-free. Hence, $u^{-1}$ is contained in $\\mathcal{O}$. According to\nIhara-2, $L_B\/\\tau(L_A)$ is torsion-free. We have that $u^{-1}\\delta_B=\\delta_B'=\\tau(\\delta_A)\\in \\tau(L_A)$. Hence, $\\delta_B$ is contained in $\\tau(L_A)$, and we write $\\delta_B=\\tau(\\eta_A)$ for some $\\eta_A\\in L_A$. Since the map $\\tau$ is injective, it follows that $u^{-1} \\eta_A=\\delta_A$. Since $L_A\/\\mathcal{O}\\delta_A$ is torsion-free, it follows that $u\\in \\mathcal{O}$. We have shown that $u\\in \\mathcal{O}$ and $u^{-1}\\in \\mathcal{O}$, so we have deduced that $u\\in \\mathcal{O}^\\times$.\nTherefore $\\Omega_A=u\\Omega_B$ for some unit $u$.\n\\end{proof}\n\n\\begin{Remark}\nTo get a nice formula at the end, and to check that our final\nformulae agree with those in \\cite{lz16}, we need to\ncalculate further. \nWe have shown above that the ratio of the algebraic\nand analytic pairings is the same at level $M$ as at level $N$. It remains to express everything in terms of the \nnewform $g_0$ associated to $f$ and $g$. In other words, we have to bring everything down to level $M_0$. 
There are\ntwo cases to consider, depending on whether the form $g_0$ is old or new at $p$ (so $M=M_0p$ \nor $M=M_0$). \n\nStart with the case that $g_0$ is old at $p$. Then a further calculation (which we omit) shows that \n$\\{g, g\\}_M = E_p\\cdot \\langle g_0, g_0\\rangle_{M_0}$, where\n$E_p = \\pm p^{1-k\/2} \\alpha_p(1-\\frac{p^{k-2}}{\\alpha_p^2})(1-\\frac{p^{k-1}}{\\alpha_p^2})$, and \n$g_0$ is the newform of level $M_0$ associated\nto $f$, and $\\alpha_p$ is the unit root of the Hecke polynomial. The factors\n$1-\\frac{p^{k-2}}{\\alpha_p^2}$ and $1-\\frac{p^{k-1}}{\\alpha_p^2}$ are units for $k>2$. \nWhen $k=2$, the term $1-1\/\\alpha_p^2$\nmay be a non-unit; this is so precisely when $g_0$ is congruent to a $p$-new form of level $pM_0$.\nThe number $1-1\/\\alpha_p^2$ is the relative congruence number of Ribet. \n\nIn this situation, we define $$(g_0, g_0) = \\frac{(g, g)_{M}}{1-1\/\\alpha_p^2}.$$\nNote that $\\alpha_p\\neq \\pm 1$, by the Weil bounds.\nWe remark that it can be shown that in fact $(g_0, g_0)$ as defined above coincides with the pairing of $g_0$ with \nitself defined via a Gorenstein pairing at level $M_0$ (as opposed to the $p$-stabilized level \n$M=M_0p$). We do not need this result, but mention it simply to justify the notation. We refer the reader to \\cite{wil95}, Chapter 2, Section 2, for a full discussion of relative congruence numbers in weight 2.\n\nIf $f$ is new at $p$ (and hence of weight 2), then of course $\\{f, f\\}_M = E_p \\langle g, g\\rangle_M$\nwhere $E_p=\\pm 1$ is the eigenvalue of the Fricke involution. \n\nThus, we define a canonical period $\\Omega$ associated to the newform $g_0$ corresponding to $f$ as follows:\n$$\\Omega =\\Omega_{g} = {\\Omega_M}.$$\n\nEvidently, $\\Omega_M = \\op{unit}\\cdot {\\Omega_N}$. \nWe can evaluate the period $\\Omega$ more explicitly, as follows. 
If $f$ is old at $p$, we have \n$$\\Omega_M = \\frac{\\{g, g\\}_M}{( g, g)_M} = E_p \\frac{\\langle g_0, g_0\\rangle_{M_0}}{( g, g)_{M}}=\n\\text{unit}\\cdot p^{1-k\/2}\\frac{\\langle g_0, g_0\\rangle_{M_0}}{( g_0, g_0)_{M_0}}.\n$$\nIf $f$ is new at $p$, so that $g=g_0$ and $M=M_0$, and we are in weight 2, we have \n$$\n\\Omega_M = \\frac{\\{g, g\\}_M}{( g, g)_M} = E_p \\frac{\\langle g_0, g_0\\rangle_{M_0}}{( g_0, g_0)_{M_0}}\n=\\text{unit} \\cdot \\frac{\\langle g, g\\rangle_M}{( g, g)_M}.\n$$\n\nA common way of expressing the above formulae is \n\\begin{equation}\n\\label{can-period}\n\\Omega = \\Omega_g = \\text{unit}\\cdot p^{1-k\/2}\\cdot \\frac{\\langle g_0, g_0\\rangle_{M_0}}{( g_0, g_0)_{M_0}}.\n\\end{equation}\n\n\\end{Remark}\n\n\\begin{Remark} We remind the reader that $g$ is the $p$-stabilized newform associated to the newform $g_0$. The numerator\nin the quotient appearing in the expression above is the usual Petersson norm of the newform $g_0$. The quotient in the expression above is precisely \nthe period appearing in \\cite{lz16}. The additional \nfactor $p^{1-k\/2}$ which shows up when \n$k >2$ is important -- it appears in the formulae (\\ref{petersson-formula}), and those of Schmidt in \n\\cite{schmidt86}, \\cite{schmidt88}, where it is simply carried \naround, and the eventual result is proven only up to a rational\nconstant. 
As we will see, the factor appearing in our $\\Omega$\nis exactly what is needed to cancel unwanted powers of $p$ arising from (\\ref{petersson-formula}).\n\\end{Remark}\n\nIf $h\\in S(N,{\\mathcal O})_{\\mathfrak m}$ is arbitrary, then we have $\\frac{(f, h)_N}{(f, f)_N}=\n\\frac{\\{f, h\\}_N}{\\{f, f\\}_N} = \\frac{\\{f, h\\}_N}{(f, f)_N\\Omega_{N}}$.\nIn view of the independence of the period on the level, we get the following key evaluation formula, valid for any $h\\in S(N, {\\mathcal O})_{\\mathfrak m}$:\n\n\n \\begin{Proposition} \n \\label{evaluation-formula} Assume that the Ihara hypotheses are valid. Then\nwe have $(f, h)_N = \\frac{\\{f, h\\}_N}{\\Omega_{N}} =\\op{unit}\\cdot \\frac{\\{f, h\\}_N}{\\Omega_{g}}$. The quantity $ \\frac{\\{f, h\\}_N}{\\Omega_{g}}=\n\\op{unit}\\cdot p^{k\/2-1}\\cdot\n \\frac{\\{f, h\\}_N\\cdot (g_0, g_0)_{M_0}}{\\langle g_0, g_0\\rangle_{M_0}} $ is ${\\mathfrak p}$-integral.\n \\end{Proposition}\n\n\\begin{Remark} Our next task will be to apply the machinery developed above to the case where $h=\\mathcal{H}_{\\chi}(n)$ is derived from a product \nof a theta\nseries and an Eisenstein series. However, there are two problems. First, this product is unlikely to be cuspidal, and second, it is not an element \nof $S(N, {\\mathcal O})_{\\mathfrak m}$. Some care is therefore required. The number $\\{f, h\\}_N = \\langle f, h^c\\circ W_N\\rangle_N$ \n makes sense for any $h\\in M_k(N, {\\mathcal O})\\otimes\\bar{\\Q}$, since $f$ is cuspidal. Since the maximal ideal ${\\mathfrak m}$ corresponding to $f$ is residually irreducible,\n we find that if $e_{\\mathfrak m}$ is the idempotent in the Hecke algebra ${\\mathbf{T}}=\\oplus {\\mathbf{T}}_{{\\mathfrak m}_i}$ corresponding\nto the maximal ideal ${\\mathfrak m}$, then $h\\circ e_{\\mathfrak m}$ is cuspidal for any modular form $h$ of level $N$ and weight $k$. We claim now\n that $\\{f, h\\circ e_{\\mathfrak m} \\}_N=\\{f, h\\}_N$. 
To see this, note that $f\\circ e_{\\mathfrak m} = f$, so we get $$\\langle f, h^c\\circ W_N\\rangle_N = \n \\langle f\\circ e_{\\mathfrak m}, h^c\\circ W_N\\rangle_N = \\langle f, h^c\\circ W_N\\circ e_{\\mathfrak m}^*\\rangle_N = \\langle f, h^c\\circ e_{\\mathfrak m}\\circ W_N\\rangle_N.$$ \n \n \n \nThus we may replace $h$ with $h\\circ e_{\\mathfrak m}$, and define $( f, h)_N = (f, h\\circ e_{\\mathfrak m})_N$ for any $h\\in M_k(N, {\\mathcal O})\\otimes\\bar{\\Q}$.\nThen the same formalism as above applies. In particular, \nwe still have $\\frac{(f, h)_N}{(f, f)_N}=\n\\frac{\\{f, h\\}_N}{\\{f, f\\}_N} = \\frac{\\{f, h\\}_N}{(f, f)_N\\Omega_{N}}$, and the conclusion of Proposition \\ref{evaluation-formula} applies without change.\n\\end{Remark}\n\n\\subsection{Integrality}\nIn view of the considerations above, we are led to compute the algebraic pairings $( f, \\mathcal{H}_\\chi(n))_N$ and $(f, \\mathcal{H}^m_\\chi(n))_N$, with\n$\\mathcal{H}_\\chi(n)$ and $\\mathcal{H}_\\chi^m(n)$ being as defined in (\\ref{hchi}) and (\\ref{hchim}). We know already that \n$\\mathcal{H}_\\chi(n)$ and $\\mathcal{H}_\\chi^m(n)$ are integral, hence the corresponding pairings are integral as well. It remains only to\nrelate them to special values of $L$-functions. The starting point is Proposition \\ref{evaluation-formula}, which reduces the calculation\nto that of the analytic pairings $\\{f, \\mathcal{H}_\\chi(n)\\}_N$ and $\\{f, \\mathcal{H}^m_\\chi(n)\\}_N$.\n\nRecall that $\\chi=\\psi\\eta$ with $\\eta$ ramified of conductor $p^{m_\\chi}$. Pick any $m\\geq m_\\chi$. \nWe start with $\\{f, \\mathcal{H}_\\chi^m(n)\\}_N = \\langle f, T_\\psi(\\tilde{H}_\\chi(n)^{\\op{hol}}\\circ U_p^{2m-1})^c\\circ W_N\\rangle$. 
Since $T_\\psi$\nand complex conjugation commute, we get\n\\begin{align*}\n\\{f, \\mathcal{H}_\\chi^m(n)\\}_N &= \\langle f, T_\\psi(\\tilde{H}_\\chi(n)^{\\op{hol}}\\circ U_p^{2m-1})^c\\circ W_N\\rangle_N\\\\\n& = \\langle f\\circ W_N, \\tilde{H}_{\\overline\\chi}(n)^{\\op{hol}}\\circ U_p^{2m-1}\\circ W_{c_\\psi^2}\\circ t_\\psi \\rangle_N\\\\\n& = \\langle f\\circ W_N, \\tilde{H}_{\\overline\\chi}(n)^{\\op{hol}}\\circ U_p^{2m-1}\\circ W_{c_\\psi^2}\\rangle_{N_\\psi}\\\\\n& =C\\cdot \\langle f\\circ W_N, {H}_{\\overline\\chi}(n)^{\\op{hol}}\\circ W_{N_\\chi}\\circ U_p^{2m-1}\\circ W_{c_\\psi^2}\\rangle_{N_\\psi}\\\\\n& = C \\cdot \\langle f\\circ W_N, {H}_{\\overline\\chi}(n)\\circ W_{N_\\chi}\\circ U_p^{2m-1}\\circ W_{c_\\psi^2}\\rangle_{N_\\psi}\\\\\n& = C \\cdot \\langle f, {H}_{\\overline\\chi}(n)\\circ W_{N_\\chi}\\circ U_p^{2m-1}\\circ W_{Nc_\\psi^2}\\rangle_{N_\\psi}.\n\\end{align*}\n\nThe constant $C$ comes from the definition of $\\tilde{H}_\\chi(n)$. In the last equality, we have used the fact that \nthe matrix defined by Atkin-Lehner in \\cite{al}, bottom of page 138, giving the involution $W_N$ at level $N_\\psi$ also satisfies the definition of $W_N$\nat level $N$. This fact can also be seen from the point of view of representation theory, since the Atkin-Lehner operators\nadmit a purely local definition.\n\nRecall\nequation (\\ref{petersson-formula}), which states that\n\\begin{equation*}\n(4\\pi)^{-n\/2}\\Gamma(n\/2)p^{(2m_\\chi-1)(k\/2-1)} D_f(\\chi,n) = \\alpha_p^{2(m_\\chi-m)} \\langle f, H_{\\overline\\chi}(n)\\circ W_{N_\\chi}\\circ U_p^{2m-1}\\circ W_{N_\\psi}\\rangle_{N_\\psi}.\n\\end{equation*}\n\nUsing this formula, and plugging in all the definitions, we obtain\n\\begin{Corollary} \n\\label{petersson-formula-explicit}\nSuppose that $\\eta$ is ramified and $m\\geq m_\\chi$ is any integer. 
\nThere exists a ${\\mathfrak p}$-adic unit $u$ depending only on $n$ and $k$ such \nthat we have $$u\\cdot \\frac{p^{1-k\/2}}{\\pi^{n}}\\left( \\frac{p^{n-1}}{\\psi(p)}\\right)^{m_\\chi} \\left(\\frac{1}{\\alpha_p^2}\\right)^{m-m_\\chi}\n\\cdot g(\\overline{\\eta})\\cdot D_f(\\chi,n) = \n \\{ f, \\mathcal{H}^m_\\chi(n)\\}_N,$$ where $\\mathcal{H}_\\chi^m(n) = T_\\psi(\\tilde{H}_\\chi(n)^{\\text{hol}}\\circ U_p^{2m-1})\\in M_k(N, {\\mathcal O})\\otimes\\bar{\\Q}$\n has $\\mathfrak{p}$-integral coefficients, and \n $\\tilde{H}_\\chi(n)= \\frac{\\Gamma((n+1)\/2)}{\\pi^{(1+n)\/2}} p^{m_\\chi(3-2k+2n)\/2} \\cdot \\frac{\\sqrt{c_\\psi p^{m_\\chi}}}{g(\\chi)} \\cdot H_{\\chi}(n)\\circ W_{N_\\chi}$.\n\n\\end{Corollary}\n\n\\begin{proof} This is a direct computation, using Lemmas \\ref{eisen-fourier} and \\ref{theta-fourier}, \nand applying the doubling formula for the $\\Gamma$-function in the formula for the coefficients $d_{j,\\nu}$; see also Lemma 4.2 of \\cite{schmidt86}. \nOne also has to use the factorization $g(\\chi) = \\psi(p^{m_\\chi})\\eta(c_\\psi)g(\\psi)g(\\eta)$. \nThe constant $u$ collects all the various powers of $2$ and $i$, and other quantities prime to ${\\mathfrak p}$. \n\\end{proof}\n\n\nObserve that the formula above contains the nuisance factor $p^{1-k\/2}$, which also appears in our period. 
\nAssuming Hypothesis \\ref{hypothesis ihara}, so that $\\Omega_{g}=\\Omega_M = \\text{unit}\\cdot \\Omega_N$, \nwhere $g$ is the $p$-stabilized newform at level $M_0p$, \nand plugging in (\\ref{can-period}), \n we find that \n \\begin{align}\n \\label{integrality-formula-1}\n (f, \\mathcal{H}_\\chi^m(n))_N & = u\\cdot p^{1-k\/2}\\cdot \\left( \\frac{p^{n-1}}{\\psi(p)}\\right)^{m_\\chi} \\left(\\frac{1}{\\alpha_p^2}\\right)^{m-m_\\chi}\n\\cdot \\Gamma(n)\\cdot G(\\overline{\\eta})\\cdot\\frac{D_f(\\chi,n)}{\\pi^n \\Omega} \\\\\n& =u' \\cdot \\left( \\frac{p^{n-1}}{\\psi(p)}\\right)^{m_\\chi} \\cdot \\left(\\frac{1}{\\alpha_p^2}\\right)^{m-m_\\chi}\n\\cdot \\Gamma(n) \\cdot G(\\overline{\\eta})\\cdot\\frac{(g_0, g_0)_{M_0}}{\\pi^n \\langle g_0, g_0\\rangle_{M_0}}\\cdot D_f(\\chi,n).\n\\end{align}\nHere $u'$ is some other unit, independent of $\\chi$. \n\nFinally, we have to deal with $\\mathcal{H}_\\chi(n)\\circ e$. It is not hard to see that the twisted trace operator $T_\\psi$, which goes from level $N_\\psi$ to\nlevel $N$, commutes with the Hecke operator $U_p$, since $N_\\psi\/N$ has no common factor with $p$. There does not seem to be any particularly\npleasant way to deduce this fact from a classical perspective where the trace operator is given by matrices with rational integer entries,\nbut it is more or\nless obvious from the point of view of representation theory, since the local trace involves primes away from $p$, while $U_p$ is concentrated at $p$. \nIt is also evident if one considers modular forms as functions on test objects on moduli spaces of enhanced elliptic curves -- $U_p$ is a sum over certain\nsubgroups of order $p$, while the trace from level $N_\\psi$ involves subgroups of order prime to $p$. 
Anyway, we take this fact for granted, so that\n$$\\mathcal{H}_\\chi^m(n) = T_\\psi(\\tilde{H}_\\chi(n)^{\\text{hol}}\\circ U_p^{2m-1}) = \\mathcal{H}_\\chi^m(n) =( T_\\psi(\\tilde{H}_\\chi(n)^{\\text{hol}})\\circ U_p^{2m-1}.$$\nWe may then consider an suitable increasing sequence of integers $m$, divisible by $p-1$, so that the forms $\\mathcal{H}_\\chi^m(n) \\circ U_p$ converge to \n$\\mathcal{H}_\\chi(n)\\circ e$. The algebraic inner product is $p$-adically continuous as a function of the second variable, since it is a bounded ${\\mathcal O}$-linear functional,\nand we conclude that \n\\begin{equation}\n\\label{integrality-formula-2}\n (f, \\mathcal{H}_\\chi(n)\\circ e)_N =\\op{unit} \\cdot \\left( \\frac{p^{n-1}}{\\psi(p)\\alpha_p^2}\\right)^{m_\\chi} \n\\cdot \\Gamma(n) \\cdot G(\\overline{\\eta})\\cdot\\frac{(g_0, g_0)_{M_0}}{\\pi^n \\langle g_0, g_0\\rangle_{M_0}}\\cdot D_f(\\chi,n).\n\\end{equation} In particular, the right-hand side is integral.\n\nObserve that the quantity on the right is (up to the unit factor) \nthe one appearing in the definition of the\n$\\psi$-twisted $p$-adic L-function of Schmidt (with $\\psi$ being fixed, and $\\eta$ varying over characters of \n$p$-power conductor; see \\cite{schmidt86}, Theorems 3 and 4, or \\cite{lz16}, Theorem 2.3.2,\nwhere a formula for the $p$-adic L-function for $D_g(\\psi, s)$ is given. Observe that our formulation\nis slightly different than the one given in these references, owing to the the fact that we have normalized the period to give\na function that is integral, rather than simply bounded. Furthermore, our formulation incorporates the normalization factor\nof the congruence number $(g, g)_M$ alluded to in Proposition 2.3.5 of \\cite{lz16}. \n\n\n\n\n\\subsection{Level raising and congruences}\n\\label{congruence-section}\nIn view of the construction given above, it is more or less clear at this point to verify that that the $p$-adic L-functions satisfy\ngood congruences. 
To state the result, consider two $\\mathfrak{p}$-ordinary and $\\mathfrak{p}$-stabilized\nnewforms $g_1, g_2$, which are such that the residual \nrepresentations $\\overline{\\rho}_{g_1}$ and $\\overline\\rho_{g_2}$ are isomorphic. We assume, as always, that the nebentypus\ncharacter is trivial. Let $M_0$ denote the prime-to-$p$\npart of the Artin conductor of the common residual representation $\\overline{\\rho}$. Then each $g_i$ has level $M_i$ divisible by $M=M_0p$. Furthermore, each \n$M_i$ is divisible\nby precisely the first power of $p$, and if\n $q\\vert M_i\/M$, then $\\text{ord}_q(M_i\/M)= 1, 2$, by Lemma \\ref{dtlemma}. \n \nIt is evident that the eigenvalues $a(q,g_i)$ are congruent modulo $\\mathfrak{p}$ for all primes $q$ away from $M_1M_2$.\nOur goal is to adjust the $g_i$ so that the Hecke eigenvalues are congruent at \\emph{all} primes. In order to apply the previous \narguments involving semisimplicity of the relevant local components of the Hecke algebra, \nwe want to ensure in fact that the eigenvalues of $T_q$ at the primes $q\\neq p$ that divide either $M_1$ or\n$M_2$ are simply equal to zero. Furthermore, both forms have to end up at the same level, and the level has to be sufficiently\nsmall as to maintain control over the semisimplicity of the bad Hecke operators. As we have already remarked\nin the introduction, it is clear that \nstriking out Euler factors gives forms with congruent Hecke eigenvalues, but it is not\nat all clear that the forms so obtained actually have the same levels. \n\nWe remark also that this part of the argument \nrelies heavily on the fact that we are dealing with trivial central character. \n\nThus, fix $i$, and let $j$ denote\nthe other index. We will define two sets $\\Sigma_i, \\Sigma_j$ of primes $q$ and then take $\\Sigma:=\\Sigma_1\\cup \\Sigma_2$.\nIt suffices to just define $\\Sigma_i$, since the prescription for $\\Sigma_j$ is the same. \nConsider first the prime $2$. 
If $4$ divides $M_i$, we do nothing, since the $U_2$ eigenvalue is already zero. If $4\\nmid M_i$, \nwe put $2$ in $\\Sigma_i$. This multiplies the level by either $2$ or $4$. Now consider an odd prime $q$. The cases are as follows.\n\\begin{enumerate}\n\\item Suppose that $M_i$ is divisible by $q^2$. In this case, we do nothing. \n\\item Suppose that $M_i$ is divisible by precisely the first power of $q$. In this case, we place $q$ in $\\Sigma_i$. \n\\item Suppose that $q$ does not divide $M_i$. If $q$ divides $M_j$, then we put $q$ in $\\Sigma_i$. \n\\end{enumerate}\n\nFinally, we may enlarge both sets $\\Sigma_i$ and $\\Sigma_j$ by adding (to both!)\n finitely many other primes that do not divide either $M_i$ or $M_j$, or the conductor \nof $\\psi$ (which is assumed to be relatively prime to $2M_1M_2$). \n\nWith this choice of $\\Sigma_i, \\Sigma_j$, we replace $g_i$ (resp. $g_j$) with\n$f_i$ (resp. $f_j$) by striking out coefficients in the Fourier expansion of $g_i$ (resp. $g_j$) at the primes in \n$\\Sigma_i$ (resp. $\\Sigma_j$). Then we claim that $f_i$ and $f_j$ have the same level. Most of the following lemma\nis obvious, except for the second statement. It is harder to state the lemma clearly than to prove it. \n\n\\begin{Lemma} The following statements hold.\n\\begin{enumerate}\n\\item The sets $\\Sigma_i$ and $\\Sigma_j$ are such that $q\\in\\Sigma_i$ implies $\\text{ord}_q(M_i)\\leq 1$, and similarly for $\\Sigma_j$. \n\\item The forms $f_i, f_j$ are of the same \nlevel $N$, where $N\/M_i$ and $N\/M_j$ are cube-free. \n\\item If $N\/M_i$ is divisible by $q^2$, then $q\\nmid M_i$, and similarly for $M_{j}$.\n\\item If $N\/M_i$ is divisible by exactly the first power of $q$, then $M_i$ is divisible by precisely\nthe first power of $q$, and similarly for the index $j$.\n\\item Each\nof $f_i$ and $f_j$ is an eigenvector for $U_q$ with eigenvalue $0$, for any prime $q\\neq p$ which divides $N$. 
\n\\item Each of $f_i$ and $f_j$ is an eigenvector for the operator $U_p$, with eigenvalues $\\alpha_i(p)$ and\n$\\alpha_j(p)$ respectively, and we have $\\alpha_i(p)\\equiv\\alpha_j(p)\\pmod{\\mathfrak{p}}$. \n\\end{enumerate}\n\\end{Lemma}\n\n\n\\begin{proof} This is yet another application of Lemma \\ref{dtlemma}. Consider first the prime $2$.\n If $4\\vert M_0$, then we are making no change at $2$, and both $f_i$ and $f_j$ have \nlevel exactly divisible by $4$. If $4\\nmid M_0$, then the $2$-part of $M_i$ and $M_j$ is bounded by $4$. \nSince we have $2\\in\\Sigma_i, \\Sigma_j$, we end up at level divisible by exactly 4. \n\n\nNow consider an odd prime $q$. If $q^2$ divides $M_0$, then each\nof the $g_i$ has the property that the $U_q$ eigenvalue is zero (since the central character is trivial) and $a(n, g_i)=0$ \nfor any $n$ divisible by $q$ in any case. Such primes are therefore excluded from the sets $\\Sigma_i,\\Sigma_j$, since\nno adjustment is needed in this case. In this situation, the prime $q$ does not divide either of $M_i\/M_0$ or \n$M_j\/M_0$ and the $q$-part of the levels is the $q$-part of $M_0$, which is the same in either case. \n\nNext, consider the case where $q\\vert M_0$ to the first power. In this case, $q^2$ may divide $M_i$ or $M_j$ or both.\nThe list of possible cases shows that $q^3$ does not divide either $M_i$ or $M_j$. If $M_i$ is divisible by $q^2$, \nwe do nothing. If $M_i$ is divisible by exactly $q$, then $q\\in\\Sigma_i$, and we end up with level $q^2$. In either case,\nwe get a level whose $q$-part is $q^2$. \n\nFinally, we have to deal with $q\\nmid M_0$. If $q^2$ divides $M_i$, then $q^2$ exactly divides $M_i$.\n We do nothing and we remain at level with $q$-part equal to $q^2$. \n If $q$ exactly divides $M_i$, then $q\\in\\Sigma_i$, and the $q$-part of the level goes up to \n$q^2$ once again. If $q\\nmid M_i$ then there are some sub-cases. 
If $q\\vert M_j$, then $q\\in \\Sigma_i$ and $f_i$\nhas level whose $q$-part is $q^2$, again. If $q\\nmid M_j$, then either $q$ is in both $\\Sigma_i$ and $\\Sigma_j$ or in neither. Then we get\neither no level at $q$ (and this case only occurs when $q$ is prime to everything in sight) or level $q^2$ at $q$ (in case\n$q$ is one of the supplementary primes that was added to both sets). \n \\end{proof}\n \n We can now state the theorem on congruences for the imprimitive symmetric square L-function, but we need to \n recall the hypotheses and notation.\n\nThus, suppose that $g_1, g_2$ are newforms of weight $k$, and \n level $M_1, M_2$ respectively, and that ${\\mathfrak p}$ is a prime\n of $\\bar{\\Q}$ with residue characteristic $p\\geq 5$ such that the Fourier\n coefficients $a(q, g_i)$ satisfy the congruence $a(q, g_1)\\equiv a(q, g_2)\\pmod{{\\mathfrak p}}$, for each prime \n $q\\nmid M_1M_2p$. Suppose also that the $g_i$ are ordinary at $p$, and that each $g_i$ has trivial \n central character, and the corresponding residual representation is absolutely irreducible and $p$-distinguished.\n Let $\\alpha_{i,p}$ denote the eigenvalue of the ${\\mathfrak p}$-stabilized newform associated to $g_i$.\n Let $\\psi$ denote an even character of conductor prime to $2pM_1M_2$, and let $\\eta$ denote a nontrivial\n Dirichlet character of $p$-power conductor. Let $\\Omega_1, \\Omega_2$ denote the canonical periods associated\n to $g_1, g_2$ and the prime ${\\mathfrak p}$, as above. Thus, up to a unit, $\\Omega_i = p^{1-k\/2}\\, \\frac{\\langle g_i, g_i \\rangle_{M_i}}{(g_i, g_i)_{M_i}}$.\n Let $n$ denote an odd integer in the range $1\\leq n\\leq k$. Let the sets $\\Sigma_1, \\Sigma_2$ be defined as before and recall that $\\Sigma=\\Sigma_1\\cup\\Sigma_2$. Let $f_1, f_2$ denote the imprimitive forms of level $N$, associated to the forms $g_1, g_2$, and the set\n $\\Sigma$. 
We assume also that the coefficient ring ${\\mathcal O}$ contains the values of the character $\\psi$. \n \n \\begin{Th}\\label{special values congruence} Let the hypotheses be as above. \n Then there exist units $u_i$, depending only on $g_i$ and $n$, such that we have the congruence \n \n $$u_1 \\cdot \\left( \\frac{p^{n-1}}{\\psi(p)\\alpha_{1,p}^2}\\right)^{m_\\chi} \n\\cdot\\Gamma(n)\\cdot G(\\overline{\\eta})\\cdot \\frac{D_{f_1}(\\chi,n)}{\\pi^n\\Omega_1} \\equiv\nu_2 \\cdot \\left( \\frac{p^{n-1}}{\\psi(p)\\alpha_{2,p}^2}\\right)^{m_\\chi} \n\\cdot \\Gamma(n) \\cdot G(\\overline{\\eta})\\cdot\\frac{ D_{f_2}(\\chi,n)}{\\pi^n\\Omega_2} \\pmod{{\\mathfrak p}}.\n$$\n \\end{Th}\n \n \\begin{proof} This follows from the continuity of the functional $S(N, {\\mathcal O})\\rightarrow {\\mathcal O} $ given by $ x \\mapsto \n ( x, \\mathcal{H}_\\chi(n)\\circ e)_N$. \n \\end{proof}\n \n \n\n\n\\subsection{The primitive L-function and $p$-adic interpolation}\n\\label{primitive-ss-Lfunction}\nWe now write down the relationships between the primitive and variously imprimitive L-functions, and \nthe interpolation properties that characterize the $p$-adic L-functions.\nFor notational\nsimplicity, let us fix the newform $g_0$, and write the level of $g_0$ as $M_0$. The corresponding $p$-stabilized newform will be denoted by $g$,\nand its level shall be denoted by $M$.\nFor each prime $q$, we have a complex\nrepresentation $\\pi_q$ of $\\op{GL}_2(\\mathbb{Q}_q)$ \nassociated to $g$. \nThe first task is to work out the Euler factors of \nthe symmetric square lift $\\Pi_q$ of $\\pi_q$ to $\\op{GL}_3$. This is all contained in \\cite{GJ78}, \nand is recapitulated in Section 1 of \\cite{schmidt88},\nespecially Lemmas 1.5 and 1.6,\nbut some translation is required. 
We notice first of all\nthat the representations $\\Pi$ and $\\Sigma$ considered by Schmidt\nare not exactly the symmetric square of \\cite{lz16}, and that his \nnormalization introduces an inverse when comparing\nwith the Euler product of Shimura considered here. The exact \nrelationship is given in the last line of page 603. For us, the\npoint is that the Euler factors of our $D_g(\\chi_0, s)$ coincide\nwith Schmidt's $L(s-k+1, \\Sigma\\otimes\\chi^{-1})$, for any \nprimitive (in our case even and non-quadratic) Dirichlet\ncharacter $\\chi_0$ with corresponding idele class character $\\chi$,\nat almost all primes. With this normalization in mind, one can\nread off the Euler factors at the bad primes for the automorphic\nrepresentation $\\Pi$ from Schmidt's Lemmas 1.5 and 1.6. To state \nthe result, let us write $\\mathscr{L}(r_{g_i}, \\chi, s)=\\prod_q\nP_q(r_{g_i}, \\chi, q^{-s})$ to denote the complex L-function\nassociated to the Galois representation $r_{g_i}\\otimes\\chi$. This function\nis denoted by $L(\\op{Sym}^2 g\\otimes\\chi, s)$ in \\cite{lz16}. \nFor all but finitely many primes $q$, we have \n$$P_q = \\left( (1-\\chi(q)\\alpha_q\\beta_q q^{-s})(1-\\chi(q)\\beta_q^2q^{-s})(1-\\chi(q)\\alpha_q^2q^{-s})\\right)^{-1}.$$\nSchmidt's formulae show, for any choice of the set $S_0$ of \nbad primes, that the quantity\n$$\\frac{D_f(\\chi, s)}{\\mathscr{L}(r_{g_i}, \\chi, s)}=\\prod_{q\\in S_0} P_q(\\chi, q^{-s})$$\nis a product of polynomials $P_q(\\chi, X)$ in the variables $X=q^{-s}$, \nfor $q\\in S_0$,\nwhose only zeroes lie on the line $s=k-1$ (compare \\cite{lz16}, Proposition 2.1.5). \n\nNow, if $q$ is any prime distinct from $p$, we may write \n$q=\\eta_1(q)\\eta_2(q)$, where $\\eta_1:\\mathbb{Z}_p^\\times\\rightarrow \\mu_{p-1}$ is\nthe Teichm\\\"uller character, and $\\eta_2$ is the projection to the group\n$1+p\\mathbb{Z}_p$. The quantity $\\eta_2(q)^s$ makes sense for all rational \nintegers $s$. 
Then, making the substitution $X=\\eta_2(q)^{-s}$, where the variable\n$s$ is a rational integer, in the polynomial $P_q(\\chi, X)$ mentioned above\ngives an Iwasawa function of $s\\in \\mathbb{Z}$ whose values coincide \nwith those of the complex polynomial $P_q(q^{-s})$ for all integers $s$ divisible by\n$p-1$. \n\nIn the setup of $p$-adic L-functions, we have $\\chi = \\psi\\eta$, where \n$\\psi$ has conductor prime to the level, and $\\eta$ has $p$-power \nconductor. A glance at the formulae in Schmidt shows that in fact\none has $P_q(\\psi, \\eta_2(q)^s) = P_q(\\psi\\eta_1^{-s}, q^s)$ for\nany rational integer value of $s$. More generally, if\n$\\eta$ is any character of $\\mathbb{Z}_p^\\times$ of the form $x \\mapsto\\eta'(x) x^{-s}$\nwhere $\\eta'$ has finite order and $s$ is a rational integer, then \nwe have $P_q(\\psi, \\eta(q)) = P_q(\\psi\\eta'\\eta_1(q)^{s}, q^{-s})$. Characters\nof the given form are dense in the group of continuous characters of $\\mathbb{Z}_p^\\times$,\nand if we restrict to the case where $s$ is divisible by $p-1$, then we get characters of \nthe Galois group of the cyclotomic $\\mathbb{Z}_p$-extension, by class field theory. Thus\nwe may view the polynomials $P_q$ as being elements of the completed group ring $\\Lambda\n=\\mathcal{O}[[\\mathbb{Z}_p]]$. Here we remark that we are identifying the additive group\n$\\mathbb{Z}_p$ with the multiplicative group $1+p\\mathbb{Z}_p$, via some choice\nof topological generator of the latter. \nIn general, given $\\lambda\\in\\Lambda$, and \nany continuous $\\mathbb{C}_p$-valued character $\\eta$ of $\\mathbb{Z}_p$,\nwe will write $\\lambda(\\eta)$ for the evaluation of $\\lambda$ at $\\eta$. \n\nRecall that $g$ is a $p$-stabilized newform of even weight $k$, with trivial nebentype character,\nand that $\\psi$ denotes an even non-quadratic character of conductor prime to\nthe level. 
As above, we write $\\eta$ to denote an even Dirichlet character of $p$-power\nconductor, and of $p$-power order (so that the tame part is trivial). Thus\n$\\eta$ may be identified with a character of $1+p\\mathbb{Z}_p$, as above. For each odd\ninteger $n$ in the range $1\\leq n\\leq k-1$, we write $\\eta_n$ for the character\nof $1+p\\mathbb{Z}_p$ given by $x\\mapsto \\eta(x)x^n$. \n\n\nThe primitive $p$-adic L-function associated to the newform $g_0$ of level $M_0$ \nand the representation\n$r_g\\otimes\\psi$ is an element $L^{\\op{an}}(r_g\\otimes\\psi)$ of $\\Lambda=\\mathcal{O}[[1+p\\mathbb{Z}_p]]$\ncharacterized by \n\\begin{equation}\n\\label{messy-definition}\n\\eta_n(L^{\\op{an}}(r_g\\otimes\\psi)) = \\frac{(-1)^{n-k+1}\\eta(-1)\\Gamma(n)}{{4^k}}\n\\times E_p(n, \\eta)\\frac{G(\\eta)}{(2\\pi i)^{n-k+1}}\\frac{\\mathscr{L}(r_g\\otimes \\psi\\eta_1^{-n}\\eta^{-1}, n)}{\\pi^{k-1}\\Omega_g}\n\\end{equation}\nfor $n$ odd, $1\\leq n\\leq k-1$, and $\\eta$ even. The period\nin the formula is given by\n$$\\Omega_g =p^{1-k\/2}\\frac{(g_0, g_0)_{M_0}}{\\langle g_0, g_0\\rangle_{M_0}}.$$ The Euler factor\n$E_p(n, \\eta)$ is given by\n$$E_p(n,\\eta) = (p^{n-1}\\psi(p)^{-1}\\alpha_p^{-2})^{m_\\chi}$$\nif $\\eta$ is nontrivial and has conductor $p^{m_\\chi}> 1$. If $\\eta$ is trivial and $g$ has level prime to $p$,\nthen \n$$E_p(n, \\eta) = (1- p^{n-1}\\psi(p)^{-1}\\alpha_p^{-2})(1-\\psi(p)p^{k-1-n})\n(1-\\psi(p)\\beta_p^2p^{-n}).$$\nA similar formula holds when $k=2$ and $g$ has level divisible by the first power of $p$; we omit it here, as we\ndo not need it. \n\nObserve\nthat our formula is precisely the same as that in \\cite{lz16}, except that \nwe have scaled by the constant factor $p^{1-k\/2}( g_0, g_0)_{M_0} = \\op{unit} \\cdot (g, g)_M$. It is clear\nthat if such a function exists, then it is characterized by the validity\nof the formula above, for any infinite collection of characters of the form\n$\\eta_n$, for varying $n$ and $\\eta$. 
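The uniqueness assertion in the last sentence comes down to a standard property of Iwasawa functions, which we record here as a brief sketch (using the identification of $\\Lambda$ with a one-variable power series ring made above, and writing $\\varpi$ for a uniformizer of $\\mathcal{O}$):

```latex
% Identify \Lambda = \mathcal{O}[[1+p\mathbb{Z}_p]] \simeq \mathcal{O}[[T]]
% by sending a fixed topological generator \gamma of 1+p\mathbb{Z}_p to 1+T;
% evaluating at a character \eta_n then means evaluating the power series at
% the point T = \eta_n(\gamma) - 1 of the open unit disc.
%
% If two elements of \Lambda take the same values at infinitely many such
% characters, their difference \lambda vanishes at infinitely many points of
% the open unit disc. By the Weierstrass preparation theorem,
\lambda \;=\; \varpi^{\mu}\, u(T)\, P(T),
\qquad u \in \Lambda^{\times}, \quad P(T) \text{ a distinguished polynomial},
% so a nonzero \lambda has at most \deg P zeroes in the disc; hence \lambda=0.
```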
\n\nNext we want to define the various imprimitive L-functions. Let $T$ denote any set of prime numbers.\nThe $T$-imprimitive $L$-function $\\mathscr{L}_T(r_g\\otimes\\psi, s)$ is defined by the Euler product (\\ref{shimura-euler-product}).\nThen $L_T^{\\op{an}}(r_g\\otimes\\psi)$ is an element of $\\Lambda$ \ncharacterized by the analogue of (\\ref{messy-definition}) where\none replaces $\\mathscr{L}(r_g\\otimes\\psi\\eta_1^{-n}\\eta^{-1}, n)$ with $\\mathscr{L}_T(r_g\\otimes\\psi\\eta_1^{-n}\\eta^{-1}, n)$. \n\n\n\nThe existence of $L^{\\op{an}}(r_g\\otimes\\psi)$ and $L^{\\op{an}}_T(r_g\\otimes\\psi)$ (the latter for $T$ the empty set)\nwas proven by Schmidt, under our hypotheses.\nHe states in his work that very similar results were obtained by Hida, but never\npublished. Schmidt first established the existence of the imprimitive L-function\nunconditionally \\cite{schmidt86}, and then proved the existence of the primitive \nL-function under\nsome conditions which are subsumed by our hypothesis that $\\psi$ be non-quadratic\n\\cite{schmidt88}. As we have remarked, he was unable to make precise the period, and stated\nhis results with a period that was only determined up to an unknown constant\nmultiple of $(g, g)_M$. He was therefore unable to deduce integrality. Schmidt did not \nconstruct $L^{\\op{an}}_T(r_g\\otimes\\psi)$ when $T\\neq\\emptyset$, but in fact that follows easily:\none simply multiplies the L-function for the empty set (which exists) by the appropriate Euler factors \nfrom (\\ref{shimura-euler-product}), each of which is represented by an element of $\\Lambda$. 
Thus we may assume the \nexistence of $L^{\\op{an}}_T(r_g\\otimes\\psi)$: it is an element of $\\Lambda\\otimes\\mathbb{Q}$ characterized by the formula\n\\begin{equation}\n\\label{messy-definition-2}\n\\eta_n(L^{\\op{an}}_T(r_g\\otimes\\psi)) = \\frac{(-1)^{n-k+1}\\eta(-1)\\Gamma(n)}{{4^k}}\n\\times E_p(n, \\eta)\\frac{G(\\eta)}{(2\\pi i)^{n-k+1}}\\frac{\\mathscr{L}_T(r_g\\otimes\\psi\\eta_1^{-n}\\eta^{-1}, n)}{\\pi^{k-1}\\Omega_g}\n\\end{equation}\nfor almost all characters $\\eta$ of $1+p\\mathbb{Z}_p$. \n\nThen, the relationship between\nthe primitive and imprimitive $p$-adic L-functions is given by\n\\begin{equation}\n\\label{prim-imprim}\nL^{\\op{an}}_{S_0}(r_g\\otimes\\psi) = \\prod_{q\\in S_0} P_q\\cdot L^{\\op{an}}(r_g\\otimes\\psi).\n\\end{equation}\nA priori, both L-functions above are elements of $\\Lambda\\otimes\\mathbb{Q}$.\n\nNow assume that we have $T=S_0$, where $S_0$ is a set associated to the choice of a set $\\Sigma$ of primes\nsatisfying the conditions in the introduction of this paper. Furthermore, assume that Hypothesis \\ref{hypothesis ihara}\nholds. \n\nWe can now give the proofs of the various remaining results on $p$-adic L-functions\nstated in the introduction. \n\nWe start with a simple lemma, which follows directly from the explicit\nformulae in \\cite{schmidt88}, or from \nthe observation there on page 605\nthat the polynomials $P_q$ all satisfy $P_q(0)=1$. 
\n\n\\begin{Lemma}\nFor any prime $q\\in S_0$, the element $P_q\\in\\Lambda$ has $\\mu$-invariant zero.\n\\end{Lemma}\n\n\\begin{Corollary} We have $\\mu_{S_0}^{\\op{an}}=0\\iff \\mu^{\\op{an}}=0$.\n\\end{Corollary}\n\nThis lemma implies Proposition \\ref{analytic-invariants-intro},\nsimply by taking $g=g_i$, and $\\sigma^{(q)}_i$ to be the degree of the polynomial $P_q$ \nassociated above to $g_i$ at $q$, and using the fact that the \n$\\mu$-invariant of $P_q$ is zero\nin the formula (\\ref{prim-imprim}).\n\nNext we deal with integrality properties, as stated in Theorem \\ref{integrality-thm-intro}.\nWe claim that in fact both $L^{\\op{an}}_{S_0}(r_g\\otimes\\psi) $ and $L^{\\op{an}}(r_g\\otimes\\psi)$\nlie in $\\Lambda$ and are integral. First consider the imprimitive case.\nThen it follows from the formulae (\\ref{integrality-formula-1})\nand (\\ref{integrality-formula-2}) that the right hand side of \n(\\ref{messy-definition-2}) is integral, for almost all characters $\\eta_n$. Then the \nWeierstrass preparation theorem, applied\nto $L^{\\op{an}}_{S_0}(r_g\\otimes\\psi)$, shows that the latter is an element\nof $\\Lambda$. As for the primitive case, it follows from the imprimitive case,\nthe relation (\\ref{prim-imprim}), and the fact that the polynomials $P_q$ are integral and\nhave $\\mu$-invariant zero. This proves Theorem \\ref{integrality-thm-intro}.\n\nFinally, we have to deal with congruences. Let $g_1, g_2$ be $p$-congruent newforms satisfying our running conditions.\nLet $S$ denote\nany set of primes containing $2$ and the primes dividing $M_1M_2$, and let\n$S_0=S\\backslash\\{p\\}$ be as above. We claim the following.\n\n\\begin{Proposition}\\label{p-adic LFs congruent} Let the notation be as above. 
Then we have\n $L_{S_0}^{\\op{an}}(r_{g_1}\\otimes\\psi\\eta) \\equiv u L_{S_0}^{\\op{an}}(r_{g_2}\\otimes\\psi\\eta)\\pmod{{\\mathfrak p}}$, where $u$ is a $p$-adic unit and the congruence\n is that of elements in the completed group algebra ${\\mathcal O}[[{\\mathbb{Z}}_p^\\times]]$. \n \\end{Proposition}\n \n \\begin{proof} Let $\\Sigma_1, \\Sigma_2$ denote the sets associated to $g_1, g_2$ in Section \\ref{congruence-section}. \n As we have remarked,\n$\\mathscr{L}_{\\Sigma_i}(r_{g_i}\\otimes\\psi\\eta, s) = \\mathscr{L}_{S_0}(r_{g_i}\\otimes\\psi\\eta, s) $ as complex L-functions, for each $i$. \nThen the result follows from the congruence of special values in Theorem \\ref{special values congruence}, and the Weierstrass preparation theorem, since\nwe have $D_{f_i}( \\chi, s) = \\mathscr{L}_{S_0}(r_{g_i}\\otimes\\chi, s)$ as complex $L$-functions, for $i=1,2$. \n \\end{proof}\n\nFinally, we observe that the analytic part of Theorem \\ref{intro-thm} follows immediately: if one of two congruent Iwasawa functions has $\\mu$-invariant zero, then so does the other, and in that case the two functions necessarily have the same $\\lambda$-invariant.\n\n\\section{Imprimitive Iwasawa Invariants: the algebraic side}\\label{s 3}\n\\par Throughout, let $p\\geq 5$ be a fixed prime and $g$ be a normalized Hecke eigencuspform of weight $k\\geq 2$ on the congruence group $\\Gamma_0(M)$. Denote by $L$ the number field generated by the Fourier coefficients of $g$. For each prime $q$, choose an embedding $\\iota_q:\\bar{\\mathbb{Q}}\\hookrightarrow \\bar{\\mathbb{Q}}_q$. Let $\\mathfrak{p}|p$ be the prime of $L$ such that the inclusion of $L$ in $L_\\mathfrak{p}$ is compatible with $\\iota_p$. Denote by $K$ the completion of $L$ at $\\mathfrak{p}$, and by $\\mathcal{O}$ the valuation ring of $K$. Associated with $g$ is the continuous Galois representation $\\rho_g:\\op{Gal}(\\bar{\\mathbb{Q}}\/\\mathbb{Q})\\rightarrow \\op{GL}_2(K)$. 
Let $\\op{V}_g\\simeq K^2$ be the underlying $2$-dimensional vector space on which $\\op{Gal}(\\bar{\\mathbb{Q}}\/\\mathbb{Q})$ acts via $K$-linear automorphisms. Fix a Galois stable $\\mathcal{O}$-lattice $\\op{T}_g$ inside $\\op{V}_g$. Let $\\mathbb{F}$ be the residue field of $\\mathcal{O}$. The mod-$\\mathfrak{p}$ reduction of $\\rho_g$ is denoted by\n\\[\\bar{\\rho}_g:\\op{G}_{\\mathbb{Q}}\\rightarrow \\op{GL}_2(\\mathbb{F}),\\]and it follows from the Brauer-Nesbitt theorem that the semi-simplification of $\\bar{\\rho}_g$ is independent of the choice of lattice $\\op{T}_g$.\nThroughout, we make the following assumptions on $g$:\n\\begin{enumerate}\n \\item $g$ is ordinary at $\\mathfrak{p}$,\n \\item $\\bar{\\rho}_g$ is absolutely irreducible.\n\\end{enumerate}\nSince $\\bar{\\rho}_g$ is absolutely irreducible, the choice of Galois stable lattice $\\op{T}_g$ is unique up to homothety.\nLetting $\\op{G}_q$ denote the Galois group $\\op{Gal}(\\bar{\\mathbb{Q}}_q\/\\mathbb{Q}_q)$, we note that the choice of embedding $\\iota_q$ prescribes an inclusion of $\\op{G}_q$ into the absolute Galois group $\\op{G}_{\\mathbb{Q}}$. Let $\\chi_{\\op{cyc}}:\\op{G}_p\\rightarrow \\mathcal{O}^\\times$ denote the $p$-adic cyclotomic character. Since $g$ is ordinary at $\\mathfrak{p}$, there is a short exact sequence \n\\[0\\rightarrow \\op{T}_g^+\\rightarrow \\op{T}_g\\rightarrow \\op{T}_g^-\\rightarrow 0\\] of $\\op{G}_p$-stable $\\mathcal{O}$-lattices such that there are unramified characters $\\gamma_1, \\gamma_2:\\op{G}_{p}\\rightarrow \\mathcal{O}^{\\times}$ for which\n\\[\\op{T}_g^+\\simeq \\mathcal{O}(\\chi_{\\op{cyc}}^{k-1} \\gamma_1) \\text{ and } \\op{T}_g^-\\simeq \\mathcal{O}(\\gamma_2).\\]Fix an even character $\\psi$ of finite order, with conductor $c_\\psi$ coprime to $Mp$. 
Consider the lattice $\\textbf{T}_g:=\\op{Sym}^2 \\op{T}_g$ and the symmetric square representation \n\\[r_g\\otimes \\psi:=\\op{Sym}^2(\\rho_g)\\otimes \\psi:\\op{Gal}(\\bar{\\mathbb{Q}}\/\\mathbb{Q})\\rightarrow \\op{GL}_3(\\mathcal{O}).\\]\nSet $\\mathbf{V}_g:=\\textbf{T}_g\\otimes \\mathbb{Q}_p$ and $\\mathbf{A}_g:=\\mathbf{V}_g\/\\textbf{T}_g$.\nThe representation $\\textbf{T}_g$ is $\\mathfrak{p}$-ordinary, i.e., is equipped with a filtration \n\\[\\textbf{T}_g=\\mathcal{F}^0(\\textbf{T}_g)\\supset \\mathcal{F}^1(\\textbf{T}_g)\\supset \\mathcal{F}^2(\\textbf{T}_g)\\supset \\mathcal{F}^3(\\textbf{T}_g)=0.\\] For $j=0,1,2$, there are unramified characters $\\delta_j$ such that \\[\\begin{split}&\\op{gr}_0(\\textbf{T}_g)\\simeq \\mathcal{O}(\\chi_{\\op{cyc}}^{2k-2}\\delta_0),\\\\\n&\\op{gr}_1(\\textbf{T}_g)\\simeq \\mathcal{O}(\\chi_{\\op{cyc}}^{k-1}\\delta_1),\\\\\n& \\op{gr}_2(\\textbf{T}_g)\\simeq \\mathcal{O}(\\delta_2).\n\\end{split}\\]\n\n\\par With this notation in place, we consider Hecke eigencuspforms $g_1$ and $g_2$ of the same weight $k\\geq 2$ and trivial nebentype character. Setting $L$ to be the number field generated by the Fourier coefficients of $g_1$ and $g_2$, let $\\mathfrak{p}$ be the prime of $L$ above $p$ corresponding to the choice of $\\iota_p$. 
Assume that $\\bar{\\rho}_{g_i}$ is absolutely irreducible and that the following equivalent conditions are satisfied.\n\\begin{enumerate}\n \\item The residual representations are isomorphic: $\\bar{\\rho}_{g_1}\\simeq \\bar{\\rho}_{g_2}$.\n \\item For all primes $q\\neq p$ coprime to the levels of $g_1$ and $g_2$, the Fourier coefficients satisfy the congruence\n \\[a(q,g_1)\\equiv a(q,g_2)\\pmod{\\varpi}.\\]\n\\end{enumerate}\nLetting $M_i$ denote the level of $g_i$, assume moreover that the conductor of $\\psi$ is coprime to $M_1 M_2 p$.\nNote that $\\textbf{T}_{g_i}$ fits into a short exact sequence \n\\[0\\rightarrow \\textbf{T}_{g_i}^+\\rightarrow \\textbf{T}_{g_i}\\rightarrow \\textbf{T}_{g_i}^-\\rightarrow 0,\\]where $\\textbf{T}_{g_i}^+=\\mathcal{F}^1(\\textbf{T}_{g_i})$ and $\\textbf{T}_{g_i}^-=\\textbf{T}_{g_i}\/\\textbf{T}_{g_i}^+$. Set $\\mathbf{A}_i$ (resp. $\\mathbf{A}_i^{\\pm}$) to denote the $p$-divisible Galois module $\\textbf{T}_{g_i}\\otimes \\mathbb{Q}_p\/\\mathbb{Z}_p$ (resp. $\\textbf{T}_{g_i}^{\\pm}\\otimes \\mathbb{Q}_p\/\\mathbb{Z}_p$). Note that $\\mathbf{A}_i\\simeq (K\/\\mathcal{O})^d$, where $d=3$. Letting $d^{\\pm}$ be the dimensions (over $K$) of the $\\pm$-eigenspaces for complex conjugation on $\\mathbf{V}_{g_i}$, we have that $d^+=2$, $d^-=1$, and $\\mathbf{A}_i^{\\pm}\\simeq (K\/\\mathcal{O})^{d^{\\pm}}$. Note however that the action of complex conjugation on $\\mathbf{A}_i^{\\pm}$ is not prescribed.\n\n\\par Let $\\mathbb{Q}_n$ be the subfield of $\\mathbb{Q}(\\mu_{p^{n+1}})$ of degree $p^n$ over $\\mathbb{Q}$ and set $\\mathbb{Q}_{\\op{cyc}}:=\\bigcup_{n\\geq 0} \\mathbb{Q}_n$. Letting $\\Gamma:=\\op{Gal}(\\mathbb{Q}_{\\op{cyc}}\/\\mathbb{Q})$, fix an isomorphism $\\Gamma\\xrightarrow{\\sim} \\mathbb{Z}_p$. The extension $\\mathbb{Q}_{\\op{cyc}}$ is the cyclotomic $\\mathbb{Z}_p$-extension of $\\mathbb{Q}$. 
The Iwasawa algebra $\\Lambda$ is defined as the inverse limit $\\Lambda:=\\varprojlim_n \\mathbb{Z}_p[\\op{Gal}(\\mathbb{Q}_n\/\\mathbb{Q})]$, and is isomorphic to the formal power series ring $\\mathbb{Z}_p\\llbracket T\\rrbracket$. For $i=1,2$, letting $M_i$ be the level of $g_i$, fix the set $S$ to consist of the primes $q$ that divide $c_\\psi M_1M_2p$. In what follows, set $\\mathbf{A}_{i,\\psi}:=\\mathbf{A}_i\\otimes \\psi$. The $p$-primary Selmer group $\\op{Sel}_{p^\\infty}(\\mathbf{A}_{i,\\psi}\/\\mathbb{Q}_{\\op{cyc}})$ is defined as the kernel of the following restriction map\n\\[\n\\lambda_i:H^1\\left(\\mathbb{Q}_{S}\/\\mathbb{Q}_{\\op{cyc}}, \\mathbf{A}_{i,\\psi}\\right)\\rightarrow \\bigoplus_{q\\in S}\\mathcal{H}_q(\\mathbf{A}_{i,\\psi}\/\\mathbb{Q}_{\\op{cyc}}).\n\\]\nHere, for each prime $q\\neq p$, the local term is defined as follows\n\\[\n\\mathcal{H}_q(\\mathbf{A}_{i,\\psi}\/\\mathbb{Q}_{\\op{cyc}}) = \\bigoplus_{\\eta|q} H^1( {\\mathbb{Q}}_{\\op{cyc},\\eta}, \\mathbf{A}_{i,\\psi}),\n\\]\nwhere $\\mathbb{Q}_{\\op{cyc}, \\eta}$ is the union of all completions of number fields contained in $\\mathbb{Q}_{\\op{cyc}}$ at the prime $\\eta$.\nThe definition at the prime $q=p$ is more subtle: set\n\\[\n\\mathcal{H}_p(\\mathbf{A}_{i,\\psi}\/\\mathbb{Q}_{\\op{cyc}}) = H^1( \\mathbb{Q}_{\\op{cyc}, \\eta_p}, \\mathbf{A}_{i,\\psi})\/\\mathcal{L}_{\\eta_p}\n\\]\nwith \n\\[\n\\mathcal{L}_{\\eta_p} = \\ker\\left( H^1( \\mathbb{Q}_{\\op{cyc}, \\eta_p}, \\mathbf{A}_{i,\\psi}) \\rightarrow H^1( I_{\\eta_p}, \\mathbf{A}_{i,\\psi}^{-})\\right).\n\\]\nHere $\\eta_p$ is the unique prime of $\\mathbb{Q}_{\\op{cyc}}$ above $p$, and $I_{\\eta_p}$ denotes the inertia group at $\\eta_p$. 
The following is a special case of \\cite[Conjecture 4.1]{CS}.\n\\begin{Conjecture}[Coates-Schmidt]\nThe Selmer group $\\op{Sel}_{p^\\infty}(\\mathbf{A}_{i,\\psi}\/\\mathbb{Q}_{\\op{cyc}})$ is a cotorsion $\\Lambda$-module.\n\\end{Conjecture}\nNote that it is crucial that $\\psi$ is even. This conjecture has been settled by Loeffler and Zerbes in \\cite{lz16}.\n\\begin{Proposition}\nLet $\\mathbf{A}_{i,\\psi}$ be as above. The localization map $\\lambda_i$ is surjective.\n\\end{Proposition}\n\\begin{proof}\nWe let $\\textbf{T}_{i,\\psi}^*:=\\op{Hom}(\\textbf{T}_{g_i}\\otimes \\psi, \\mu_{p^{\\infty}})$. Note that $\\bar{\\rho}_{g_i}$ is assumed to be irreducible as a Galois module. It is easy to show that $H^0(\\mathbb{Q}, \\textbf{T}_{i,\\psi}^*)=0$. Since $\\op{Gal}(\\mathbb{Q}_{\\op{cyc}}\/\\mathbb{Q})$ is pro-$p$, it follows that $H^0(\\mathbb{Q}_{\\op{cyc}}, \\textbf{T}_{i,\\psi}^*)=0$ as well, and is thus in particular finite. The result follows from \\cite[Proposition 2.1]{GV00}.\n\\end{proof}\nSet $S_0:=S\\backslash \\{p\\}$ and define the $S_0$-imprimitive Selmer group to be the Selmer group obtained by imposing a condition only at $p$.\n\n\\begin{Definition}\nThe \\emph{imprimitive Selmer group} is defined by:\n\\[\n\\op{Sel}_{p^\\infty}^{S_0}(\\mathbf{A}_{i,\\psi}\/\\mathbb{Q}_{\\op{cyc}}) = \\ker\\left( H^1\\left( \\mathbb{Q}_{S}\/\\mathbb{Q}_{\\op{cyc}}, \\mathbf{A}_{i,\\psi}\\right)\\xrightarrow{\\lambda_i^p} \\mathcal{H}_p(\\mathbf{A}_{i,\\psi}\/\\mathbb{Q}_{\\op{cyc}})\\right).\n\\]\n\\end{Definition}\nSince the map $\\lambda_i$ defining the Selmer group is surjective, it follows that\n\\begin{equation}\n\\label{quotient}\n\\op{Sel}_{p^\\infty}^{S_0}(\\mathbf{A}_{i,\\psi}\/\\mathbb{Q}_{\\op{cyc}})\/\\op{Sel}_{p^\\infty}(\\mathbf{A}_{i,\\psi}\/\\mathbb{Q}_{\\op{cyc}})\\simeq \\bigoplus_{q\\in S_0}\\mathcal{H}_q(\\mathbf{A}_{i,\\psi}\/\\mathbb{Q}_{\\op{cyc}}).\n\\end{equation}\n\\begin{Lemma}\\label{local mu is 0}\nLet $q\\neq p$. Then 
$\\mathcal{H}_q(\\mathbf{A}_{i,\\psi}\/\\mathbb{Q}_{\\op{cyc}})$ is a cofinitely generated and cotorsion $\\Lambda$-module with $\\mu$-invariant equal to $0$. \n\\end{Lemma}\n\\begin{proof}\nIt suffices to show that $\\mathcal{H}_q(\\mathbf{A}_{i,\\psi}\/\\mathbb{Q}_{\\op{cyc}})$ is cofinitely generated as a $\\mathbb{Z}_p$-module, or equivalently, that the $\\mathfrak{p}$-torsion subgroup $\\mathcal{H}_q(\\mathbf{A}_{i,\\psi}\/\\mathbb{Q}_{\\op{cyc}})[\\mathfrak{p}]$ is finite. Consider the short exact sequence\n\\[0\\rightarrow \\bigoplus_{\\eta|q} \\frac{H^0(\\mathbb{Q}_{\\op{cyc}, \\eta}, \\mathbf{A}_{i,\\psi})}{p\\left( H^0(\\mathbb{Q}_{\\op{cyc}, \\eta}, \\mathbf{A}_{i,\\psi})\\right)}\\rightarrow \\bigoplus_{\\eta|q} H^1( \\mathbb{Q}_{\\op{cyc}, \\eta}, \\mathbf{A}_{i,\\psi}[\\mathfrak{p}]) \\rightarrow \\mathcal{H}_q(\\mathbf{A}_{i,\\psi}\/\\mathbb{Q}_{\\op{cyc}})[\\mathfrak{p}]\\rightarrow 0.\\]\nThe set of primes $\\eta|q$ of $\\mathbb{Q}_{\\op{cyc}}$ is finite and so is $ H^1( \\mathbb{Q}_{\\op{cyc}, \\eta}, \\mathbf{A}_{i,\\psi}[\\mathfrak{p}])$. The result follows.\n\\end{proof}\nLet $\\sigma_i^{(q)}$ denote the $\\mathbb{Z}_p$-corank of $\\mathcal{H}_q(\\mathbf{A}_{i,\\psi}\/\\mathbb{Q}_{\\op{cyc}})$ for $q\\in S_0$.\nSet $\\lambda^{S_0}(\\mathbf{A}_{i,\\psi}\/\\mathbb{Q}_{\\op{cyc}})$ to be the $\\lambda$-invariant of $\\op{Sel}_{p^\\infty}^{S_0}(\\mathbf{A}_{i,\\psi}\/\\mathbb{Q}_{\\op{cyc}})$. 
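As an aside, we briefly recall the standard structure theory of $\\Lambda$-modules that underlies the invariants just introduced; this is a sketch of well-known facts, with $\\varpi$ a uniformizer of $\\mathcal{O}$:

```latex
% For a cofinitely generated cotorsion \Lambda-module M, the Pontryagin dual
% M^{\vee} is pseudo-isomorphic to an elementary module:
M^{\vee} \;\sim\; \bigoplus_{i} \Lambda\/(\varpi^{\mu_i})
\;\oplus\; \bigoplus_{j} \Lambda\/(f_j(T)^{a_j}),
% with each f_j(T) distinguished and irreducible. One then sets
\mu(M) = \sum_{i} \mu_i, \qquad \lambda(M) = \sum_{j} a_j \deg f_j .
% In particular, \mu(M) = 0 if and only if M^{\vee} is finitely generated
% over \mathcal{O}, and in that case \lambda(M) agrees with the
% \mathbb{Z}_p-corank of M (up to the factor [\mathcal{O}:\mathbb{Z}_p]
% when \mathcal{O} is larger than \mathbb{Z}_p).
```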
It follows from the structure theory of $\\Lambda$-modules that \n\\[\\lambda^{S_0}(\\mathbf{A}_{i,\\psi}\/\\mathbb{Q}_{\\op{cyc}})=\\op{corank}_{\\mathbb{Z}_p} \\left(\\op{Sel}_{p^\\infty}^{S_0}(\\mathbf{A}_{i,\\psi}\/\\mathbb{Q}_{\\op{cyc}})\\right).\\]\nIt follows from \\eqref{quotient} that the following relation is satisfied:\n\\begin{equation}\n\\label{relating im primitive and classical lambda invariant}\n\\lambda^{S_0}(\\mathbf{A}_{i,\\psi}\/\\mathbb{Q}_{\\op{cyc}})=\n\\lambda(\\mathbf{A}_{i,\\psi}\/\\mathbb{Q}_{\\op{cyc}}) + \\sum_{q\\in S_0} \\sigma_i^{(q)}.\n\\end{equation}\nAnalogous to the classical and imprimitive Selmer groups, we also define the \\emph{reduced} classical and imprimitive Selmer groups, which we denote by $\\op{Sel}_{p^\\infty}(\\mathbf{A}_{i,\\psi}[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}})$ and $\\op{Sel}_{p^\\infty}^{S_0}(\\mathbf{A}_{i,\\psi}[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}})$, respectively. \n\\par For $q\\in S_0$ set\n\\[\n\\mathcal{H}_q(\\mathbf{A}_{i,\\psi}[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}}) := \\bigoplus_{q^\\prime|q} H^1\\left( {\\mathbb{Q}_{\\op{cyc}}}_{,q^\\prime}, \\mathbf{A}_{i,\\psi}[\\mathfrak{p}]\\right),\n\\]\nand for $q=p$, set\n\\[\\mathcal{H}_p(\\mathbf{A}_{i,\\psi}[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}}):=\\bigoplus_{q'|p} H^1({\\mathbb{Q}_{\\op{cyc}}}_{,q^\\prime}, \\mathbf{A}_{i,\\psi}[\\mathfrak{p}])\/\\overline{L}_{q'},\\] where \n\\[\\overline{L}_{q'}:=\\ker\\left( H^1\\left( {\\mathbb{Q}_{\\op{cyc}}}_{,q^\\prime}, \\mathbf{A}_{i,\\psi}[\\mathfrak{p}]\\right) \\rightarrow H^1\\left( I_{q^\\prime}, \\mathbf{A}_{i,\\psi}^-[\\mathfrak{p}]\\right)\\right).\\]\n\\begin{Definition}\nThe reduced imprimitive Selmer group is defined as follows:\n\\[\\op{Sel}_{p^\\infty}^{S_0}(\\mathbf{A}_{i,\\psi}[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}}):=\\op{ker} \\left(H^1\\left( \\mathbb{Q}_{S}\/\\mathbb{Q}_{\\op{cyc}}, \\mathbf{A}_{i,\\psi}[\\mathfrak{p}]\\right)\\xrightarrow{\\overline{\\theta}_0} 
\\mathcal{H}_p(\\mathbf{A}_{i,\\psi}[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}})\\right)\\]\n\\end{Definition}\n\n\n\\begin{Proposition}\n\\label{Kim 2.10}\nFor $i=1,2$, we have a natural isomorphism\n\\[\n\\op{Sel}_{p^\\infty}^{S_0}(\\mathbf{A}_{i,\\psi}[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}}) \\simeq \\op{Sel}_{p^\\infty}^{S_0}(\\mathbf{A}_{i,\\psi}\/\\mathbb{Q}_{\\op{cyc}})[\\mathfrak{p}].\n\\]\n\\end{Proposition}\n\\begin{proof}\nWe consider the diagram relating the two Selmer groups\n\\[\n\\begin{tikzcd}[column sep = small, row sep = large]\n0\\arrow{r} & \\op{Sel}_{p^\\infty}^{S_0}(\\mathbf{A}_{i,\\psi}[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}}) \\arrow{r} \\arrow{d}{f} & H^1(\\mathbb{Q}_{S}\/\\mathbb{Q}_{\\op{cyc}}, \\mathbf{A}_{i,\\psi}[\\mathfrak{p}]) \\arrow{r} \\arrow{d}{g} & \\op{im} \\overline{\\theta}_0 \\arrow{r} \\arrow{d}{h} & 0\\\\\n0\\arrow{r} & \\op{Sel}_{p^\\infty}^{S_0}(\\mathbf{A}_{i,\\psi}\/\\mathbb{Q}_{\\op{cyc}}) [\\mathfrak{p}] \\arrow{r} & H^1(\\mathbb{Q}_S\/\\mathbb{Q}_{\\op{cyc}}, \\mathbf{A}_{i,\\psi})[\\mathfrak{p}] \\arrow{r} &\\left(\\op{im} \\theta_0\\right)[\\mathfrak{p}]\\arrow{r} & 0,\n\\end{tikzcd}\\]\nwhere the vertical maps are induced by the Kummer sequence. 
Note that \\[H^0(\\mathbb{Q},\\mathbf{A}_{i,\\psi}[\\mathfrak{p}])=H^0(\\mathbb{Q}_{\\op{cyc}},\\mathbf{A}_{i,\\psi}[\\mathfrak{p}])^{\\Gamma}=0.\\]\nSince $\\mathbb{Q}_{\\op{cyc}}\/\\mathbb{Q}$ is a pro-$p$ extension, we deduce that $H^0(\\mathbb{Q}_{\\op{cyc}}, \\mathbf{A}_{i,\\psi})=0$ and therefore $g$ is injective.\nOn the other hand, it is clear that $g$ is surjective.\n\nIt only remains to show that $h$ is injective.\nFor $q=p$, denote by $\\iota_q$ the natural map\n\\[\\iota_q: \\mathcal{H}_q(\\mathbf{A}_{i,\\psi}[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}})\\rightarrow \\mathcal{H}_q(\\mathbf{A}_{i, \\psi}\/\\mathbb{Q}_{\\op{cyc}})[\\mathfrak{p}].\\]\nConsider the commutative square with injective horizontal maps\n\\[\n\\begin{tikzcd}[column sep = small, row sep = large]\n & \\mathcal{H}_q(\\mathbf{A}_{i, \\psi}[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}}) \\arrow{r} \\arrow{d}{\\iota_q} & \\bigoplus_{q'|q} H^1\\left( I_{q^\\prime}, \\mathbf{A}_{i, \\psi}^-[\\mathfrak{p}]\\right) \\arrow{d}{j_q} \\\\\n & \\mathcal{H}_q(\\mathbf{A}_{i, \\psi}\/\\mathbb{Q}_{\\op{cyc}})[\\mathfrak{p}]\\arrow{r} & \\bigoplus_{q'|q} H^1\\left( I_{q^\\prime}, \\mathbf{A}_{i, \\psi}^-\\right)[\\mathfrak{p}].\n\\end{tikzcd}\\]\nSince $\\mathbf{A}_{i, \\psi}^-$ is unramified at $p$, it follows that $H^0(I_{q'}, \\mathbf{A}_{i, \\psi}^-)=\\mathbf{A}_{i, \\psi}^-$ is divisible.\nThe kernel of the map \n\\[j_q: H^1(I_{q'}, \\mathbf{A}_{i, \\psi}^-[\\mathfrak{p}])\\rightarrow H^1(I_{q'}, \\mathbf{A}_{i, \\psi}^-)[\\mathfrak{p}]\\] is $H^0(I_{q'}, \\mathbf{A}_{i, \\psi}^-)\/p=0$. Hence $j_q$ is injective, and therefore so are $\\iota_q$ and $h$.\n\\end{proof}\n\n\\begin{Lemma}\nThe isomorphism $ \\mathbf{A}_{1,\\psi}[\\mathfrak{p}]\\simeq \\mathbf{A}_{2,\\psi}[\\mathfrak{p}]$ of Galois modules induces an isomorphism of residual Selmer groups \n\\[\n\\op{Sel}_{p^\\infty}^{S_0}(\\mathbf{A}_{1,\\psi}[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}})\\simeq 
\\op{Sel}_{p^\\infty}^{S_0}(\\mathbf{A}_{2,\\psi}[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}}).\n\\]\n\\end{Lemma}\n\\begin{proof}\nNote that the $\\op{G}_{\\mathbb{Q}_p}$-action on $\\mathbf{A}_{i, \\psi}^+[\\mathfrak{p}]$ is ramified and that on $\\mathbf{A}_{i, \\psi}^-[\\mathfrak{p}]$ is via an unramified character. Let $\\Phi:\\mathbf{A}_{1,\\psi}[\\mathfrak{p}]\\xrightarrow{\\sim} \\mathbf{A}_{2,\\psi}[\\mathfrak{p}]$ be a choice of isomorphism of Galois modules; it is easy to see that $\\Phi$ induces an isomorphism \n\\[\\Phi:\\mathbf{A}_{1,\\psi}^+[\\mathfrak{p}]\\xrightarrow{\\sim} \\mathbf{A}_{2,\\psi}^+[\\mathfrak{p}].\\] As a result, we have an isomorphism of $\\op{G}_{\\mathbb{Q}_p}$-modules $\\mathbf{A}_{1,\\psi}^-[\\mathfrak{p}]\\simeq \\mathbf{A}_{2,\\psi}^-[\\mathfrak{p}]$.\nClearly, $\\Phi$ induces an isomorphism $H^1(\\mathbb{Q}_S\/\\mathbb{Q}_{\\op{cyc}}, \\mathbf{A}_{1,\\psi}[\\mathfrak{p}])\\xrightarrow{\\sim} H^1(\\mathbb{Q}_S\/\\mathbb{Q}_{\\op{cyc}}, \\mathbf{A}_{2,\\psi}[\\mathfrak{p}])$.\nIt suffices to show that for $q\\in S$, the isomorphism $\\Phi:\\mathbf{A}_{1,\\psi}[\\mathfrak{p}]\\xrightarrow{\\sim} \\mathbf{A}_{2,\\psi}[\\mathfrak{p}]$ induces an isomorphism \n\\[\n\\mathcal{H}_q(\\mathbf{A}_{1,\\psi}[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}})\\xrightarrow{\\sim} \\mathcal{H}_q(\\mathbf{A}_{2,\\psi}[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}}).\n\\]\nThis is clear for $q\\neq p$.\nFor $q=p$, this follows from the fact that $\\Phi$ induces an isomorphism $\\mathbf{A}_{1,\\psi}^-[\\mathfrak{p}]\\xrightarrow{\\sim} \\mathbf{A}_{2,\\psi}^-[\\mathfrak{p}]$.\n\\end{proof} \n\n\\begin{Corollary}\n\\label{p-torsion of Sigma_0 fine selmer are iso}\nThe isomorphism $\\mathbf{A}_{1,\\psi}[\\mathfrak{p}]\\simeq \\mathbf{A}_{2,\\psi}[\\mathfrak{p}]$ of Galois modules induces an isomorphism \\[\\op{Sel}_{p^\\infty}^{S_0}(\\mathbf{A}_{1,\\psi}\/\\mathbb{Q}_{\\op{cyc}})[\\mathfrak{p}]\\simeq 
\\op{Sel}_{p^\\infty}^{S_0}(\\mathbf{A}_{2,\\psi}\/\\mathbb{Q}_{\\op{cyc}})[\\mathfrak{p}].\\]\n\\end{Corollary}\n\n\n\\begin{Lemma}\n\\label{lemma: the two mus are the same}\nThe $\\mu$-invariant of the Selmer group $\\op{Sel}_{p^\\infty}(\\mathbf{A}_{i, \\psi}\/\\mathbb{Q}_{\\op{cyc}})$ coincides with that of $\\op{Sel}_{p^\\infty}^{S_0}(\\mathbf{A}_{i, \\psi}\/\\mathbb{Q}_{\\op{cyc}})$, i.e.,\n\\[\\mu(\\mathbf{A}_{i, \\psi}\/\\mathbb{Q}_{\\op{cyc}})=\\mu^{S_0}(\\mathbf{A}_{i, \\psi}\/\\mathbb{Q}_{\\op{cyc}}).\\]\n\\end{Lemma}\n\\begin{proof}\nThe result follows from Lemma \\ref{local mu is 0}, which states that $\\mathcal{H}_q(\\mathbf{A}_{i, \\psi}\/\\mathbb{Q}_{\\op{cyc}})$ has $\\mu$-invariant zero for $q\\neq p$.\n\\end{proof}\n\\begin{Proposition}\\label{prop 3.10}\nLet $p\\geq 5$ be a prime and let $g_1$ and $g_2$ be $\\mathfrak{p}$-congruent $p$-ordinary Hecke eigencuspforms with trivial nebentype character. Let $N_i$ be the level of $g_i$. Recall that $\\mathbf{A}_{i, \\psi}$ is the $p$-divisible symmetric square representation associated with $g_i$. Let $S_0$ be a finite set of primes containing those dividing $N_1N_2$, with $p\\notin S_0$. Then, we have that \n\\[\\mu(\\mathbf{A}_{1,\\psi}\/\\mathbb{Q}_{\\op{cyc}})=0\\Leftrightarrow \\mu(\\mathbf{A}_{2,\\psi}\/\\mathbb{Q}_{\\op{cyc}})=0.\\]Moreover, if these $\\mu$-invariants are $0$, then the imprimitive $\\lambda$-invariants coincide, i.e.,\n\\[\\lambda^{S_0}(\\mathbf{A}_{1,\\psi}\/\\mathbb{Q}_{\\op{cyc}})=\\lambda^{S_0}(\\mathbf{A}_{2,\\psi}\/\\mathbb{Q}_{\\op{cyc}}).\\] This translates to the following relationship between the $\\lambda$-invariants:\n\\[\\lambda(\\mathbf{A}_{2,\\psi}\/\\mathbb{Q}_{\\op{cyc}})-\\lambda(\\mathbf{A}_{1,\\psi}\/\\mathbb{Q}_{\\op{cyc}})=\\sum_{q\\in S_0} \\left(\\sigma_1^{(q)}-\\sigma_2^{(q)}\\right),\\]\nwhere $\\sigma_i^{(q)}$ is the $\\mathbb{Z}_p$-corank of $\\mathcal{H}_q(\\mathbf{A}_{i, \\psi}\/\\mathbb{Q}_{\\op{cyc}})$. 
\n\\end{Proposition}\n\\begin{proof}\nSet $M_i$ to denote $\\op{Sel}_{p^\\infty}^{S_0}(\\mathbf{A}_{i, \\psi}\/\\mathbb{Q}_{\\op{cyc}})$.\nLemma \\ref{lemma: the two mus are the same} asserts that the $\\mu$-invariant of $M_i$ coincides with $\\mu(\\mathbf{A}_{i, \\psi}\/\\mathbb{Q}_{\\op{cyc}})$.\nTherefore, $\\mu(\\mathbf{A}_{i, \\psi}\/\\mathbb{Q}_{\\op{cyc}})=0$ if and only if $M_i$ is cofinitely generated as a $\\mathbb{Z}_p$-module.\nNote that $M_i$ is cofinitely generated as a $\\mathbb{Z}_p$-module if and only if $M_i[\\mathfrak{p}]$ has finite cardinality.\nCorollary \\ref{p-torsion of Sigma_0 fine selmer are iso} asserts that $M_1[\\mathfrak{p}]\\simeq M_2[\\mathfrak{p}]$; thus,\n\\[\\mu(\\mathbf{A}_{1,\\psi}\/\\mathbb{Q}_{\\op{cyc}})=0\\Leftrightarrow \\mu(\\mathbf{A}_{2,\\psi}\/\\mathbb{Q}_{\\op{cyc}})=0.\\]\n\\par Assume that $M_1[\\mathfrak{p}]$ (or equivalently $M_2[\\mathfrak{p}]$) is finite.\nIt follows from \\cite[Proposition 2.5]{GV00} that $M_i$ has no proper $\\Lambda$-submodules of finite index.\nIt is an easy exercise to show that $M_i$ therefore is a cofree $\\mathbb{Z}_p$-module.\nTherefore, $M_i\\simeq (\\mathbb{Q}_p\/\\mathbb{Z}_p)^{\\lambda_i}$, where $\\lambda_{i}:=\\lambda^{S_0}(\\mathbf{A}_{i, \\psi}\/\\mathbb{Q}_{\\op{cyc}})$.\nAs a result,\n\\[\\lambda^{S_0}(\\mathbf{A}_{i, \\psi}\/\\mathbb{Q}_{\\op{cyc}})=\\op{dim}_{\\mathbb{F}_p} M_i[\\mathfrak{p}],\\] and the isomorphism $M_1[\\mathfrak{p}]\\simeq M_2[\\mathfrak{p}]$ implies that \n\\[\\lambda^{S_0}(\\mathbf{A}_{1,\\psi}\/\\mathbb{Q}_{\\op{cyc}})=\\lambda^{S_0}(\\mathbf{A}_{2,\\psi}\/\\mathbb{Q}_{\\op{cyc}}).\\]\n\n\\end{proof}\n\n\\par Let $g_1$ and $g_2$ be $\\mathfrak{p}$-congruent Hecke eigencuspforms satisfying the conditions stated in the introduction. 
We write $\\mu^{\\op{alg}}_{S_0}(r_{g_i}\\otimes \\psi)$ and $\\lambda^{\\op{alg}}_{S_0}(r_{g_i}\\otimes \\psi)$ for the Iwasawa invariants of the imprimitive Selmer group $\\op{Sel}_{p^\\infty}^{S_0}(\\mathbf{A}_{i,\\psi}\/\\mathbb{Q}_{\\op{cyc}})$ obtained by dropping conditions at the primes $q\\in S_0$. Denote by $\\mu^{\\op{an}}_{S_0}(r_{g_i}\\otimes \\psi)$ and $\\lambda^{\\op{an}}_{S_0}(r_{g_i}\\otimes \\psi)$ the Iwasawa invariants of the $S_0$-imprimitive $p$-adic L-function $L^{\\op{an}}_{S_0}(r_{g_i}\\otimes \\psi)$ obtained by dropping Euler factors at primes $q\\in S_0$.\n\n\\begin{Proposition}\\label{boring prop}\nLet $g_1$ and $g_2$ be as above and $\\ast\\in \\{\\op{an}, \\op{alg}\\}$. Then, $\\mu^{\\ast}(r_{g_i}\\otimes \\psi)=\\mu^{\\ast}_{S_0}(r_{g_i}\\otimes \\psi)$ for $i=1,2$ and \n\\[\\lambda^{\\ast}_{S_0}(r_{g_i}\\otimes \\psi)=\\lambda^{\\ast}(r_{g_i}\\otimes \\psi)+\\sum_{q\\in S_0} \\sigma_q^{(i)}.\\]\n\\end{Proposition}\n\n\\begin{proof}\nWhen $\\ast=\\op{alg}$, the result follows from Lemma \\ref{local mu is 0} and \\eqref{relating im primitive and classical lambda invariant}. On the other hand, when $\\ast=\\op{an}$, the result follows from \\cite[Proposition 2.4]{GV00}.\n\\end{proof}\n\n\\begin{Th}\nLet $g_1$ and $g_2$ satisfy the conditions stated in the introduction. Suppose the relations $\\mu^{\\op{alg}}(r_{g_1}\\otimes \\psi)=\\mu^{\\op{an}}(r_{g_1}\\otimes \\psi)=0$ and $\\lambda^{\\op{alg}}(r_{g_1}\\otimes \\psi)=\\lambda^{\\op{an}}(r_{g_1}\\otimes \\psi)$ hold. 
Then, we have further equalities $\\mu^{\\op{alg}}(r_{g_2}\\otimes \\psi)=\\mu^{\\op{an}}(r_{g_2}\\otimes \\psi)=0$ and $\\lambda^{\\op{alg}}(r_{g_2}\\otimes \\psi)=\\lambda^{\\op{an}}(r_{g_2}\\otimes \\psi)$.\n\\end{Th}\n\n\\begin{proof}\nLet $\\ast\\in \\{\\op{an}, \\op{alg}\\}$; it follows from Proposition \\ref{boring prop} that the equalities \\[\\mu^{\\op{alg}}(r_{g_i}\\otimes \\psi)=\\mu^{\\op{an}}(r_{g_i}\\otimes \\psi)=0\\] and \\[\\lambda^{\\op{alg}}(r_{g_i}\\otimes \\psi)=\\lambda^{\\op{an}}(r_{g_i}\\otimes \\psi)\\] hold if and only if the relations \\[\\mu_{S_0}^{\\op{alg}}(r_{g_i}\\otimes \\psi)=\\mu_{S_0}^{\\op{an}}(r_{g_i}\\otimes \\psi)=0\\] and $\\lambda_{S_0}^{\\op{alg}}(r_{g_i}\\otimes \\psi)=\\lambda_{S_0}^{\\op{an}}(r_{g_i}\\otimes \\psi)$ hold. Therefore, by assumption, these relations hold for $i=1$, and it remains to deduce them for $i=2$. Proposition \\ref{p-adic LFs congruent} asserts that there is a $p$-adic unit $u$ such that \n\\[L^{\\op{an}}_{S_0}(r_{g_1}\\otimes \\psi)\\equiv u L^{\\op{an}}_{S_0}(r_{g_2}\\otimes \\psi)\\mod{\\mathfrak{p}}.\\]\n\nFrom the above congruence, we find that \\[\\mu^{\\op{an}}_{S_0}(r_{g_1}\\otimes \\psi)=0\\Leftrightarrow \\mu^{\\op{an}}_{S_0}(r_{g_2}\\otimes \\psi)=0\\] and if these $\\mu$-invariants vanish, then \\[\\lambda^{\\op{an}}_{S_0}(r_{g_1}\\otimes \\psi)= \\lambda^{\\op{an}}_{S_0}(r_{g_2}\\otimes \\psi).\\]On the other hand, the same assertion with $\\ast=\\op{an}$ replaced by $\\ast=\\op{alg}$ holds by Proposition \\ref{prop 3.10}. Therefore, the result follows.\n\\end{proof}\n\n\\section{A Criterion for the Vanishing of the $\\mu$-invariant}\\label{s 4}\n\\par Throughout, $f$ is a Hecke eigencuspform of weight $k\\geq 2$ and $r_f$ the symmetric square representation associated to $f$. In this section, we study the relationship between the fine Selmer group associated to the symmetric square representation and the residual representation. 
We thus establish a criterion for the vanishing of the $\\mu$-invariant of $\\op{Sel}_{p^\\infty}(\\mathbf{A}_f\/\\mathbb{Q}_{\\op{cyc}})$ purely in terms of the residual representation $\\mathbf{A}_f[\\mathfrak{p}]$.\n\\par Let $L$ be a number field contained in $\\mathbb{Q}_S$. For any abelian group $N$ equipped with a continuous $\\op{Gal}(\\bar{\\mathbb{Q}}\/\\mathbb{Q})$-action, prime number $q$ and index $i=0,1,2$, set\n\\[K_q^i(N\/L):=\\bigoplus_{q'|q} H^i(L_{q'}, N).\\]Over the infinite extension $\\mathbb{Q}_{\\op{cyc}}$, set\n\\[K_q^i(N\/\\mathbb{Q}_{\\op{cyc}}):=\\varinjlim_L K_q^i(N\/L),\\]where the inductive limit is taken with respect to restriction maps over all number fields $L$ contained in $\\mathbb{Q}_{\\op{cyc}}$. The fine Selmer group is the kernel of the restriction map\n\\[\\op{R}\\left(\\mathbf{A}_f\/\\mathbb{Q}_{\\op{cyc}}\\right):=\\op{ker}\\left\\{H^1\\left(\\mathbb{Q}_S\/\\mathbb{Q}_{\\op{cyc}}, \\mathbf{A}_f\\right)\\longrightarrow \\bigoplus_{q\\in S} K_q^1(\\mathbf{A}_f\/\\mathbb{Q}_{\\op{cyc}}) \\right\\}.\\]\nFor $i=0,1,2$, define the compact $\\op{G}$-modules \n\\[\\mathcal{Z}^i\\left(\\textbf{T}_f\/\\mathbb{Q}_{\\op{cyc}}\\right):=\\varprojlim_L H^i(\\mathbb{Q}_S\/L, \\textbf{T}_f)\\]where the projective limit is taken over corestriction maps as $L$ ranges over all number fields contained in $\\mathbb{Q}_{\\op{cyc}}$. 
The Poitou--Tate sequence for $\\textbf{T}_f$ over $\\mathbb{Q}_{\\op{cyc}}$ breaks up into short exact sequences\n\\begin{equation}\\label{sesPT}\\begin{split}\n 0&\\rightarrow H^0(\\mathbb{Q}_{\\op{cyc}}, \\mathbf{A}_f)\\rightarrow \\bigoplus_{q\\in S} K_q^0(\\mathbf{A}_f\/\\mathbb{Q}_{\\op{cyc}})\\rightarrow \\mathcal{Z}^2(\\textbf{T}_f\/\\mathbb{Q}_{\\op{cyc}})^{\\vee}\\\\\n &\\rightarrow \\op{R}(\\mathbf{A}_f\/\\mathbb{Q}_{\\op{cyc}})\\rightarrow 0,\\\\\n 0&\\rightarrow \\op{R}(\\mathbf{A}_f\/\\mathbb{Q}_{\\op{cyc}})\\rightarrow H^1(\\mathbb{Q}_S\/\\mathbb{Q}_{\\op{cyc}}, \\mathbf{A}_f)\\rightarrow \\bigoplus_{q\\in S} K_q^1(\\mathbf{A}_f\/\\mathbb{Q}_{\\op{cyc}})\\\\\n &\\rightarrow \\mathcal{Z}^1(\\textbf{T}_f\/\\mathbb{Q}_{\\op{cyc}})^{\\vee}\\rightarrow H^2(\\mathbb{Q}_S\/\\mathbb{Q}_{\\op{cyc}}, \\mathbf{A}_f)\\rightarrow 0.\n\\end{split}\\end{equation}\nLet $\\op{Y}(\\mathbf{A}_f\/\\mathbb{Q}_{\\op{cyc}})$ be the Pontryagin dual of the fine Selmer group $\\op{R}(\\mathbf{A}_f\/\\mathbb{Q}_{\\op{cyc}})$.\n\\begin{Lemma}\\label{torsionconditions}\nThe following statements are equivalent:\n\\begin{enumerate}\n \\item\\label{one} $\\op{Y}(\\mathbf{A}_f\/\\mathbb{Q}_{\\op{cyc}})$ is $\\Lambda$-torsion,\n \\item\\label{two} $\\mathcal{Z}^2(\\textbf{T}_f\/\\mathbb{Q}_{\\op{cyc}})$ is $\\Lambda$-torsion.\n\\end{enumerate}\nFurthermore, if the above statements hold, then\\[\\mu\\left(\\op{Y}(\\mathbf{A}_f\/\\mathbb{Q}_{\\op{cyc}})\\right)=0\\Leftrightarrow \\mu\\left(\\mathcal{Z}^2(\\textbf{T}_f\/\\mathbb{Q}_{\\op{cyc}})\\right)=0.\\]\n\\end{Lemma}\n\\begin{proof}\nLet $U_q$ and $A_q$ denote the Pontryagin duals of $K_q^0(\\mathbf{A}_f\/\\mathbb{Q}_{\\op{cyc}})$ and $H^0(\\mathbb{Q}_{\\infty,q},\\mathbf{A}_f)$ respectively. 
From $\\eqref{sesPT}$ we arrive at the exact sequence\n\\[0\\rightarrow Y(\\mathbf{A}_f\/\\mathbb{Q}_{\\op{cyc}})\\rightarrow \\mathcal{Z}^2(\\textbf{T}_f\/\\mathbb{Q}_{\\op{cyc}})\\rightarrow \\bigoplus_{q\\in S} U_q.\\] Since $q$ is finitely decomposed in the cyclotomic $\\mathbb{Z}_p$-extension $\\mathbb{Q}_{\\op{cyc}}$, it follows that $U_q$ is finitely generated as a $\\mathbb{Z}_p$-module, and hence torsion as a $\\Lambda$-module. Therefore, $\\eqref{one}$ and $\\eqref{two}$ are equivalent. Since $U_q$ is finitely generated as a $\\mathbb{Z}_p$-module, it follows that \n\\[\\mu\\left(\\op{Y}(\\mathbf{A}_f\/\\mathbb{Q}_{\\op{cyc}})\\right)=0\\Leftrightarrow \\mu\\left(\\mathcal{Z}^2(\\textbf{T}_f\/\\mathbb{Q}_{\\op{cyc}})\\right)=0.\\]\n\n\\end{proof}\n\n\\begin{Lemma}\\label{globalECLemma}Let $M$ be a finite-dimensional $\\mathbb{F}_p$-vector space on which $\\op{G}_{\\mathbb{Q},S}$ acts. Then, we have the following relation\n\\[\\begin{split}&\\operatorname{corank}_{\\Omega}H^1(\\mathbb{Q}_S\/\\mathbb{Q}_{\\op{cyc}}, M)-\\operatorname{corank}_{\\Omega}H^2(\\mathbb{Q}_S\/\\mathbb{Q}_{\\op{cyc}}, M)\\\\=&\\dim M-\\dim H^0(\\mathbb{R}, M).\\end{split}\\]\n\\end{Lemma}\n\\begin{proof}\n\\par It follows from \\cite[Proposition 1.6]{howson2002euler} that the $\\Omega$-corank of a module $N$ may be calculated via the following formula\n\\[\\operatorname{corank}_{\\Omega} N=\\sum_{j\\geq 0} (-1)^j \\dim H^j(\\Gamma, N).\\] Note that $M$ is a finite-dimensional vector space over $\\mathbb{F}_p$. 
We have that\n\\[\\begin{split}&\\sum_{i\\geq 0}(-1)^{i+1} \\operatorname{corank}_{\\Omega}H^{i}\\left(\\mathbb{Q}_S\/\\mathbb{Q}_{\\op{cyc}},M\\right)\\\\=&\\sum_{i,j\\geq 0}(-1)^{i+j+1}\\dim H^j(\\Gamma, H^{i}(\\mathbb{Q}_S\/\\mathbb{Q}_{\\op{cyc}},M))\\\\=&\\sum_{i\\geq 0}(-1)^{i+1}\\dim H^{i}(\\mathbb{Q}_S\/\\mathbb{Q},M)\n\\\\=&\\dim M-\\dim H^0(\\mathbb{R}, M).\\end{split}\\]\nThe last equality follows from the global Euler-characteristic formula.\n\\end{proof}\n\n\\begin{Lemma}\\label{localEClemma}\nLet $q$ be a finite prime and $M$ an $\\mathbb{F}_p[\\op{G}_q]$-module which is finite-dimensional as an $\\mathbb{F}_p$-vector space. \nWe have that \n\\[\\operatorname{corank}_{\\Omega}K_q^1(M\/\\mathbb{Q}_{\\op{cyc}})=\\begin{cases}\\dim M\\text{ if }q|p\\\\\n0\\text{ if }q\\nmid p.\\end{cases}\\]\n\\end{Lemma}\n\\begin{proof}\nFor a choice of a prime $w|q$ of $\\mathbb{Q}_{\\op{cyc}}$, let $\\Gamma_w:=\\op{Gal}(\\mathbb{Q}_{\\infty,w}\/\\mathbb{Q}_q)$. By an argument similar to the proof of Lemma \\ref{globalECLemma}, we obtain the following relation\n\\[\\begin{split}&\\sum_{i\\geq 0}(-1)^{i+1}\\operatorname{corank}_{\\Omega(\\Gamma_w)}H^i(\\mathbb{Q}_{\\infty,w}, M)\\\\=&\\begin{cases}\\dim M& q|p\\\\\n0& q\\nmid p.\\end{cases}\\end{split}\\]\nSince $\\op{Gal}(\\bar{\\mathbb{Q}}_{\\infty,w}\/\\mathbb{Q}_{\\infty,w})$ has $p$-cohomological dimension $\\leq 1$, the cohomology groups $H^i(\\mathbb{Q}_{\\infty,w},M)$ vanish for $i>1$. 
Since $q$ is finitely split in $\\mathbb{Q}_{\\op{cyc}}$, it follows that\n\\[\\operatorname{corank}_{\\Omega(\\Gamma_w)} H^0(\\mathbb{Q}_{\\infty,w},M)=0.\\] One deduces that \n\\[\\begin{split}&\\operatorname{corank}_{\\Omega(\\Gamma_w)}H^1(\\mathbb{Q}_{\\infty,w}, M)\\\\=&\\begin{cases}\\dim M& q|p\\\\\n0& q\\nmid p.\\end{cases}\\end{split}\\]\nOn the other hand, \n\\[K^1_q(M\/\\mathbb{Q}_{\\op{cyc}})^{\\vee}=\\op{Ind}_{\\Gamma_w}^{\\Gamma}\\left(H^1(\\mathbb{Q}_{\\infty,w}, M)^{\\vee}\\right)=\\Omega\\otimes_{\\Omega(\\Gamma_w)}H^1(\\mathbb{Q}_{\\infty,w}, M)^{\\vee},\\]\nand as a result, \n\\[\\operatorname{corank}_{\\Omega} K^1_q(M\/\\mathbb{Q}_{\\op{cyc}})=\\operatorname{corank}_{\\Omega(\\Gamma_w)}H^1(\\mathbb{Q}_{\\infty,w}, M).\\]The assertion of the Lemma follows.\n\\end{proof}\n\nThe following is an easy consequence of Lemmas \\ref{globalECLemma} and \\ref{localEClemma}.\n\\begin{Corollary}\\label{balancedCor}\nAssume that $f$ has good ordinary reduction at $p$. Then, we have the following relation\n\\[\\begin{split}&\\operatorname{corank}_{\\Omega}H^1(\\mathbb{Q}_S\/\\mathbb{Q}_{\\op{cyc}}, \\mathbf{A}_f[\\mathfrak{p}])-\\operatorname{corank}_{\\Omega}H^2(\\mathbb{Q}_S\/\\mathbb{Q}_{\\op{cyc}}, \\mathbf{A}_f[\\mathfrak{p}])\\\\=&\\operatorname{corank}_{\\Omega}\\left(\\bigoplus_{q\\in S} K^1_q(\\mathbf{A}_f^-[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}})\\right)\\\\=& 1.\\end{split}\\]\n\\end{Corollary}\n\n\\begin{Lemma}\\label{muzeroH2}\nAssume that the conditions of Lemma $\\ref{torsionconditions}$ are satisfied. 
Then the following are equivalent:\n\\begin{enumerate}\n \\item $\\mu\\left(Y(\\mathbf{A}_f\/\\mathbb{Q}_{\\op{cyc}})\\right)=0$,\n \\item $H^2(\\mathbb{Q}_S\/\\mathbb{Q}_{\\op{cyc}}, \\mathbf{A}_f[\\mathfrak{p}])$ is a cotorsion $\\Omega$-module.\n\\end{enumerate}\n\\end{Lemma}\n\\begin{proof}\nLetting $E_2^{i,j}:=E^j\\left(H^i(\\mathbb{Q}_S\/\\mathbb{Q}_{\\op{cyc}},\\mathbf{A}_f[\\mathfrak{p}])^{\\vee}\\right)$, the Iwasawa cohomology group \\[\\mathcal{Z}^{2}(\\mathbf{A}_f[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}})\\] is related to the cohomology groups $H^i\\left(\\mathbb{Q}_S\/\\mathbb{Q}_{\\op{cyc}},\\mathbf{A}_f[\\mathfrak{p}]\\right)^{\\vee}$ via Jannsen's spectral sequence\n\\[E_2^{i,j}\\Rightarrow \\mathcal{Z}^{i+j}(\\mathbf{A}_f[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}}).\\]The cohomology groups $H^i(\\mathbb{Q}_S\/\\mathbb{Q}_{\\op{cyc}},\\mathbf{A}_f[\\mathfrak{p}])$ are cofinitely generated as $\\Omega$-modules. As a consequence, for $j>0$, $E^j\\left(H^i(\\mathbb{Q}_S\/\\mathbb{Q}_{\\op{cyc}},\\mathbf{A}_f[\\mathfrak{p}])^{\\vee}\\right)$ is $\\Omega$-torsion. If condition (2) is satisfied, $E_2^{2,0}$ is $\\Omega$-torsion. From Jannsen's spectral sequence, condition $(2)$ is equivalent to the assertion that $\\mathcal{Z}^2(\\mathbf{A}_f[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}})$ is $\\Omega$-torsion. 
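The comparison with $\mathbf{T}_f$ carried out next rests on a standard fact from the structure theory of finitely generated $\Lambda$-modules. The following chain of equivalences, a sketch added for the reader's convenience and not part of the original argument, records it for a finitely generated torsion $\Lambda$-module $\mathcal{Z}$, with $\Omega=\Lambda/p$:

```latex
% Standard structure theory (sketch), for Z a finitely generated
% torsion Lambda-module and Omega = Lambda/p:
\[
\mu(\mathcal{Z})=0
\;\Longleftrightarrow\; \mathcal{Z} \text{ is finitely generated over } \mathbb{Z}_p
\;\Longleftrightarrow\; \mathcal{Z}/p\mathcal{Z} \text{ is finite}
\;\Longleftrightarrow\; \mathcal{Z}/p\mathcal{Z} \text{ is } \Omega\text{-torsion}.
\]
```

The last equivalence holds because $\Omega\simeq\mathbb{F}_p[[T]]$ is a local principal ideal domain, over which a finitely generated module is torsion precisely when it is finite.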
From the short exact sequence\n\\[0\\rightarrow \\textbf{T}_f\\xrightarrow{p} \\textbf{T}_f\\rightarrow \\mathbf{A}_f[\\mathfrak{p}]\\rightarrow 0,\\] we obtain the exact sequence\n\\[\\mathcal{Z}^2(\\textbf{T}_f\/\\mathbb{Q}_{\\op{cyc}})\\xrightarrow{p}\\mathcal{Z}^2(\\textbf{T}_f\/\\mathbb{Q}_{\\op{cyc}})\\rightarrow \\mathcal{Z}^2(\\mathbf{A}_f[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}})\\rightarrow \\mathcal{Z}^3(\\textbf{T}_f\/\\mathbb{Q}_{\\op{cyc}})=0.\\]As a result,\n\\[\\mathcal{Z}^2(\\mathbf{A}_f[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}})\\simeq \\mathcal{Z}^2(\\textbf{T}_f\/\\mathbb{Q}_{\\op{cyc}})\/p\\mathcal{Z}^2(\\textbf{T}_f\/\\mathbb{Q}_{\\op{cyc}})\\]is $\\Omega$-torsion if and only if the $\\mu$-invariant of $\\mathcal{Z}^2(\\textbf{T}_f\/\\mathbb{Q}_{\\op{cyc}})$ is zero. Therefore, condition (2) is equivalent to condition (1).\n\\end{proof}\nThe $\\mathfrak{p}^n$-Selmer group is \n\\[\\op{Sel}_{p^\\infty} (\\mathbf{A}_f[\\mathfrak{p}^n]\/\\mathbb{Q}_{\\op{cyc}}):=\\ker\\left\\{H^1(\\mathbb{Q}_S\/\\mathbb{Q}_{\\op{cyc}}, \\mathbf{A}_f[\\mathfrak{p}^n])\\rightarrow \\bigoplus_{w\\in S(\\mathbb{Q}_{\\op{cyc}})} H^1( \\mathbb{Q}_{\\infty,w}, D_w[\\mathfrak{p}^n])\\right\\},\\] where $D_w=\\mathbf{A}_f$ (resp. $D_w=\\mathbf{A}_f^-$) if $w\\nmid p$ (resp. 
$w\\mid p$).\nSet \\[\\operatorname{S}^*(\\mathbf{A}_f[\\mathfrak{p}^n]\/\\mathbb{Q}_{\\op{cyc}}):=\\varprojlim_L \\operatorname{Sel}(\\mathbf{A}_f[\\mathfrak{p}^n]\/L),\\]where the projective limit is taken over number fields $L\\subset \\mathbb{Q}_{\\op{cyc}}$ with respect to corestriction maps.\n\n\\begin{Lemma}\\label{Sstarinjection}\nWe have an injection \\[\\operatorname{S}^*(\\mathbf{A}_f[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}})\\hookrightarrow \\operatorname{Hom}_{\\Omega}(\\op{Sel}_{p^\\infty} (\\mathbf{A}_f[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}})^{\\vee},\\Omega).\\]\n\\end{Lemma}\n\\begin{proof}\nThe proof is identical to that of \\cite[Lemma 5.5]{lim2018fine}.\n\\end{proof}\n\\begin{Lemma}\\label{muequalszeroOmega}\nThe following conditions are equivalent:\n\\begin{enumerate}\n \\item $\\mu\\left(\\op{Sel}_{p^\\infty}(\\mathbf{A}_f\/\\mathbb{Q}_{\\op{cyc}})\\right)=0$,\n \\item $\\operatorname{Sel}(\\mathbf{A}_f[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}})$ is $\\Omega$-cotorsion.\n\\end{enumerate}\n\\end{Lemma}\n\\begin{proof}\nStandard arguments show that the kernel and cokernel of the natural map\n\\[\\operatorname{Sel}(\\mathbf{A}_f[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}})\\rightarrow \\operatorname{Sel}(\\mathbf{A}_f\/\\mathbb{Q}_{\\op{cyc}})[\\mathfrak{p}]\\] are finite, and the result follows from this.\n\\end{proof}\n\\begin{Th}\\label{muzeroconditions}\nThe following statements are equivalent:\n\\begin{enumerate}\n \\item $\\mu\\left(\\op{Sel}_{p^\\infty}(\\mathbf{A}_f\/\\mathbb{Q}_{\\op{cyc}})\\right)=0$,\n \\item the group $H^2(\\mathbb{Q}_S\/\\mathbb{Q}_{\\op{cyc}},\\mathbf{A}_f[\\mathfrak{p}])$ is $\\Omega$-torsion and there is a short exact sequence\n \\[0\\rightarrow \\op{Sel}_{p^\\infty}(\\mathbf{A}_f[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}})\\rightarrow H^1(\\mathbb{Q}_S\/\\mathbb{Q}_{\\op{cyc}},\\mathbf{A}_f[\\mathfrak{p}])\\rightarrow \\bigoplus_{w\\in S(\\mathbb{Q}_{\\op{cyc}})} H^1(\\mathbb{Q}_{\\infty,w}, D_w[\\mathfrak{p}])\\rightarrow 0.\\]\n 
\\item The group $H^2(\\mathbb{Q}_S\/\\mathbb{Q}_{\\op{cyc}},\\mathbf{A}_f[\\mathfrak{p}])$ is $\\Omega$-torsion and $\\operatorname{S}^*(\\mathbf{A}_f[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}})=0$.\n\\end{enumerate}\n\\end{Th}\n\\begin{proof}\nWe begin by showing that conditions $(2)$ and $(3)$ are equivalent. Assume condition $(2)$. Then from the Poitou--Tate sequence, $\\operatorname{S}^*(\\mathbf{A}_f[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}})^{\\vee}$ injects into $H^2(\\mathbb{Q}_S\/\\mathbb{Q}_{\\op{cyc}},\\mathbf{A}_f[\\mathfrak{p}])$ and hence, according to Lemma \\ref{muzeroH2}, is $\\Omega$-torsion. It follows from Lemma $\\ref{Sstarinjection}$ that $\\operatorname{S}^*(\\mathbf{A}_f[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}})=0$. Condition $(3)$ therefore follows from $(2)$. On the other hand, condition $(2)$ is a direct consequence of condition $(3)$ and the Poitou--Tate sequence.\n\\par In order to complete the proof, it suffices to show that conditions $(1)$ and $(3)$ are equivalent. Suppose that condition $(1)$ holds. Then, being a quotient of $X(\\mathbf{A}_f\/\\mathbb{Q}_{\\op{cyc}})$, the dual fine Selmer group $Y(\\mathbf{A}_f\/\\mathbb{Q}_{\\op{cyc}})$ also has zero $\\mu$-invariant. By Lemma $\\ref{muzeroH2}$, it follows that $H^2(\\mathbb{Q}_S\/\\mathbb{Q}_{\\op{cyc}},\\mathbf{A}_f[\\mathfrak{p}])$ is $\\Omega$-torsion. 
From the Poitou--Tate sequence, \n\\[\\begin{split}&\\operatorname{corank}_{\\Omega}\\operatorname{Sel}(\\mathbf{A}_f[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}})+\\operatorname{corank}_{\\Omega}\\left(\\bigoplus_{q\\in S} K_q^1(D_q[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}})\\right)\\\\=&\\operatorname{corank}_{\\Omega} H^1(\\mathbb{Q}_S\/\\mathbb{Q}_{\\op{cyc}}, \\mathbf{A}_f[\\mathfrak{p}])+\\operatorname{corank}_{\\Omega}\\operatorname{S}^*(\\mathbf{A}_f[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}}).\\end{split}\\]It follows from Corollary $\\ref{balancedCor}$ that \n\\[\\operatorname{corank}_{\\Omega} H^1(\\mathbb{Q}_S\/\\mathbb{Q}_{\\op{cyc}}, \\mathbf{A}_f[\\mathfrak{p}])=\\operatorname{corank}_{\\Omega}\\left(\\bigoplus_{q\\in S} K_q^1(D_q[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}})\\right)\\] and as a result, \n\\[\\operatorname{corank}_{\\Omega}\\operatorname{S}^*(\\mathbf{A}_f[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}})=\\operatorname{corank}_{\\Omega}\\operatorname{Sel}(\\mathbf{A}_f[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}}).\\] Since $\\mu\\left(\\operatorname{Sel}(\\mathbf{A}_f\/\\mathbb{Q}_{\\op{cyc}})\\right)=0$, it follows from Lemma $\\ref{muequalszeroOmega}$ that $\\operatorname{Sel}(\\mathbf{A}_f[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}})$ is $\\Omega$-cotorsion, and as a result, $\\operatorname{S}^*(\\mathbf{A}_f[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}})$ is $\\Omega$-cotorsion. It follows from Lemma $\\ref{Sstarinjection}$ that $\\operatorname{S}^*(\\mathbf{A}_f[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}})=0$. 
Therefore, the Poitou--Tate sequence gives rise to a short exact sequence\n \\[0\\rightarrow \\op{Sel}_{p^\\infty}(\\mathbf{A}_f[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}})\\rightarrow H^1(\\mathbb{Q}_S\/\\mathbb{Q}_{\\op{cyc}},\\mathbf{A}_f[\\mathfrak{p}])\\rightarrow \\bigoplus_{w\\in S(\\mathbb{Q}_{\\op{cyc}})} H^1(\\mathbb{Q}_{\\infty,w}, D_w[\\mathfrak{p}])\\rightarrow 0.\\]\n On the other hand, if condition (2) is satisfied, it follows from Corollary $\\ref{balancedCor}$ that $\\operatorname{Sel}(\\mathbf{A}_f[\\mathfrak{p}]\/\\mathbb{Q}_{\\op{cyc}})$ is $\\Omega$-cotorsion. It follows from Lemma $\\ref{muequalszeroOmega}$ that \\[\\mu\\left(X(\\mathbf{A}_f\/\\mathbb{Q}_{\\op{cyc}})\\right)=0.\\] This completes the proof of the theorem.\n\\end{proof}\n\n\n\n\\bibliographystyle{abbrv}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
\nPhotons that escape through the porous interstellar medium \\citep[ISM,][]{clarke02}, \nwhose porosity is driven by supernova (SN) explosions \\citep{mckee77}, \ninto the intergalactic medium (IGM) create \\mbox{{\\sc H ii}}\\ bubbles, which expand as more stars form.\nThe eventual percolation of \\mbox{{\\sc H ii}}\\ bubbles would mark the end of the cosmological reionization\n\\citep[e.g.,][]{gnedin00,mcquinn07,shin08}. \nThis stellar reionization scenario has been studied extensively, both (semi-) analytically \\citep[e.g.][]{madau99,miralda-escude00,barkana01,bianchi01,cen03,wyithe03,somerville03,bolton07,wyithe07,kuhlen12,robertson13} and \nnumerically \\citep[e.g.][]{gnedin00,razoumov02,ciardi03,fujita03,trac07,gnedin08,wise09,razoumov10,yajima11,Paardekooper13}.\nIt appears that dwarf galaxies are the most plausible source of the ionizing photons,\nprovided that the escape fraction is significant ($\\mbox{$f_{\\rm esc}$} >10 \\%$).\nActive galactic nuclei also contribute to ionizing photons in both the ultraviolet (UV) and X-ray bands but\nare generally believed to be sub-dominant to stellar sources \n\\citep{haehnelt01,wyithe03,schirber03,faucher-giguere08a,cowie09,willott10,fontanot14}.\nThe strong accretion shock present in massive halos ($\\mbox{${M}_{\\rm vir}$} \\lower.5ex\\hbox{$\\; \\buildrel > \\over \\sim \\;$} 10^{10.5}\\, \\mbox{${M}_\\odot$}$) \nmay also produce a non-negligible amount of hydrogen ionizing photons in the vicinity of the galactic gaseous disk \\citep{dopita11}.\n\nThe major uncertainty in the dwarf galaxy-driven reionization picture is the escape fraction of ionizing photons. \nObservationally, this is difficult to probe, because the hydrogen ionizing photons escaping from \ndwarf galaxies are easily absorbed by the IGM during reionization ($z\\gtrsim7$).\nIn addition, it requires a large sample of galaxies to obtain a statistically significant estimate of the\nescape fraction ($f_{\\rm esc}$). 
Nevertheless, it is worth noting that galaxies at higher \nredshift often exhibit a larger relative escape fraction ($f_{\\rm esc}^{\\rm rel}$), which is defined as the ratio of \nthe escape fractions at 900$\\AA$ and 1500$\\AA$, than their low-$z$ counterparts \\citep{siana10}. \nObservations of star-forming galaxies at $z\\lesssim1$ indicate that the relative escape fraction is only \na few percent \\citep{leitherer95,deharveng01,malkan03,siana07,cowie09,bridge10,siana10}. \nThe only exception reported so far is Haro 11, which shows $f_{\\rm esc}\\sim 4-10\\%$ \\citep{bergvall06}.\nOn the other hand, a non-negligible fraction ($\\sim10\\%$) of star-forming galaxies at $z\\sim3$ reveals \na high escape fraction of $f_{\\rm esc}^{\\rm rel} \\ge 0.5$ \\citep{shapley06,iwata09,nestor11,nestor13,cooke14}.\nFor typical Lyman break galaxies at $z\\sim3$ in which 20--25\\% of UV photons are escaping \\citep{reddy08},\nthe relative fraction corresponds to a high escape fraction of $\\mbox{$f_{\\rm esc}$}\\sim0.1$.\nGiven that galaxies are more actively star forming at high redshift \\citep[e.g.][]{bouwens12a,dunlop13},\nit has been suggested that there may be a correlation between star formation rate and \\mbox{$f_{\\rm esc}$}, \nand that \\mbox{$f_{\\rm esc}$}\\ may evolve with redshift \\citep[][]{kuhlen12}. \n\nPredicting the escape fraction in theory is also a very challenging task.\nThis is essentially because there is little understanding of the structure of the ISM in high-$z$ dwarf galaxies. \nNumerical simulations are perhaps the best suited to investigate this subject, \nbut different subgrid prescriptions and\/or finite resolution often lead to different conclusions. 
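The conversion quoted earlier from a relative to an absolute escape fraction is simple arithmetic; as a quick sanity check (a sketch, using only the numbers cited in the text):

```python
# Sanity check of the relative -> absolute escape fraction conversion:
# f_esc(900 A) = f_esc_rel * f_esc(1500 A).
f_esc_rel = 0.5                     # high relative escape fraction (Shapley et al. 2006 subsample)
f_1500_lo, f_1500_hi = 0.20, 0.25   # 1500 A escape fraction of LBGs (Reddy et al. 2008)

f_esc_lo = f_esc_rel * f_1500_lo
f_esc_hi = f_esc_rel * f_1500_hi
print(f"f_esc(900 A) = {f_esc_lo:.3f}-{f_esc_hi:.3f}")  # ~0.1, as quoted
```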
\nUsing an adaptive mesh refinement (AMR) code, ART \\citep{kravtsov97}, with SN-driven energy \nfeedback, \\citet{gnedin08} claim that the angle-averaged escape fraction increases with galaxy mass \nfrom $10^{-5}$ to a few percent in the range $10^{10} \\lower.5ex\\hbox{$\\; \\buildrel < \\over \\sim \\;$} M_{\\rm gal} \\le 4\\times10^{11}\\,\\mbox{${M}_\\odot$}$.\nThey attributed this trend to the fact that more massive galaxies have a smaller gas-to-stellar scale-height than \nlower mass galaxies in their simulations. On the other hand, \\citet{razoumov10} argue, based on cosmological \nTreeSPH simulations \\citep{sommer-larsen03}, that more than 60\\% of the \nhydrogen ionizing photons escape from dwarf galaxies in dark matter halos of $M_{\\rm halo}=10^8-10^9\\mbox{${M}_\\odot$}$. \nMore massive halos of $10^{11}\\mbox{${M}_\\odot$}$ are predicted to have a considerably smaller \\mbox{$f_{\\rm esc}$}\\ ($\\lower.5ex\\hbox{$\\; \\buildrel < \\over \\sim \\;$} 10\\%$). \nA similar conclusion is reached by \\citet{yajima11}. It should be noted, however, that resolution could \npotentially be an issue in these two studies in the sense that their resolution of a few hundred to a few \nthousand parsecs is unable to resolve most star-forming regions and hence to capture obscuring \ncolumn densities and a porous ISM. \\citet{wise09} performed cosmological radiation hydrodynamic \nsimulations employing very high resolution (0.1 pc), and found that the neutral hydrogen column \ndensity varies over the solid angles from $N_{\\rm HI}\\sim 10^{16}\\, {\\rm cm^{-2}}$ \nto $10^{22}\\, {\\rm cm^{-2}}$ with the aid of SN explosions and photo-ionization.\nBecause of the porous ISM, a high \\mbox{$f_{\\rm esc}$}\\ of $\\sim40\\%$ is achieved \nin small halos of $M_{\\rm halo}=10^{7} - 10^{9.5} \\mbox{${M}_\\odot$}$. 
\n\\citet{wise14} show that an even higher fraction ($\\sim 50\\%$) of hydrogen \nionizing photons escapes from minihalos of $M_{\\rm halo}=10^{6.25} - 10^{7} \\mbox{${M}_\\odot$}$.\n\n\nAnother potentially important source of ionizing radiation is runaway OB stars that are dynamically \ndisplaced from their birthplace. The runaway OB stars are normally defined by their peculiar motion \n\\citep[$v_{\\rm pec} \\ge 30\\, {\\rm km\\,s^{-1}}$,][]{blaauw61}, and roughly $30\\%$ of OB stars are \nclassified as runaways in the Milky Way \\citep{stone91,hoogerwerf01,tetzlaff11}.\nAlthough the fraction is still uncertain, their peculiar speed of $\\sim 40\\,{\\rm km\\,s^{-1}}$ means \nthat the runaway OB stars can, in principle, travel away from the birthplace by $\\sim$200 pc in 5 Myr,\nmaking them an attractive source for the ionizing photons. \nThe runaway OB stars are thought to originate from a three-body interaction with other stars in a young \ncluster \\citep{leonard88}, and\/or from a SN explosion of a companion in a binary system \\citep{blaauw61}.\n\\citet{conroy12} evaluated the impact of the inclusion of runaway OB stars on \\mbox{$f_{\\rm esc}$}\\ using a simple analytic argument, \nand concluded that the runaway OB stars may enhance \\mbox{$f_{\\rm esc}$}\\ by a factor of up to $\\sim4.5$ in halos with \n$M_{\\rm halo}=10^8-10^9\\mbox{${M}_\\odot$}$. \n\nThe aim of this study is to investigate the importance of the aforementioned two processes \nby measuring the escape fraction from high-resolution cosmological radiation hydrodynamics simulations. \nFirst, given that modeling the SN explosion as a thermal energy dump\n is well known to suffer from the artificial radiative cooling problem \\citep[e.g.][]{katz92,slyz05}, \nwe expect that the role of SNe is likely to be underestimated in some cosmological simulations \\citep[e.g.][]{gnedin08}. 
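The $\sim$200 pc displacement quoted above for runaway OB stars follows directly from $d = v_{\rm pec}\,t$; a quick check of the arithmetic (unit conversions only, no assumptions beyond the quoted numbers):

```python
# Displacement of a runaway OB star with v_pec ~ 40 km/s over ~5 Myr.
KM_PER_PC = 3.086e13    # kilometres in one parsec
SEC_PER_MYR = 3.156e13  # seconds in one megayear

v_pec_kms = 40.0        # peculiar speed [km/s]
t_myr = 5.0             # travel time [Myr]

d_pc = v_pec_kms * t_myr * SEC_PER_MYR / KM_PER_PC
print(f"displacement ~ {d_pc:.0f} pc")  # ~200 pc, as quoted in the text
```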
\nWith a new physically based SN feedback model that captures all stages of the Sedov explosion from \nthe free expansion to the snowplow phase, we study the connection between the escape of ionizing photons and feedback processes \nin dwarf galaxies. Second, we extend the idea of \\citet{conroy12}, and quantify \nthe impact of the runaway OB stars on reionization in a more realistic environment.\n\nWe first describe the details of our cosmological radiation hydrodynamics simulations \nincluding the implementation of runaway OB stars in Section~2. \nWe present the feedback-regulated evolution of the escape fraction and the impact of the inclusion \nof runaway OB stars in Section~3. We summarize and discuss our findings in Section~4. \nOur new mechanical feedback from SN explosions is detailed in the Appendix.\n\n\n\n\n\\section{Method}\n\n\\subsection{Hydrodynamics code}\nWe make use of the Eulerian adaptive mesh refinement code, {\\sc ramses} \\citep[][ver. 3.07]{teyssier02}, to investigate \nthe escape of ionizing radiation from high-$z$ galaxies. \n{\\sc ramses} is based on the fully threaded oct-tree structure \\citep{khokhlov98}, \nand uses the second-order Godunov scheme to solve the Euler equations.\nThe hydrodynamic states reconstructed at the cell interface are limited using the MinMod method,\nand then advanced using the Harten-Lax-van Leer contact wave Riemann solver \\citep[HLLC,][]{toro94}.\nWe adopt a typical Courant number of 0.8. The Poisson equation is solved using the adaptive particle-mesh method.\nGas can effectively cool down to $10^4$ K by atomic and metal cooling \\citep{sutherland93}.\nBelow $10^4$ K, metal fine-structure transitions, such as {\\sc [CII]} 158$\\mu m$, can further lower \nthe temperature down to 10 K, as in \\citet{rosen95}. 
We set the initial metallicity to $2\\times10^{-5}$, \nas primordial SNe can quickly enrich metals in mini-halos of mass $10^7\\,\\mbox{${M}_\\odot$}$ \\citep[e.g.,][]{whalen08}, \nwhich our simulations cannot resolve properly.\n\nWe use the multi-group radiative transfer (RT) module developed by \\citet{rosdahl13} \nto compute the photoionization by stars. \nThe module solves the moment equations for three photon packets ({\\sc Hii}, {\\sc Heii}, and {\\sc Heiii} ionizing photons) \nusing a first-order Godunov method with M1 closure for the Eddington tensor. \nWe adopt the Harten-Lax-van Leer \\citep[HLL,][]{harten83} intercell flux function. \nIonizing photons from each star are taken into consideration at every fine time step.\nNote that an advantage of the moment-based RT is that it is not limited by the number of sources.\nThe production rate of ionizing photons varies with time for a given initial mass function \\citep[IMF,][see also \\citealt{rosdahl13}]{leitherer99}. The majority of the ionizing photons are released in $\\sim$ 5 Myr of stellar age.\nWe adopt the production rate equivalent to that of the Kroupa IMF \\citep{kroupa01} \nfrom the {\\sc Starburst99} library \\citep{leitherer99}\\footnote{Note that we use the Chabrier IMF to \nestimate the frequency of SN explosions. We choose the number of ionizing photons equivalent to that of the Kroupa IMF, \nbecause models with the Chabrier IMF are not yet available in the {\\sc Starburst99} library \\citep{leitherer99}}.\nThe radiation is coupled with gas via photo-ionization and photo-heating,\nand a set of non-equilibrium chemistry equations for {\\sc Hii}, {\\sc Heii}, and {\\sc Heiii} \nis solved as in \\citet{anninos97}. 
We assume that photons emitted by recombination are \nimmediately absorbed by nearby atoms (case B).\nThe speed of light is reduced to 1\\% of its true value to speed up the simulations \\citep[e.g.][]{gnedin01}.\nThis is justifiable because we are mainly interested in {\\it the flux} of escaping photons at the virial sphere.\n\n\n\\begin{table}\n \\caption{Summary of cosmological simulations}\n \\label{table1}\n \\centering\n \\begin{tabular}{@{}ccccccc} \n \\hline\n \\hline\nModel & SNII & RT & Run- & $\\Delta x_{\\rm min}$ & ${m_{\\rm star,min}}$ & $m_{\\rm dm}$ \\\\\n & & & aways & [pc] & [$\\mbox{${M}_\\odot$}$] & [$10^5\\,\\mbox{${M}_\\odot$}$] \\\\\n\\hline\nFR & $\\checkmark$ & $\\checkmark$ & -- & 4.2 & 49 & 1.6 \\\\\nFRU &$\\checkmark$ & $\\checkmark$ & $\\checkmark$ & 4.2 & 49 & 1.6 \\\\\n\\hline\n \\end{tabular}\n\\end{table}\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=7.5cm]{fig1.eps}\n \\caption{Dark matter halo mass function from the zoomed-in region of the $\\textsf{FR}$\\ run at $z=7$.\n Comparison with the \\citet{jenkins01} mass function at the same epoch indicates that our simulated volume \n represents an average region of the universe. }\n \\label{fig:mf}\n\\end{figure}\n\n\n\\subsection{Cosmological Simulations}\n\nWe carry out cosmological simulations to investigate \nthe escape fraction in realistic environments. For this purpose, we generate the initial conditions \nusing the {\\sc music} software \\citep{hahn11}, with the WMAP7 cosmological parameters \\citep{komatsu11}:\n$(\\Omega_{\\rm m}, \\Omega_{\\Lambda}, \\Omega_{\\rm b}, h, \\sigma_8, n_s) = (0.272, 0.728, 0.045, 0.702, 0.82, 0.96)$.\nA large volume of $(25\\,{\\rm Mpc} \\, h^{-1})^3$ is employed to include the effect of the large-scale tidal field. 
\nTo achieve high mass resolution, we first run dark matter-only simulations with 256$^3$ particles, \nand identify a rectangular region of $3.8\\times4.8\\times9.6$ Mpc (comoving)\nthat encloses two dark matter halos of $\\simeq 1.5\\times 10^{11} \\mbox{${M}_\\odot$}$ at $z=3$.\nThen, we further refine the mass distribution of the zoomed-in region, such that the mass of a dark matter particle \nis $m_{\\rm dm}=1.6\\times10^5\\,\\mbox{${M}_\\odot$}$, effectively corresponding to 2048$^3$ particles.\nAlthough we purposely select a region in which two massive dark matter halos are present at $z=3$,\na comparison with the number of dark matter halos per volume predicted by \\citet{jenkins01} shows that \nour simulated box represents an average region of the universe at $z=7$ (Figure~\\ref{fig:mf}). \n\n\\begin{figure}\n \\centering\n \\includegraphics[width=8.6cm]{fig2.eps}\n \\caption{Expansion of the \\mbox{{\\sc H ii}}\\ bubble in a cosmological simulation ($\\textsf{FR}$). Three panels show the evolution of \n the density-weighted fraction of ionized hydrogen in the zoomed-in region. The horizontal size of the figure is \n 9.5 Mpc (comoving).}\n \\label{fig:hii}\n\\end{figure}\n\n\nThe level of the root grid in the zoomed-in region is 11, consistent with the dark matter resolution.\nTwelve further levels of refinement are triggered if the dark matter plus baryon mass in a cell exceeds \n8 times the mass of a dark matter particle. We keep the minimum physical size of a cell fixed at \n $\\Delta x_{\\rm min}=25\\,{\\rm Mpc} \\, h^{-1}\/ 2^{23} = 4.2\\,{\\rm pc}$ at all redshifts. However, this refinement \ncriterion is not optimized to resolve the structure of the ISM, unless extremely high mass resolution is adopted. \nFor example, for a gas cell of $n_{\\rm H}=10\\, {\\rm cm^{-3}}$, the criterion comes into play only if the size of \nthe cell is larger than $\\sim$ 160 pc. 
In order to better resolve the structure of the ISM, \nwe enforce that a cell with $n_{\\rm H}\\ge 1\\, {\\rm cm^{-3}}$ is resolved on $8 \\Delta x_{\\rm min}=34\\,{\\rm pc}$.\nIn a similar vein, we apply a more aggressive refinement criterion for the star-forming gas, \nsuch that gas with $n_{\\rm H}=100\\, {\\rm cm^{-3}}$ ($800\\,{\\rm cm^{-3}}$) is always \nresolved on an 8.5 pc (4.2 pc) cell.\nWe adopt a very high stellar mass resolution of $\\approx 49\\,\\mbox{${M}_\\odot$}$.\nThis means that a star particle with the minimum mass will produce a single SN event for the Chabrier IMF.\n\n\n\nWe run two sets of cosmological simulations, $\\textsf{FR}$\\ and $\\textsf{FRU}$, with identical initial conditions down to $z=7$.\nBoth runs include star formation, metallicity-dependent radiative cooling \\citep{sutherland93,rosen95}, \nthermal stellar winds, mechanical feedback from SN explosions, and photoionization by stellar radiation.\nThe runaway OB stars are included only in the $\\textsf{FRU}$\\ run. In Figure~\\ref{fig:hii}, we show an example of the \ngrowth of \\mbox{{\\sc H ii}}\\ bubbles in the $\\textsf{FR}$\\ run. Our simulated region is nearly ionized at $z=7$.\n\n\n\nDark matter (sub)halos are identified using the {\\sc Amiga} halo finder \\citep[{\\sc Ahf},][]{gill04,knollmann09}.\n{\\sc Ahf} first constructs adaptive meshes based on the particle distribution, finds the density maxima,\nand determines physical quantities based on a virial overdensity ($\\Delta_{\\rm vir}$).\nDuring this procedure, gravitationally unbound particles are removed iteratively if they move faster than the local escape velocity. The virial radius is defined such that \nthe mass enclosed within the virial sphere is the virial overdensity times the critical density of the universe times the volume, \ni.e. 
$\\mbox{${M}_{\\rm vir}$}(z) = \\Delta_{\\rm vir}(z) \\rho_{\\rm crit}(z) 4 \\pi r_{\\rm vir}^3 \/ 3$.\nWe take $\\Delta_{\\rm vir}=177$, appropriate for a $\\Lambda$-dominated universe at $z>6$ \\citep{bryan98}.\nThis yields 796, 443, and 183 dark matter halos of mass $\\mbox{${M}_{\\rm vir}$}\\ge10^{8}\\,\\mbox{${M}_\\odot$}$ free of contamination \nby coarse dark matter particles ($m_{\\rm dm} > 1.6\\times10^{5}\\,\\mbox{${M}_\\odot$}$) at $z=7$, 9, and 11, respectively. \n\n\n\n\\subsection{Star Formation and Feedback}\n\nStars form in very dense, compact molecular cores. \nInfrared extinction maps of nearby interstellar cores indicate that their sizes range from 0.01 to 0.4 pc\n\\citep[e.g.][]{alves07,konyves10}, which is difficult to resolve in current cosmological simulations.\nNevertheless, studies of gravitational collapse in converging flows \\citep{gong11} suggest that \na gravitationally bound cloud is likely to experience runaway collapse no matter how the collapse is initiated. \nIn a similar spirit, we assume that stars form in a cell if the following conditions are met simultaneously\n\\citep[e.g.][]{cen92}:\n\\begin{itemize}\n\\itemsep0em\n\\item[1.] the flow is convergent ($\\vec{\\nabla}\\cdot (\\rho {\\vec v}) <0$),\n\\item[2.] the cooling time is shorter than the dynamical time, \n\\item[3.] the gas is Jeans unstable, and\n\\item[4.] the number density of hydrogen exceeds the threshold density $n_{\\rm th}={\\rm 100 \\,cm^{-3}}$.\n\\end{itemize}\nThe last condition is motivated by the density of a Larson-Penston profile \\citep{larson69,penston69} at $0.5\\Delta x$,\n $\\rho_{\\rm LP}\\approx 8.86\\, c_s^2 \/ (\\pi\\,G\\,\\Delta x^2)$, where $c_s$ is the sound speed and $\\Delta x$ is \n the size of the most refined cell. 
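As a sanity check on the threshold above, the Larson-Penston density can be evaluated numerically. The sketch below is illustrative only; the assumed sound speed of $0.3\,{\rm km\,s^{-1}}$ for cold star-forming gas is our assumption, not a value quoted in the text:

```python
import math

# Physical constants (cgs)
G = 6.674e-8      # gravitational constant [cm^3 g^-1 s^-2]
m_H = 1.6726e-24  # mass of a hydrogen atom [g]
X_H = 0.76        # hydrogen mass fraction
pc = 3.0857e18    # parsec [cm]

def larson_penston_nH(c_s_kms, dx_pc):
    """Hydrogen number density [cm^-3] of the Larson-Penston profile,
    rho_LP ~ 8.86 c_s^2 / (pi G dx^2), for cell size dx."""
    c_s = c_s_kms * 1e5   # km/s -> cm/s
    dx = dx_pc * pc       # pc -> cm
    rho_LP = 8.86 * c_s**2 / (math.pi * G * dx**2)
    return rho_LP * X_H / m_H

# For the most refined cell (4.2 pc) and an assumed cold-gas sound speed of
# ~0.3 km/s, rho_LP corresponds to n_H ~ 100 cm^-3, close to n_th.
print(round(larson_penston_nH(0.3, 4.2)))
```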
\n Star particles are created based on the Schmidt law \\citep[][]{schmidt59}, \n $ \\dot{\\rho}_{\\star} = \\epsilon_{\\rm ff} \\, \\rho_{\\rm gas} \\, \/ \\, t_{\\rm ff} $, assuming that 2\\% ($\\epsilon_{\\rm ff}$) of the star-forming \n gas is converted into stars per free-fall time ($t_{\\rm ff}$) \\citep{krumholz07,kennicutt98}. \n The mass of each star particle is determined as $m_\\star=\\alpha\\, N_p \\rho_{\\rm th} \\, \\Delta x_{\\rm min}^3 $, \n where $\\rho_{\\rm th}$ is the threshold density for star formation, \n$\\Delta x_{\\rm min}$ is the size of the most refined cell, and $\\alpha$ is a parameter that \ncontrols the minimum mass of a star particle. $N_p$ is the number of star particles to be formed in a cell,\nwhich is drawn from a Poisson distribution, $P(N_p) = (\\lambda ^{N_p} \/ N_p! ) \\exp\\left(-\\lambda\\right)$.\nHere the Poissonian mean ($\\lambda$) is computed as \n$\\lambda \\equiv \\epsilon_{\\rm ff} \\left({\\rho\\Delta x^3}\/{m_{\\rm \\star,min}}\\right) \\left( {\\Delta t_{\\rm sim}}\/{t_{\\rm ff}}\\right), $\nwhere $\\Delta t_{\\rm sim}$ is the simulation time step, and $m_{\\rm \\star,min}$ is the minimum stellar mass (i.e. $N_p=1$).\n\nWe describe the SN feedback using a new physical model which captures \nthe SN explosion at all stages, from the early free expansion to the final momentum-conserving snowplow phase.\nBriefly, we deposit radial momentum into the cells affected by supernova feedback, conserving energy appropriately.\nThe amount of input momentum is determined by the stage the blast wave is in, which in turn depends on the \nphysical conditions (density and metallicity) of the gas being swept up and on the simulation resolution. \nThe virtue of our scheme is that an approximately (within 20\\%) correct amount of momentum is imparted to the \nsurrounding gas regardless of the resolution. 
Thus, this prescription should be useful for cosmological simulations,\nespecially those with finite resolution that potentially suffer from artificial radiative cooling. \nThe details of our implementation and a simple test are included in the Appendix.\n\nThe frequency of SNe per solar mass is estimated assuming the Chabrier IMF \\citep{chabrier03}.\nFor a simple stellar population with a low- (high-) mass cut-off of 0.1 (100) \\mbox{${M}_\\odot$}, \nthe total mass fraction between 8 and 100 \\mbox{${M}_\\odot$}\\ is 0.317, and the mean SN progenitor mass is 15.2 \\mbox{${M}_\\odot$}\\ on the zero-age main sequence.\nAt the time of the explosion, we also deposit newly processed metals into the surrounding gas. \nThe mass fraction of newly synthesized metals in the stellar ejecta is taken to be 0.05, following \\citet{arnett96}.\nA star particle is assumed to undergo the SN phase after the main-sequence lifetime of the mean SN progenitor \\citep[10 Myr,][]{schaller92}. As discussed in \\citet{slyz05}, allowing for the delay between star formation and explosion \n(i.e. stellar lifetimes) is crucial to the formation of hot bubbles in the ISM. \nWe find that the physically based SN feedback employed in this study drives stronger galactic winds \nthan runs with thermal or kinetic feedback, which are valid only under certain conditions\n \\citep[][see below]{dubois08}. \nStellar winds from massive stars are modeled as thermal input, based on \\citet{leitherer99}.\n\n\n\n\n\\subsection{Runaway OB Stars}\n\nOur implementation of runaway OB stars is largely motivated by \\citet{tetzlaff11},\nwho compiled candidate runaway stars younger than 50 Myr from the sample of 7663 {\\it Hipparcos} stars. 
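As an aside, the IMF numbers quoted in the previous subsection imply roughly one SN per $\sim$48 $\mbox{${M}_\odot$}$ of stars formed, consistent with the minimum star particle mass of $\approx 49\,\mbox{${M}_\odot$}$ producing a single SN event. A back-of-the-envelope check (not code from the simulation):

```python
# Chabrier IMF numbers quoted in the text
f_massive = 0.317    # mass fraction of stars between 8 and 100 Msun
m_progenitor = 15.2  # mean SN progenitor mass [Msun]

sn_per_msun = f_massive / m_progenitor  # SNe per solar mass of stars formed
msun_per_sn = 1.0 / sn_per_msun         # stellar mass formed per SN event

print(f"{sn_per_msun:.4f} SNe per Msun; one SN per {msun_per_sn:.1f} Msun")
```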
\nAfter correcting for the solar motion and Galactic rotation, they found that the peculiar space velocities of the stars \nmay be decomposed into two Maxwellian distributions intersecting at 28 ${\\rm km\\,s^{-1}}$.\nAssuming that each Maxwellian distribution represents a kinematically distinct population, \nthey estimated the fraction of runaways to be $\\sim 27.7\\pm 1.9\\%$ for the sample \nwith full kinematic information. The dispersion of the Maxwellian distribution is \nmeasured to be 24.4 ${\\rm km\\, s^{-1}}$ for the high-velocity group.\n\nSince neither runaway OB stars formed through the explosion of a SN in a binary nor \nthose dynamically ejected from a cluster are resolved in our simulations, \nwe crudely approximate them by splitting each star particle into a normal (70\\% in mass) \nand a runaway particle (30\\%) at the time of star formation. While the initial velocity of the normal star is chosen \nas the velocity of the birth cloud, for runaway particles we add a velocity drawn from a Maxwellian distribution \non top of the motion of the birth cloud. To do so, we generate the distribution following \nthe Maxwellian with a dispersion of $\\sigma_v = 24.4\\,{\\rm km\\,s^{-1}}$ and a minimum space \nvelocity of $v_{\\rm 3D}=28 \\,{\\rm km\\,s^{-1}}$ using the rejection method \\citep{press92}. The direction of the \nrunaway motion is chosen randomly for simplicity. A similar approach is taken by \\citet{ceverino09} to \nstudy the formation of disk galaxies in a cosmological context.\n\n\n\n\n\n\\subsection{Estimation of Escape Fraction}\n\nThe fraction of escaping ionizing photons ($f_{\\rm esc}$) is measured by comparing the photon flux at the virial radius \nwith the photon production rate of young massive stars.\nSince the speed of light is finite, there is a small time delay between the production of photons by the stars and \ntheir escape at the virial sphere. 
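The runaway velocity sampling described in the previous subsection can be sketched as follows. This is a minimal illustration of drawing a truncated Maxwellian kick; here we simply sample three Gaussian components and reject slow draws, rather than using the envelope-based rejection method of \citet{press92}:

```python
import numpy as np

SIGMA_V = 24.4  # 1D dispersion of the Maxwellian [km/s]
V_MIN = 28.0    # minimum 3D space velocity of runaway particles [km/s]

def draw_runaway_kick(rng):
    """Draw a 3D kick velocity whose speed follows a Maxwellian with
    dispersion SIGMA_V, truncated below V_MIN; direction is isotropic."""
    while True:
        # a Maxwellian speed is the norm of three Gaussian components
        v = rng.normal(0.0, SIGMA_V, size=3)
        if np.linalg.norm(v) >= V_MIN:  # rejection step: discard slow draws
            return v  # [km/s], to be added to the birth-cloud velocity

rng = np.random.default_rng(42)
kicks = np.array([draw_runaway_kick(rng) for _ in range(1000)])
speeds = np.linalg.norm(kicks, axis=1)  # all >= V_MIN by construction
```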
In order to take this into account, we use the photon production rate at an earlier time ($t-r_{\\rm vir}\/c'$), \nwhere $c'$ is the reduced speed of light used in the simulations. The escape fraction is then computed as\n\\begin{equation}\nf_{\\rm esc}(t) \\equiv \\frac{\\int d\\Omega\\, \\vec{F}_{\\rm ion}(t) \\cdot \\hat{r} ~\\Theta(\\vec{F}_{\\rm ion}\\cdot \\hat{r})}{\\int dm_* \\, \\dot{N}_{\\rm ion} (t-r_{\\rm vir}\/c')},\n\\label{eq:fesc}\n\\end{equation}\nwhere $\\vec{F}_{\\rm ion}$ is the ionizing photon flux, $d\\Omega$ is the solid angle, $m_*$ is the mass of each star particle,\n$\\dot{N}_{\\rm ion}(t)$ is the photon production rate per solar mass of a simple stellar population of age $t$, \nand $\\Theta$ is the Heaviside step function. Here, we approximate the delay time as a constant, $r_{\\rm vir}\/c'$, \nfor each halo, assuming that the central source is point-like. \nSince only outflowing photons are considered in Equation~\\ref{eq:fesc},\nwe find that a minor fraction ($\\sim 5\\%$) of galaxies exhibit $f_{\\rm esc}$ greater than 1.\nThis happens mostly when few absorbers are left in the halo after disruptive SN explosions.\nIn this case, we randomly assign $f_{\\rm esc}$ between 0.9 and 1.0.\nWe confirm that the photon production rate-averaged escape fraction, which is the most important quantity in this study, \nis little affected by this choice even if the net flux is used, and thus we adopt the simpler method. \n\nDust can also affect the determination of the escape of hydrogen ionizing photons. \nHowever, given that our simulated galaxies are very \nmetal-poor ($0.002-0.05\\,Z_{\\odot}$) and that galaxies with lower metallicity have a progressively lower amount of \ndust \\citep{lisenfeld98,engelbracht08,galametz11,fisher13}, it is unlikely that dust decreases the escape \nfraction substantially. Thus, we neglect the absorption of hydrogen ionizing photons by dust in this study. 
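In discretized form, Equation~\ref{eq:fesc} amounts to summing the outward flux over pixels on the virial sphere and dividing by the delayed photon production rate. The sketch below is schematic (array names are ours; the $r_{\rm vir}^2$ factor converts the flux per unit area into photons per steradian, which is implicit in the equation):

```python
import numpy as np

def escape_fraction(F_dot_rhat, dOmega, r_vir, m_star, Ndot_ion, t, c_prime):
    """f_esc(t): outward ionizing flux through the virial sphere divided by
    the photon production rate at the retarded time t - r_vir/c'.

    F_dot_rhat -- radial flux component F_ion . r_hat on each sphere pixel
    dOmega     -- solid angle of each pixel [sr]
    m_star     -- star particle masses [Msun]
    Ndot_ion   -- callable: photons/s/Msun for a population of age t
    """
    # Heaviside step: keep only outflowing photons
    outflux = np.sum(np.clip(F_dot_rhat, 0.0, None) * dOmega) * r_vir**2
    produced = np.sum(m_star * Ndot_ion(t - r_vir / c_prime))
    return outflux / produced

# Toy check: a unit isotropic flux with matching production gives f_esc = 1.
npix = 768
f = escape_fraction(np.ones(npix), np.full(npix, 4 * np.pi / npix), 1.0,
                    np.array([4 * np.pi]), lambda t: 1.0, t=10.0, c_prime=0.01)
```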
\n\n\n\n\n\n\\section{Results}\n\\subsection{Feedback-regulated Escape of Ionizing Photons}\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=8.5cm]{fig3.eps}\n \\caption{The baryon-to-star conversion efficiency at $z=7$ from the $\\textsf{FR}$\\ (blue) and the $\\textsf{FRU}$\\ (orange) runs. \n Only central galaxies are shown. The cosmic mean ($\\Omega_{\\rm b}\/\\Omega_{\\rm m}=0.165$) is \n shown as a black solid line. Also included as a star is the stellar fraction measured from \n the NutFB simulation \\citep{kimm11b}.\n Our mechanical feedback from SN explosions is more effective \n at regulating star formation, compared with previous studies injecting thermal or kinetic energy (see the text).\n }\n \\label{fig:mstar}\n\\end{figure}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=17cm]{fig4_1.eps}\n \\includegraphics[width=18cm]{fig4_2.eps}\n \\includegraphics[width=17cm]{fig4_3.eps}\n \\caption{Evolution of the escape fraction (\\mbox{$f_{\\rm esc}$}) and specific star formation rate (sSFR) in two massive \n halos from the $\\textsf{FR}$\\ run. Black solid lines in the top and bottom panels indicate the escape fraction measured \n at the virial radius at each snapshot as a function of the age of the universe. We denote the logarithmic stellar mass \n at different times by orange text.\n Black dashed lines correspond to the photon number-weighted average of \\mbox{$f_{\\rm esc}$}\\ by that time (\\mbox{$\\left$}). \n Blue shaded regions display the sSFR in ${\\rm Gyr^{-1}}$. One can see that there is a delay between \n the peak in \\mbox{$f_{\\rm esc}$}\\ and sSFR due to the \n delay in the onset of the strong outflow. The middle panels show \n an example of this delay identified in the top panel (a,b). The projected density of gas and the fraction of \n ionized hydrogen are shown in both cases, as indicated in each panel. 
Interestingly, the volume filling \n fraction of the neutral hydrogen within 0.2 \\mbox{${R}_{\\rm vir}$}\\ is found to be 25\\% larger in snapshot (b), indicating that \n \\mbox{$f_{\\rm esc}$}\\ depends not only on the volume-filling, circumgalactic neutral gas, but also on the dense \n star-forming gas. We do not display the physical quantities if $M_{\\rm vir}\\le10^8\\,\\mbox{${M}_\\odot$}$.\n }\n \\label{fig:ex}\n\\end{figure*}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=8.2cm]{fig5_1.eps}\n \\includegraphics[width=8.2cm]{fig5_2.eps}\n \\caption{ \n {\\it Left}: Escape fraction measured at the virial radius at three different redshifts from the $\\textsf{FR}$\\ run.\n Different redshifts are shown as different colors and symbols, as indicated in the legend. \n To increase the statistical significance, we combine the results \n from seven consecutive snapshots for each redshift. Solid lines indicate the median, and error bars show \n the interquartile range. Although there is a large scatter, more than 50\\% of the galaxies show $\\mbox{$f_{\\rm esc}$} \\lower.5ex\\hbox{$\\; \\buildrel > \\over \\sim \\;$} 10\\%$.\n {\\it Right:} Photons escaping per second through the virial sphere. \n }\n \\label{fig:fesc_stat}\n\\end{figure*}\n\nCosmological hydrodynamics simulations often suffer from an artificial over-cooling problem \nin forming disk galaxies \\citep[e.g.][]{kimm11b,hummels12}, mainly because the energy from SN \nexplosions is radiated away before it is properly converted into momentum, due to inadequate resolution\nof the multi-phase ISM. This directly affects the escape of ionizing photons. Motivated by this challenge, \nwe have implemented a SN feedback scheme that reasonably approximates the Sedov blast waves \nfrom the free expansion to the snowplow stage. 
In Figure~\\ref{fig:mstar}, we present the baryon-to-star \nconversion efficiency ($f_{\\star}\\equiv M_{\\rm star}\/(\\Omega_{\\rm b} M_{\\rm vir}\/\\Omega_{\\rm m})$)\nof the central galaxies in dark matter halos at $z=7$ from the $\\textsf{FR}$\\ run. \nIt shows that our new physically motivated SN feedback is very effective at suppressing star formation. \nFor example, the most massive halo with $M_{\\rm vir}\\sim 3\\times10^{10}\\,\\mbox{${M}_\\odot$}$ at $z=7$ shows $f_{\\star}\\approx0.08$. \nAlthough a direct comparison may be difficult due to the different initial conditions used, \nit is worth noting that the conversion efficiency is about a factor of 7 smaller than \nthat found in the {\\sc NutFB} run \\citep[][see Fig.13]{kimm11b}, shown as a star in Figure~\\ref{fig:mstar}. \nWe note that the momentum input from SN explosions used in the {\\sc NutFB} run is a factor of $3-4$ smaller \nthan that at the end of the cooling phase \\citep[see Appendix,][]{blondin98}.\nFor lower mass halos, the conversion efficiency is found to be even lower, \nreaching $M_{\\rm star} \/ M_{\\rm vir} \\lower.5ex\\hbox{$\\; \\buildrel < \\over \\sim \\;$} 0.01 \\, \\Omega_{\\rm b} \/ \\Omega_{\\rm m} $ at $M_{\\rm vir} \\sim 10^9\\,\\mbox{${M}_\\odot$}$.\nIt is also interesting to note that the conversion \nefficiency at $M_{\\rm vir}\\ge 10^{10}\\mbox{${M}_\\odot$}$ agrees reasonably well, within the error bars, with the \nsemi-analytic results obtained to reproduce the observed stellar mass function, star formation rate, and \ncosmic star formation rate density \\citep[e.g.,][Figure~7]{behroozi13}.\nAs the feedback becomes more effective and fewer stars are formed, the stellar metallicities of these high-$z$ galaxies \nbecome lower. \nWe find that the most massive galaxy in our $z=7$ sample ($M_{\\rm star}=4\\times10^8\\,\\mbox{${M}_\\odot$}$) \nhas a stellar metallicity of 0.05 $Z_{\\rm \\odot}$. 
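For reference, the quoted efficiency can be inverted for the stellar mass: with $f_{\rm b}=\Omega_{\rm b}\/\Omega_{\rm m}\approx0.165$, a $3\times10^{10}\,\mbox{${M}_\odot$}$ halo with $f_{\star}\approx0.08$ hosts $\sim4\times10^8\,\mbox{${M}_\odot$}$ of stars, matching the mass quoted above for the most massive galaxy (a trivial consistency check, not a result):

```python
Omega_b, Omega_m = 0.045, 0.272  # WMAP7 values used in the simulations
f_b = Omega_b / Omega_m          # cosmic baryon fraction, ~0.165

M_vir = 3e10                     # most massive halo at z=7 [Msun]
f_star = 0.08                    # baryon-to-star conversion efficiency

M_star = f_star * f_b * M_vir    # stellar mass implied by f_star
print(f"M_star ~ {M_star:.1e} Msun")
```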
\nThis is at least a factor of 2--3 smaller than the prediction by \\citet{finlator11} at the same epoch. \n\\citet{kimm13} also investigated the UV properties of $z=7$ galaxies of stellar mass\n$5\\times10^8 - 3\\times10^{10}\\,\\mbox{${M}_\\odot$}$ using a SN energy-based feedback scheme,\nand found that stellar metallicities are generally higher than those found in the $\\textsf{FR}$\\ run. \n\\citet{kimm13} found that the stellar metallicity for galaxies of mass $4\\times 10^{8}\\mbox{${M}_\\odot$}$\nfalls in the range of $0.1-0.5Z_{\\rm \\odot}$. \nThe gas metallicities ($Z_{\\rm gas}$) are also different in the two simulations.\nThe gas metallicity of the ISM within $2.56$ kpc for the $4\\times 10^{8}\\mbox{${M}_\\odot$}$ galaxies\nis $0.083Z_{\\rm \\odot}$ in the $\\textsf{FR}$\\ run, which is about a factor of 3 lower, on average, \nthan that of \\citet{kimm13} ($Z_{\\rm gas}=0.1-0.7Z_{\\rm \\odot}$).\nThese comparisons lead us to conclude that our physically based feedback scheme is effective in \nalleviating the overcooling problem.\n\nOne may wonder whether stars form inefficiently in these small haloes \n($10^8\\lower.5ex\\hbox{$\\; \\buildrel < \\over \\sim \\;$} M_{\\rm vir} \\lower.5ex\\hbox{$\\; \\buildrel < \\over \\sim \\;$} 10^9\\,\\mbox{${M}_\\odot$}$) \nbecause gas accretion is suppressed by the ionizing background radiation \\citep{shapiro94,thoul96,gnedin00b,dijkstra04,sobacchi13,noh14}. \nHowever, this is unlikely to be the case, given that galaxies in atomic cooling halos \nare fed mainly by dense filaments and satellites at high redshift \\citep[e.g.,][]{powell11}, \nwhich are self-shielded from the background radiation \\citep{faucher-giguere10,rosdahl12}.\nEven in the absence of self-shielding, \\citet{geen13} find no clear sign that reionization suppresses\nstar formation in such halos at $z>6$. 
\\citet{wise14} also show that the fraction of baryons \nin a $10^8$-$10^9\\,\\mbox{${M}_\\odot$}$ halo is reduced by less than a factor of two compared with the cosmic mean in \ntheir cosmological radiation hydrodynamics simulations with thermal supernova feedback and reionization.\nIndeed, we confirm that our mechanical supernova feedback is \nprimarily responsible for the low conversion efficiency by directly comparing the stellar mass of the dwarf galaxies \nbetween simulations with and without ionizing radiation (see the Appendix).\n\nWe now present the time evolution of the star formation rate and the ionizing photon escape fraction\nof two randomly chosen, relatively massive galaxies in Figure~\\ref{fig:ex}.\nThe plot corroborates that feedback from stars governs the evolution of these galaxies. \nThe top and bottom panels show the evolution of the specific star formation rate \n(sSFR$ \\equiv \\dot{M}_{\\rm star}\/M_{\\rm star}$) and the instantaneous \\mbox{$f_{\\rm esc}$}\\ of the central galaxy \nin dark matter halos of mass $3\\times10^{10}$ and $10^{10}\\,\\mbox{${M}_\\odot$}$, respectively. \nThe SFR is computed by averaging the mass of newly formed stars over 3 Myr.\nIt is evident that star formation is episodic on a time scale of $10-30$ Myr, with both\nthe frequency and the oscillation amplitude decreasing with increasing stellar mass. \nThis means that SN explosions effectively \ncontrol the growth and disruption of star-forming clouds.\nWhen the galaxies are small ($t_{\\rm H} \\lower.5ex\\hbox{$\\; \\buildrel < \\over \\sim \\;$} 0.5\\, {\\rm Gyr}$), the explosions even completely \nshut down star formation across the galaxies, as stars form only in a few dense clouds. \nDuring these quiet periods, \\mbox{$f_{\\rm esc}$}\\ remains high ($\\mbox{$f_{\\rm esc}$} \\lower.5ex\\hbox{$\\; \\buildrel > \\over \\sim \\;$} 0.2$). 
On the other hand, massive \ngalaxies contain many star-forming clumps, as can be seen in the projected density plot (middle row).\nThe fact that the episodic star formation history becomes smoother at late times indicates that \nthese clumps are not entirely susceptible, but somewhat resilient, to the SN explosions \narising from neighboring star clusters.\n\nMore importantly, we find that there is a time delay between the peak of \\mbox{$f_{\\rm esc}$}\\ and that of sSFR. This is \nessentially because massive stars with $M\\approx15\\,\\mbox{${M}_\\odot$}$ explode $\\sim$10 Myr after their \nbirth in our simulation. Consider a dense cloud that has just begun to form stars. \nSince the gas flow is usually convergent in these regions, the density of the gas will rise with time, and \nso will the SFR. This means that more and more massive stars will explode as time goes on.\nOnce enough SNe go off to significantly redistribute the birth cloud, \nthe SFR will begin to drop, and \\mbox{$f_{\\rm esc}$}\\ will increase. 
Note that the increase in the number \nof SNe continues even after the peak of SFR, as massive stars live $\\sim$10 Myr.\nOnce the massive stars formed at the peak of SFR evolve off, star formation \nwill be further suppressed as a result of the destruction of the star-forming clouds, and strong \noutflows are likely to be produced, thus maximizing \\mbox{$f_{\\rm esc}$}.\nTherefore, the time delay stems from the interplay between the build-up of a non-coeval star cluster \nand the subsequent SN explosions after the lifetime of the massive stars ($\\sim$ 10 Myr).\nThe projected density distributions of gas at two snapshots, \none of which displays the peak in sSFR (a) while the other shows the peak in \\mbox{$f_{\\rm esc}$}\\ (b), \nsubstantiate that it is indeed the strong outflow that elevates \\mbox{$f_{\\rm esc}$}\\ (middle row).\nWhen the sSFR is at its peak value, the central galaxy appears relatively quiet (panel (a)), whereas \nstrong outflows are seen when \\mbox{$f_{\\rm esc}$}\\ is highest and the sSFR drops rapidly (panel (b)).\nAs one can read from the figure, this mismatch between the SFR and \\mbox{$f_{\\rm esc}$}\\ means that \na large fraction of the ionizing photons produced at the peak of star formation is absorbed by the birth clouds.\nAlthough \\mbox{$f_{\\rm esc}$}\\ is high at early times ($t_{\\rm H} \\lower.5ex\\hbox{$\\; \\buildrel < \\over \\sim \\;$} 0.5\\,{\\rm Gyr}$), the photon number-weighted \nmean \\mbox{$f_{\\rm esc}$}\\ (dashed lines) stays at around the $10\\%$ level in these two examples. \n\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=8.6cm]{fig6_1.eps}\n \\includegraphics[width=8.6cm]{fig6_2.eps}\n \\caption{\n {\\it Left:} Photon production rate-weighted escape fraction, $\\left<f_{\\rm esc}\\right>$, \n averaged over the age of the universe ($t_{\\rm H}$) in the $\\textsf{FR}$\\ run. 
\n The effective escape fraction in different halo mass bins is shown with different color codings, as indicated in the legend.\n We also display the photon rate-averaged escape fraction of the whole sample at each \n snapshot ($\\left<f_{\\rm esc}\\right>(t)$) (black dotted line), as opposed to the time-averaged quantities (solid and dashed lines). \n We find the effective escape fraction to be $\\sim$10\\%, regardless of the halo mass and redshift. \n Altogether, 11.4\\% of the photons produced until $z=7$ have escaped from halos of $\\mbox{${M}_{\\rm vir}$}\\ge10^8\\,\\mbox{${M}_\\odot$}$.\n {\\it Right:} Relative contribution of halos of different mass ranges to the \n total number of ionizing photons measured at the virial radius. The contribution is computed by taking into \n account the cumulative number of photons produced and the cumulative number of photons escaped from \n halos of the relevant mass range until $t\\le t_{\\rm H}$. \n }\n \\label{fig:fesc_wei}\n\\end{figure*}\n\nWe present statistical results of the escape fraction in Figure~\\ref{fig:fesc_stat}.\nSince there are a limited number of galaxies in our simulated volume and \\mbox{$f_{\\rm esc}$}\\ varies \nsignificantly on $\\sim$10 Myr timescales, we compute the median and interquartile range of \\mbox{$f_{\\rm esc}$}\\ by combining \nthe results from seven consecutive snapshots spanning 21 Myr. 
Several features can be gleaned from this figure.\nFirst, although there is considerable scatter, high-$z$ galaxies exhibit a high \\mbox{$f_{\\rm esc}$}\\ on the order of 10\\%,\nwhich is normally required by semi-analytic calculations of reionization to ionize the universe \nby $z\\sim6$ \\citep{wyithe07,shull12,robertson13}.\nSecond, there is a hint that photons can escape more easily from the galaxies hosted by lower mass halos.\nWe attribute this to the fact that feedback from stars efficiently destroys the few star-forming clouds that are \nresponsible for the total SF in smaller halos, as opposed to larger ones in which young massive stars are \nburied in many star-forming clouds that are relatively resilient to the SN feedback arising \nfrom neighboring star clusters.\nAs shown in the top and bottom panels of Figure~\\ref{fig:ex},\nwhen galaxies are small, star formation can be entirely suppressed by the energetic outflows driven by \nSN explosions.\nThird, we find that \\mbox{$f_{\\rm esc}$}\\ is slightly higher at lower redshift for a given halo mass, consistent with \\citet{Paardekooper13}.\nThis is essentially because the mean density of the gas is lower at lower redshift, and the impact of SNe becomes \nmore effective.\n\n\n\nNote that a high \\mbox{$f_{\\rm esc}$}\\ does not necessarily mean that more photons leave the host halo. \nStar clusters older than $\\sim$ 5 Myr do not contribute \nsignificantly to the total ionizing photon budget even if their \\mbox{$f_{\\rm esc}$}\\ is 1. 
The more relevant quantity for \nreionization should take into account the photon production rate, and we find that the (weak) redshift \ndependence of \\mbox{$f_{\\rm esc}$}\\ disappears when the photon escape rate is plotted (right panel in Figure~\\ref{fig:fesc_stat}).\nSince the instantaneous measurement of \\mbox{$f_{\\rm esc}$}\\ could be misleading,\nwe also present the photon production rate-weighted, time-averaged escape fraction, \n$\\left<f_{\\rm esc}\\right> (\\le t_{\\rm H}) \\equiv \\int_0^{t_{\\rm H}} \\dot{N}_{\\rm ion}(t) f_{\\rm esc}(t) dt \/ \\int_0^{t_{\\rm H}} \\dot{N}_{\\rm ion}(t) dt,$\nin Figure~\\ref{fig:fesc_wei} (left panel). \nThis is a better quantity to use in semi-analytic calculations \nof reionization than \\mbox{$f_{\\rm esc}$}\\ from Figure~\\ref{fig:fesc_stat}.\nOverall, we find that the time-averaged escape fraction at $z=7$ is $\\sim$ 10\\%, \nregardless of the halo mass in the range considered.\nAlso included as the black dotted line in Figure~\\ref{fig:fesc_wei} is the photon production rate-weighted average of \\mbox{$f_{\\rm esc}$}\\ \nof the whole sample at different times ($\\left<f_{\\rm esc}\\right>(t)$). Again, the value is found to fluctuate around 10\\%, \nbut no clear sign of redshift dependence is detected. \n\nThe relative contributions from halos of different masses to the total escaping ionizing photons are \ncompared in Figure~\\ref{fig:fesc_wei} (right panel).\nAs small structures form first in the $\\Lambda$CDM universe, the small halos of mass \n$\\mbox{${M}_{\\rm vir}$} \\le 10^{8.5}\\,\\mbox{${M}_\\odot$}$ dominate down to $z\\sim9$. \nMore massive halos and galaxies emerge later, and their cumulative contribution \nbecomes comparable with that of the smallest halos ($\\mbox{${M}_{\\rm vir}$} \\le 10^{8.5}\\,\\mbox{${M}_\\odot$}$) by $z=7$. 
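Numerically, $\left<f_{\rm esc}\right>(\le t_{\rm H})$ is simply a production rate-weighted mean of the instantaneous $f_{\rm esc}(t)$. The toy example below (our own illustrative numbers, not simulation data) also shows why the weighting matters: when most photons are produced while $f_{\rm esc}$ is low, the weighted mean falls well below the simple time average:

```python
import numpy as np

def weighted_fesc(t, Ndot_ion, f_esc):
    """<f_esc>(<= t_H) = int Ndot*f_esc dt / int Ndot dt (trapezoidal rule)."""
    def trapz(y):
        return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t))
    return trapz(Ndot_ion * f_esc) / trapz(Ndot_ion)

t = np.linspace(0.0, 30.0, 301)        # time since the burst [Myr]
Ndot = np.exp(-t / 5.0)                # toy declining photon production rate
fesc = np.where(t < 10.0, 0.02, 0.30)  # low during the burst, high afterwards

w = weighted_fesc(t, Ndot, fesc)       # well below the simple time average
```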
\nIn our simulations, the 14 most massive halos supply more ionizing photons than the 556 smallest halos with $\\mbox{${M}_{\\rm vir}$} \\le 10^{8.5}\\,\\mbox{${M}_\\odot$}$ at $z=7$.\nThis is mainly because $f_{\\star}$ is much higher in the more massive halos than \nin the small halos, while the effective escape fraction is similar.\nThe typical number of escaping photons per second in halos with $\\mbox{${M}_{\\rm vir}$}\\sim10^{8.5}\\,\\mbox{${M}_\\odot$}$ is \n$f_{\\rm esc}\\,\\dot{N}_{\\rm ion}\\sim10^{49}\\,{\\rm s^{-1}}$, whereas the number can increase up to \n$f_{\\rm esc}\\,\\dot{N}_{\\rm ion}\\sim10^{52}\\,{\\rm s^{-1}}$ in the most massive halos ($\\mbox{${M}_{\\rm vir}$} > 10^{10}\\,\\mbox{${M}_\\odot$}$)\n(Figure~\\ref{fig:fesc_stat}, right panel). \nNotice, however, that this does not necessarily translate into their relative roles in the reionization of the universe.\nSmall halos at high redshift may make a more significant contribution\nto the Thomson optical depth \\citep{wyithe07,shull12,kuhlen12,robertson13}.\n\n\nIt is noted that the recombination timescale corresponding to the mean \ndensity of the universe at $z\\sim10$ ($n_{\\rm H}\\sim10^{-3}\\,{\\rm cm^{-3}}$) is relatively long \n($\\sim$ 50--100 Myr)\\footnote{Given that gas accretion is mostly filamentary \n\\citep[e.g.][]{ocvirk08,dekel09,kimm11,stewart11a}, the actual density of the gas that occupies \nmost of the volume in the halo is likely to be even lower than the mean density of the universe, \nand the recombination timescale could be longer.}, and thus the halo gas around a galaxy \nmay be kept partially ionized even though it is only intermittently irradiated by the galaxy.\nFigure~\\ref{fig:ex} (the second panel in the middle row) indeed shows that \nthe IGM in the vicinity of the central galaxy remains largely ionized despite the fact that the instantaneous \\mbox{$f_{\\rm esc}$}\\ is low. 
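The recombination timescale quoted above follows from $t_{\rm rec}=1\/(n_{\rm H}\,\alpha_{\rm B})$. With a case-B coefficient of $\alpha_{\rm B}\approx2.6\times10^{-13}\,{\rm cm^3\,s^{-1}}$ (appropriate for $\sim10^4$ K gas; the temperature is our assumption) one obtains a timescale of order 100 Myr at $n_{\rm H}\sim10^{-3}\,{\rm cm^{-3}}$:

```python
alpha_B = 2.6e-13  # case-B recombination coefficient at ~1e4 K [cm^3/s]
n_H = 1.0e-3       # mean hydrogen number density at z ~ 10 [cm^-3]
Myr = 3.156e13     # seconds per Myr

t_rec = 1.0 / (n_H * alpha_B) / Myr  # of order 100 Myr
print(f"t_rec ~ {t_rec:.0f} Myr")
```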
Although we do not show the whole distribution
of the ionized hydrogen inside the halo, we confirm that the halo gas between 2 kpc and 12 kpc (the virial radius)
is fully ionized, apart from the small region occupied by cold filamentary gas.
In fact, the volume filling fraction of neutral hydrogen ($f_{\rm v}$)
inside $0.2\,\mbox{${R}_{\rm vir}$}$ ($\sim$2.3 kpc) is found to be $\sim$ 25\% larger in snapshot (b) ($f_{\rm v}\approx0.04$)
than in snapshot (a), suggesting that dense star-forming gas plays a more important role
in determining the escape fraction than volume-filling diffuse neutral gas.

\begin{figure}
 \centering
 \includegraphics[width=8cm]{fig7.eps}
 \caption{ Effective optical depth in the Lyman continuum ($\tau_{\rm eff}$) of the gas in the vicinity of each star
 ($<$100 pc) in galaxies with a low escape fraction ($f_{\rm esc} < 0.1$) at $z\sim8$ from the $\textsf{FR}$\ run. We cast 768 rays
 uniformly distributed across the sky for individual star particles and combine the absorption of the Lyman continuum
 by neutral hydrogen at a distance of 100 pc from each star to obtain the effective optical depth. Different color codings
 display the distribution in different halo mass bins, as indicated in the legend. The dashed lines indicate the
 photon production rate-weighted average of the effective optical depth.
 Again, we combine the results from seven consecutive snapshots to increase the sample size.
 We find that $\tau_{\rm eff,100pc}$ is generally
 large (2 -- 4) for the galaxies with the low escape fraction, indicating that the nearby gas alone can reduce the
 number of ionizing photons by a factor of 7 -- 45. This demonstrates that the ISM should be properly
 resolved to better understand the escape of ionizing photons.
 }
 \label{fig:tau}
\end{figure}


Figure~\ref{fig:tau} demonstrates the importance of resolving the ISM in predicting the escape
of ionizing photons.
In order to estimate the optical depth due to neutral hydrogen in the vicinity of
each star particle ($<$ 100 pc), we spawn 768 rays per particle using the {\sc Healpix} algorithm \citep{gorski05}.
Each ray carries the spectral energy distribution determined by the age and mass of the star particle \citep{leitherer99}.
As the ray propagates, we compute the absorption of the Lyman continuum by neutral hydrogen as
$F_{\rm abs} (\nu) = F_{\rm int} (\nu) \exp{\left[-\tau_{\rm HI} (\nu)\right]}$,
where $\tau_{\rm HI}$ ($=N_{\rm HI} \sigma_{\rm HI}$) is the optical depth and $\sigma_{\rm HI}$ is the hydrogen ionization
cross section \citep{osterbrock06}.
We then combine the attenuated spectral energy distributions propagated out to 100 pc from each star particle,
and measure the remaining number of ionizing photons ($N_{\rm ion,tot}^{\rm final}$) per galaxy.
This is compared with the initial number of ionizing photons ($N_{\rm ion,tot}^{\rm int}$) to obtain the effective
optical depth, $\tau_{\rm eff, 100pc} \equiv \ln \left(N_{\rm ion,tot}^{\rm int} / N_{\rm ion,tot}^{\rm final} \right)$.
Figure~\ref{fig:tau} shows the distribution of the effective optical depth due to the nearby gas for the galaxies with a
low escape fraction ($\mbox{$f_{\rm esc}$} < 0.1$) at $z\sim8$. We find that $\tau_{\rm eff,100pc}$ shows a wide distribution ranging from
0.01 to $\sim$ 100, with photon production rate-weighted averages of $\tau_{\rm eff,100pc}=$ 3.8 and 1.9 for the less
($10^8 < \mbox{${M}_{\rm vir}$} \le 10^9\,\mbox{${M}_\odot$}$) and more massive ($10^9 < \mbox{${M}_{\rm vir}$} \le 10^{10.5}\,\mbox{${M}_\odot$}$) halo groups, respectively.
This indicates that the number of escaping photons is reduced by a factor of $7-45$ by the gas near young stars
in galaxies with small \mbox{$f_{\rm esc}$}.
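Note that the effective optical depth defined above is a photon number-weighted combination over sightlines, not a simple mean of per-ray optical depths. A minimal sketch, using an assumed (illustrative, not simulated) lognormal distribution of per-ray HI optical depths, makes the distinction explicit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-ray HI optical depths for one star particle: a few
# relatively clear sightlines among many opaque ones (assumed lognormal).
tau_rays = rng.lognormal(mean=1.0, sigma=1.5, size=768)

# Each ray carries an equal share of the star's photons; the escaping
# fraction is the mean of exp(-tau) over the rays.
frac_escaping = float(np.mean(np.exp(-tau_rays)))
tau_eff = float(np.log(1.0 / frac_escaping))  # ln(N_int / N_final)

# Clear sightlines dominate the escaping photons, so the effective
# optical depth is smaller than the arithmetic mean of the per-ray taus.
```

This is why a galaxy whose ISM is opaque along most sightlines can still leak photons through a handful of low-column channels.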
In this regard, it is not surprising that
cosmological simulations with limited resolution \citep[e.g.,][]{fujita03,razoumov10,yajima11}
often give discrepant results.

To summarize, we find that there is a time delay between the peak of star formation activity and that of the escape fraction,
due to the delay in the onset of the effective feedback processes that can blow the birth clouds away.
Because of this delay, only 11.4 \% of the ionizing photons escape from their host halos
when averaged over all halos at different redshifts with the photon production rate as the weight, despite the fact that
the instantaneous \mbox{$f_{\rm esc}$}\ can temporarily reach very high values. Halos of different masses
($8\le \log \mbox{${M}_{\rm vir}$}\le10.5$) contribute comparably per logarithmic mass interval to reionization, and
the photon production rate-averaged escape fraction ($\left< f_{\rm esc} \right>(t)$) shows only a weak dependence on redshift
in the range examined \citep[cf.][]{kuhlen12}.




\begin{figure}
 \centering
 \includegraphics[width=8.6cm]{fig8.eps}
 \caption{Difference in the environments where runaway and non-runaway stars younger than 5 Myr are located.
 Approximately $2\times10^5$ stars from the most massive galaxy at $z=7$ are used to plot the histograms.
 It can be seen that runaway stars tend to be located in less dense regions than non-runaway stars.
 }
 \label{fig:nH_runaway}
\end{figure}

\begin{figure}
 \centering
 \includegraphics[width=7.5cm]{fig9.eps}
 \caption{Comparison of the temperature distribution in the run without (top, $\textsf{FR}$) and with
 runaway OB stars (bottom, $\textsf{FRU}$) at $z=10.2$.
The white bar measures 100 kpc (proper).
 The $\textsf{FRU}$\ run shows 30\% bigger hot bubbles with $T\ge10^5\,{\rm K}$ than the $\textsf{FR}$\ run,
 suggesting that runaway OB stars affect the regulation of star formation.
 }
 \label{fig:tem}
\end{figure}



\subsection{Escape Fraction Enhanced by Runaway OB Stars}

\begin{figure*}
 \centering
 \includegraphics[width=8.1cm]{fig10_1.eps}
 \includegraphics[width=8.5cm]{fig10_2.eps}
 \caption{Impact of the inclusion of runaway OB stars on the escape fraction. {\it Left:} Instantaneous escape fraction measured
 at the virial radius. Different color codings display different redshifts, as indicated in the legend.
 The median \mbox{$f_{\rm esc}$}\ from the $\textsf{FRU}$\ run (with runaway OB stars) and the $\textsf{FR}$\ run are shown as solid and dotted lines, respectively.
 The shaded regions mark the interquartile range of \mbox{$f_{\rm esc}$}\ from the $\textsf{FRU}$\ run. It can be seen that runaway OB stars tend to
 increase the escape probability of ionizing photons.
 {\it Right:} Photon production rate-weighted escape fraction, $\left< f_{\rm esc} \right>$,
 averaged over the
 age of the universe ($t_{\rm H}$). The black lines include the whole sample of the simulation,
 while the results in different halo mass bins are presented as dashed lines with different colors.
 The solid and dashed lines show the time-averaged $\left< f_{\rm esc} \right>$, while the dotted line
 shows a measurement of $\left< f_{\rm esc} \right>$ for all halos at each snapshot.
 The time-averaged escape fraction $\left< f_{\rm esc} \right>$ measured at $z=7$
 is 13.8\% in this simulation.
We find that the inclusion of runaway OB stars
 increases the escape of ionizing photons by 22\% by $z=7$, compared with that from the $\textsf{FR}$\ run.
 }
 \label{fig:fesc_runaway}
\end{figure*}

Ionizing photons can not only escape from their birth clouds when feedback processes destroy the clouds,
but can also emerge from runaway OB stars displaced from their birth clouds.
Taking the typical velocity of runaway OB stars,
$\sim\,40\,{\rm km\,s^{-1}}$ \citep{stone91,hoogerwerf01,tetzlaff11},
they could travel a distance of $\sim$ 200 pc in 5 Myr.
\citet{conroy12} examined the possible ramifications of the inclusion of runaway OB stars
using a simple analytic formulation, and concluded that \mbox{$f_{\rm esc}$}\ can be enhanced by a factor of
up to 4.5, from $\mbox{$f_{\rm esc}$}\approx0.02-0.04$ to $\mbox{$f_{\rm esc}$}\approx0.06-0.18$,
in halos of mass $10^8 \lower.5ex\hbox{$\; \buildrel < \over \sim \;$} \mbox{${M}_{\rm vir}$} \lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 10^{9}\,\mbox{${M}_\odot$}$.
Given the complexity of the ISM dynamics \citep[e.g.][]{mckee07},
it seems prudent to examine this issue in greater detail in realistic environments.
To do so, we have performed a twin cosmological simulation of
the $\textsf{FR}$\ run by designating 30\% of the mass in each stellar particle as a separate runaway particle
and dynamically following its motion.


Figure~\ref{fig:nH_runaway} shows an example of the difference in environment
between runaway and non-runaway particles in a galaxy in a $3\times10^{10}\,\mbox{${M}_\odot$}$ halo at $z=7$.
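The $\sim$200 pc displacement quoted above follows directly from $d = vt$; a quick unit check (conversion constants approximate):

```python
# Distance covered by a runaway OB star before it explodes: d = v * t,
# with v ~ 40 km/s over ~5 Myr. Conversion constants are approximate.
KM_PER_PC = 3.086e13    # kilometres per parsec
SEC_PER_MYR = 3.156e13  # seconds per megayear

v_kms, t_myr = 40.0, 5.0
d_pc = v_kms * (t_myr * SEC_PER_MYR) / KM_PER_PC  # ~200 pc
```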
At this redshift, the central galaxy shows $\mbox{$f_{\rm esc}$}=0.14$.
The average hydrogen number density around runaways younger than 5 Myr ($n_{\rm H}\sim130\,{\rm cm^{-3}}$) is found to be
roughly 20 times smaller than that around non-runaways ($n_{\rm H}\sim3000\,{\rm cm^{-3}}$).
Given that these stars will explode in the next 5--10 Myr, the fact that the local density of some runaway OB stars
is smaller than that of non-runaways suggests that the impact of SN explosions will be enhanced.
Indeed, we find that the stellar mass of the galaxies in halos of mass $\mbox{${M}_{\rm vir}$}\gtrsim10^9\,\mbox{${M}_\odot$}$ is smaller by a factor
of 1.7 on average, compared with that from the $\textsf{FR}$\ run (see Figure~\ref{fig:mstar}).
For galaxies in smaller halos, there is no clear hint that the runaway OB stars help suppress star formation.
This is partly because runaway OB stars not only provide energy but also distribute metals more efficiently,
which can increase the cooling rate in halos. A comparison of the temperature distributions between the
two runs further substantiates the claim that runaway OB stars help regulate star formation (Figure~\ref{fig:tem}).
The volume of $T\ge10^{5}\,{\rm K}$ gas inside the zoomed-in region in the $\textsf{FRU}$\ run ($\approx$ 7 kpc$^3$, physical)
is 30\% larger than that in the $\textsf{FR}$\ run.

The left panel of Figure~\ref{fig:fesc_runaway} shows the instantaneous \mbox{$f_{\rm esc}$}\ measured
at three different redshifts from the $\textsf{FRU}$\ run. Again, less massive galaxies tend to exhibit a
higher \mbox{$f_{\rm esc}$}, which can be attributed to the fact that star formation in smaller halos is more easily affected
by the energetic explosions. As expected, the inclusion of the runaway OB stars
increases the instantaneous escape fraction on average.
The photon production rate-weighted average
of \mbox{$f_{\rm esc}$}\ (right panel in Figure~\ref{fig:fesc_runaway}) shows this more clearly. In our fiducial run ($\textsf{FR}$), 11.4\% of
the ionizing photons produced escaped from the halos of mass $\mbox{${M}_{\rm vir}$}\ge10^8\,\mbox{${M}_\odot$}$ at $z\ge7$.
The $\textsf{FRU}$\ run, on the other hand, yields a higher $\left< f_{\rm esc} \right>$ of 13.8\%,
enhanced by 22\% compared with that of the $\textsf{FR}$\ run.
Although this increase is not as large as claimed in \citet{conroy12},
the contribution from the runaway OB stars is certainly significant.
As in the $\textsf{FR}$\ run, no clear dependence of $\left< f_{\rm esc} \right>$ on halo mass is found.

It is interesting to discuss possible origins of the significantly different enhancement in the escape fraction
due to runaway OB stars found in our simulations compared with the estimate by \citet{conroy12}.
First, while their model predicts \mbox{$f_{\rm esc}$}\ of non-runaways to be about 2--4\% in halos of mass
$10^8 \le \mbox{${M}_{\rm vir}$} \le 10^9 \, \mbox{${M}_\odot$}$, we find that the self-regulation of star formation via SN explosions
leads to a high escape fraction of $\sim$ 10\% in our fiducial model ($\textsf{FR}$).
Second, while runaway OB stars in their model have a high $\mbox{$f_{\rm esc}$}$ (=30--80\%),
our results imply that the mean escape fraction of ionizing photons from runaway OB stars
is about $20\%$ ($11.4\%\times 70\% + {\it 20\%}\times 30\%\approx13.8\%$).
We also make a more elaborate estimate as follows.
We measure the optical depth in the Lyman continuum for the gas inside each halo along 768 sightlines
per star particle, and combine the attenuated spectral energy distributions. These are used to count
the number of hydrogen ionizing photons for runaways and non-runaways separately.
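The percentage bookkeeping in this discussion can be checked explicitly. The sketch below takes the 70/30 stellar mass split, the 11.4\% and 13.8\% averages, and the result that runaways supply roughly half of the escaping photons from the text, and reproduces the quoted numbers:

```python
f_nonrun, f_run = 0.70, 0.30   # stellar mass split: non-runaways / runaways
fesc_total = 0.138             # <f_esc> measured in the FRU run
fesc_nonrun = 0.114            # FR-like value attributed to non-runaways

# Simple estimate: 11.4% x 70% + 20% x 30% ~ 13.8%
simple_total = fesc_nonrun * f_nonrun + 0.20 * f_run

# Elaborate estimate: runaways supply ~half of the escaping photons but
# account for only 30% of the stars, so their net escape fraction is
# roughly total / 2 / 0.3 ~ 23%.
fesc_run = fesc_total / 2.0 / f_run

# Scenario with a 2% non-runaway escape fraction (analytic-model-like):
total_low = 0.02 * f_nonrun + fesc_run * f_run  # ~8.3%, i.e. ~4.2x over 2%
```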
We find that the relative contribution from the runaways to the total number of escaping photons is
comparable with that of the non-runaways. Considering that the runaway particles are assumed to account for only 30\% of
all the OB stars, the net $\mbox{$f_{\rm esc}$}$ for the runaways can be estimated to be roughly 23\% ($=13.8\%/2/0.3$).
This is twice the escape probability of the non-runaways, but much smaller than that computed in
the analytic model. If the escape fraction of non-runaway OB stars were 2\% in our simulations,
the total escape fraction would become $2\%\times 70\% + 23\%\times 30\%=8.3\%$,
corresponding to an increase by a factor of 4.2.
It is thus clear that the discrepancy arises in large part from the different escape fraction values
for both non-runaway and runaway OB stars.



\begin{figure}
 \centering
 \includegraphics[width=8.6cm]{fig11.eps}
 \caption{Balance between the ionizing photons escaping from the dark matter halos and the recombination rate
 in the $\textsf{FRU}$\ run.
 The thick grey line shows the balance condition when a clumping factor of $C_{\rm HII}=3$ is used.
 Enough photons escape from the halos after $z\sim8$ to keep the universe ionized.
 }
 \label{fig:budget}
\end{figure}

Although $\left< f_{\rm esc} \right>$ is 22\% larger in the $\textsf{FRU}$\ run than in the $\textsf{FR}$\ run, the cumulative number of photons escaped from halos
with $\mbox{${M}_{\rm vir}$}\ge10^8\,\mbox{${M}_\odot$}$ by $z=7$ ($N_{\rm ion}\approx1.3\times10^{69}$) is found to be similar to that of the $\textsf{FR}$\ run ($N_{\rm ion}\approx1.6\times10^{69}$).
This is because star formation is suppressed in relatively massive halos ($\mbox{${M}_{\rm vir}$} \ge 10^9\,\mbox{${M}_\odot$}$).




\begin{figure}
 \centering
 \includegraphics[width=8.6cm]{fig12.eps}
 \caption{Rest-frame ultraviolet luminosity function from the $\textsf{FRU}$\ run at $z=7$. Error bars denote the Poissonian error.
 Observational data from \citet{bouwens11a} and \citet{mclure13} are shown as the shaded region
 and empty squares, respectively. Also included as solid and dashed lines are the Schechter fits to the
 data provided in these studies.
 }
 \label{fig:uvlf}
\end{figure}

One question is whether or not enough photons escape to keep the
universe at $z\sim7$ ionized. The critical photon rate density that can balance the recombination of ionized hydrogen is
\begin{equation}
\dot{n}_{\rm ion}^{\rm crit} = \alpha_{\rm B} \, n_e \, n_{\rm HII} \simeq 10^{47.2} C_{\rm HII} (1+z)^3 \, {\rm [s^{-1}\,Mpc^{-3}]},
\end{equation}
where $\alpha_B$ is the case B recombination coefficient,
$n_e$ is the electron number density, $n_{\rm HII}$ is
the number density of ionized hydrogen, and $C_{\rm HII} \equiv \left< n_{\rm HII}^2 \right> / \left< n_{\rm HII} \right>^2$ is the
clumping factor of ionized gas.
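The critical rate density above is straightforward to evaluate; a minimal check, with the $10^{47.2}$ normalization taken from the text:

```python
import math

def log_ndot_crit(z, C_HII=3.0):
    """log10 of the critical ionizing photon rate density [s^-1 Mpc^-3]
    balancing recombinations: 10^47.2 * C_HII * (1+z)^3 (normalization
    from the text)."""
    return 47.2 + math.log10(C_HII) + 3.0 * math.log10(1.0 + z)

# At z = 7 and C_HII = 3 this gives ~10^50.4 s^-1 Mpc^-3, below the
# measured escaping photon rate density of ~10^50.7-50.9 at z ~ 7.
log_crit_z7 = log_ndot_crit(7.0)
```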
For a choice of the clumping factor $C_{\rm HII}\sim3$ \citep{pawlik09,raicevic11} and
a temperature $T=2\times10^4\,{\rm K}$, $\dot{n}_{\rm ion}^{\rm crit} = 10^{50.4}\,[(1+z)/8]^3 \, {\rm s^{-1}\,Mpc^{-3}}$.
Figure~\ref{fig:budget} shows that the escaped photons in the $\textsf{FRU}$\ run can balance the recombination at $z\le 9$.
We find that the photon rate density at $z\sim7$ is $\dot{n}_{\rm ion}=10^{50.7-50.9} \, {\rm s^{-1}\,Mpc^{-3}}$,
consistent with observational findings. \citet{ouchi09} estimated the ionizing photon density to be
$\log \dot{n}_{\rm ion} \simeq 49.8 - 50.3$ by integrating the UV luminosity function (UVLF) down to $M_{\rm UV}=-18$ (lower)
or $L=0$ (upper estimate) with a slope of $\alpha=-1.72$ at $z\sim7$, assuming $\mbox{$f_{\rm esc}$}=20\%$.
If the slope found in the more recent literature \citep{mclure13}, $\alpha=-1.90$,
is used, the maximum photon rate density derived would increase to $\log \dot{n}_{\rm ion} \simeq 50.8$,
which is in agreement with our estimate. Note that the photons escaping from halos of mass
$\mbox{${M}_{\rm vir}$}\ge10^8\,\mbox{${M}_\odot$}$ account for more than 90\% of the total escaping photons
if the baryon-to-star conversion efficiency derived in our simulation
is extrapolated to smaller halos ($\mbox{${M}_{\rm vir}$}<10^{8.5}\,\mbox{${M}_\odot$}$, see below),
and hence our results should be compared with the maximum photon rate density.
Given that their chosen \mbox{$\left< f_{\rm esc} \right>$}\ is close to what our simulation yields (13.8\%),
the agreement implies that the SFRs of the galaxies are well reproduced in our simulation.
Indeed,
we find that our simulated UVLF measured at 1500\AA\ (rest-frame) shows excellent agreement with
the LF with a slope of $\alpha=-1.90$ \citep{mclure13} down to $M_{\rm 1500}=-13$ (Figure~\ref{fig:uvlf}).
Here we neglect the effect of dust extinction, as the galaxies in our sample are very metal-poor
($Z_{\rm star}\lesssim10^{-3}$).


\begin{table}
\caption{Photon number-weighted $f_{\rm esc}$ at $7\le z \lower.5ex\hbox{$\; \buildrel < \over \sim \;$} 15$ from the FRU run}
\centering
\begin{tabular}{@{}cc}
\hline
$\log M_{\rm vir}$ & $\left< f_{\rm esc} \right>$ \\
\hline
8.25 & 0.144 $\pm$ 0.038\\
8.75 & 0.146 $\pm$ 0.064\\
9.25 & 0.148 $\pm$ 0.077\\
9.75 & 0.128 $\pm$ 0.069\\
10.25 & 0.113 $\pm$ 0.079\\
\hline
\label{table2}
\end{tabular}
\end{table}

In Figure~\ref{fig:cstar}, we plot the product of the photon number-weighted escape fraction ($\left< f_{\rm esc} \right>$) and
 the baryon-to-star conversion efficiency ($f_{\star}\equiv \Omega_{\rm m} M_{\rm star}/\Omega_{\rm b} M_{\rm vir}$)
 at $z=7$. Notice that we include all stars within the virial radius of a dark matter halo
 in this measurement.
Since there is little evolution in $\left< f_{\rm esc} \right>$ with redshift (Figure~\ref{fig:fesc_runaway},
 right panel), we combine $\left< f_{\rm esc} \right>$ of the halos in the same mass
 range at $7\le z < 20$ to obtain the mean escape fraction as a function of halo mass (Table~\ref{table2}).
 We then use a simple fit to the mean,
 \begin{equation}
 \log \left< f_{\rm esc} \right> (\mbox{${M}_{\rm vir}$}) \approx -0.510 - 0.039 \log \mbox{${M}_{\rm vir}$}.
 \label{fescg_fit}
 \end{equation}
We limit the fit to the sample with $\mbox{${M}_{\rm vir}$}\ge10^{8.5}\,\mbox{${M}_\odot$}$, where each halo is
 resolved with at least $\sim$ 2000 dark matter particles.
There is a trend that more massive halos contribute more to the total number of ionizing photons per unit mass,
which essentially reflects the fact that low-mass halos are inefficient in forming stars
(see also Figure~\ref{fig:mstar}). The average $\left< f_{\rm esc} \right> f_{\star}$ of different halo masses can be
fitted with
\begin{equation}
\log \left< f_{\rm esc} \right> f_{\star} \approx -7.342 + 0.474\, \log \mbox{${M}_{\rm vir}$},
 \label{fstar_fit}
\end{equation}
shown as the red dashed line in Figure~\ref{fig:cstar}.
We note that $\left< f_{\rm esc} \right> f_{\star}$ becomes as low as $\sim 5\times10^{-4}$
in small halos ($\mbox{${M}_{\rm vir}$}\sim10^{8.5}\,\mbox{${M}_\odot$}$), which is roughly 40 times smaller than the result
from \citet{wise09} ($\left< f_{\rm esc} \right> f_\star \approx0.02$).
The difference can be attributed to two factors.
First, our \mbox{$\left< f_{\rm esc} \right>$}\ is smaller by a factor of $\sim3-4$ than that of \citet{wise09}.
This is probably due to the fact that their cosmological runs start from initial conditions
extracted from adiabatic simulations in which no prior star formation is included.
Since radiative cooling and star formation are suddenly turned on at some redshift,
the gas in the halo rapidly collapses and forms too many stars in their cosmological runs.
This is likely to have resulted in stronger starbursts in the galaxies, leading to a higher escape probability.
Second, for the same reason, $f_{\star}$ is considerably higher in the \citet{wise09} halos than in ours.
For halos of mass $\mbox{${M}_{\rm vir}$}\sim10^{8.5}\,\mbox{${M}_\odot$}$, we find that $f_{\star}\approx0.003$,
which is smaller by a factor of $\sim 10$ than those in \citet{wise09}.
Indeed, we find fairly good agreement with the latest determination of $\left< f_{\rm esc} \right> f_{\star}$ in
halos of $\mbox{${M}_{\rm vir}$}\sim10^{8.5}\,\mbox{${M}_\odot$}$ by \citet{wise14},
who model star formation self-consistently in their cosmological radiation hydrodynamics simulations.

\begin{figure}
 \centering
 \includegraphics[width=8.6cm]{fig13.eps}
 \caption{Product of the stellar mass fraction within the virial radius of a dark matter halo
 ($f_\star=\Omega_{\rm m} \mbox{${M}_{\rm star}$} / \Omega_{\rm b} \mbox{${M}_{\rm vir}$}$) at $z=7$
 and the halo mass-dependent photon production rate-averaged escape fraction from the cosmological simulation with
 runaway OB stars ($\textsf{FRU}$).
 Averages are shown as red empty squares, with the simple regression (dashed line).
 Fewer photons escape per unit mass from smaller halos, reflecting the result that
 star formation is inefficient in the low-mass halos.
 }
 \label{fig:cstar}
\end{figure}

It is worth mentioning that adopting a high spatial resolution (or a small gravitational softening length)
is important to accurately predict the escape fraction.
If the resolution is not high enough to capture
the rapid collapse of gas clouds, the resulting star formation histories would become less episodic,
leading to a longer time delay between the peaks of star formation and escape fraction.
This in turn would reduce the fraction of escaping photons. To examine this issue,
we run two additional simulations with the identical initial condition and other parameters,
but with one fewer or one more level of refinement, corresponding to 8.5 pc or 2.1 pc (physical) resolution, respectively.
We find that the run with the lower resolution yields a factor of two smaller mean escape fraction at $z=9$
($\left< f_{\rm esc} \right>=7.6\%$, see Appendix). In contrast, the higher-resolution run exhibits a comparable mean
escape fraction of $\left< f_{\rm esc} \right>=13.9\%$ at $z=10$, suggesting that the results are reasonably converged
for the parameters used in the $\textsf{FRU}$\ run.


\section{Discussion}

Recent studies show that the escape fraction should be larger than 20\% to re-ionize the
universe by $z=6$ while matching the Thomson optical depth inferred from the CMB
\citep{kuhlen12,shull12,robertson13}. This can be obtained by numerically solving the simple differential
equation for the \mbox{{\sc H ii}}\ bubble,
\begin{equation}
\frac{d Q_{\rm HII} }{dt} = \frac{\dot{n}_{\rm ion}}{\left< n_{\rm H} \right>} - \frac{Q_{\rm HII}}{t_{\rm rec}(C_{\rm HII})},
\end{equation}
where $Q_{\rm HII}$ is the volume filling fraction of the bubble, $\left< n_{\rm H}\right>$ is the comoving mean density
of the universe, and
$t_{\rm rec} (C_{\rm HII})= \left[ C_{\rm HII}\,\alpha_{\rm B}(T)\, f_e\, \left< n_{\rm H} \right> \,(1+z)^3\right]^{-1}$ is the
recombination timescale for a given clumping factor and temperature. Here $f_e$ is a correction factor that accounts
for the additional contribution of singly ($z>4$) or doubly ($z<4$) ionized helium to
the electron number density \citep[e.g.,][]{kuhlen12}.
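The filling-factor equation above can be integrated with a simple explicit step. The sketch below is a toy version with illustrative assumptions (an Einstein-de Sitter expansion rate, approximate $\alpha_{\rm B}$, $\left< n_{\rm H} \right>$, and $f_e$, a Pawlik et al. (2009)-style clumping factor, and a constant escaping photon rate density of $\sim10^{50.8}\,{\rm s^{-1}\,Mpc^{-3}}$ rather than the simulated $\dot{n}_{\rm ion}(z)$):

```python
import math

# Illustrative constants (assumptions, not the paper's exact values)
ALPHA_B = 2.6e-13      # cm^3 s^-1, case B recombination at ~10^4 K
N_H0 = 2.0e-7          # cm^-3, comoving mean hydrogen number density
H0 = 2.2e-18           # s^-1 (~70 km/s/Mpc)
OMEGA_M = 0.27
CM_PER_MPC = 3.086e24

def hubble(z):
    # Matter-dominated approximation, adequate at these redshifts
    return H0 * math.sqrt(OMEGA_M) * (1.0 + z) ** 1.5

def clumping(z):
    # Redshift-dependent clumping factor of the form used in the text
    return 1.0 + math.exp(-0.28 * z + 3.59) if z >= 10 else 3.2

def evolve_QHII(log_ndot_ion, z_start=20.0, z_end=6.0, nstep=20000, f_e=1.08):
    """Integrate dQ/dt = ndot_ion/<n_H> - Q/t_rec from z_start to z_end,
    stepping in redshift with dt = dz / ((1+z) H(z))."""
    nH_per_mpc3 = N_H0 * CM_PER_MPC ** 3   # comoving H atoms per Mpc^3
    Q = 0.0
    dz = (z_start - z_end) / nstep
    z = z_start
    for _ in range(nstep):
        dt = dz / ((1.0 + z) * hubble(z))
        t_rec = 1.0 / (clumping(z) * ALPHA_B * f_e * N_H0 * (1.0 + z) ** 3)
        Q += (10.0 ** log_ndot_ion(z) / nH_per_mpc3 - Q / t_rec) * dt
        Q = min(max(Q, 0.0), 1.0)
        z -= dz
    return Q

# A constant ~10^50.8 s^-1 Mpc^-3 (the z~7 value in the text) largely
# ionizes the volume by z ~ 6 in this toy setup.
Q_final = evolve_QHII(lambda z: 50.8)
```

Passing the simulated, redshift-dependent $\log \dot{n}_{\rm ion}(z)$ in place of the constant would reproduce the kind of $Q_{\rm HII}(z)$ history discussed next.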
We adopt a redshift-dependent clumping factor of $C_{\rm HII} = 1 + \exp(-0.28\, z +3.59)$ at $z\ge10$ or
$C_{\rm HII} = 3.2$ at $z<10$, following \citet{pawlik09}.
Once $Q_{\rm HII}$ is determined, the Thomson optical depth
as a function of redshift can be calculated as
\begin{equation}
\tau_e (z)= \int_0^z c \left< n_{\rm H} \right>\,\sigma_T\,f_e\,Q_{\rm HII}(z') \frac{(1+z')^2 dz'}{H(z')},
\end{equation}
where $\sigma_T$ is the Thomson electron cross section and $H(z)$ is the Hubble parameter.
We repeat this exercise using the ionizing photon density from Figure~\ref{fig:budget} to examine
whether our models provide a reasonable explanation of the reionization history.
For $\dot{n}_{\rm ion}$ at $z<7$, we extrapolate based on the simple fit to the results in Figure~\ref{fig:budget}.
This simple experiment indicates that the universe can be re-ionized by $z=7.25$.
However, the evolution of the photon density from the $\textsf{FRU}$\ run predicts a smaller volume filling fraction of the \mbox{{\sc H ii}}\ bubble
at $z=10$ ($Q_{\rm HII}=12\%$), compared with other analytic models \citep[$Q_{\rm HII}\gtrsim20\%$, e.g.][]{shull12}
that can reproduce the CMB measurement \citep[$\tau_e\sim0.09$,][]{komatsu11}. Consequently, the FRU run yields
a Thomson optical depth of $\tau_e=0.065$, which is consistent with the
CMB measurement only at the 2$\sigma$ level. This implies that more ionizing photons are required to escape from halos at high redshift
to explain the reionization history of the Universe.

\begin{figure}
 \centering
 \includegraphics[width=8.6cm]{fig14.eps}
 \caption{ Importance of the dwarf galaxy population to the Thomson optical depth measurement
 in semi-analytic calculations. The top and middle panels show the escape fraction and
 stellar mass inside the virial radius of a halo as a function of halo mass, respectively,
 which are used to compute the optical depth (the bottom panel).
The measurements from
 our radiation cosmological simulations with runaway stars ($\textsf{FRU}$) are shown as
 blue filled squares with the standard deviations. Empty squares with error
 bars are the results from \citet{wise14}. The optical depth is obtained
 by taking into account the escaping ionizing photons from halos more massive than $\mbox{${M}_{\rm vir}$}$.
 We neglect the contribution from rare massive halos with $\mbox{${M}_{\rm vir}$}>10^{12}\,\mbox{${M}_\odot$}$.
 Different colors in the bottom panel correspond to the results with different assumptions on
 the stellar-to-halo mass relation for minihalos, as indicated in the middle panel.
 The shaded region denotes the Thomson optical depth inferred from the Planck+WMAP
 measurements.
 }
 \label{fig:tau_es}
\end{figure}


The deficiency of ionizing photons may in part be attributed to the fact that
our simulations cannot resolve the collapse of small-mass halos ($\mbox{${M}_{\rm vir}$}\lesssim10^8\,\mbox{${M}_\odot$}$)
due to finite mass resolution. \citet{Paardekooper13} argue that reionization is driven by
dwarf-sized halos of masses $\mbox{${M}_{\rm vir}$}=10^7-10^8\,\mbox{${M}_\odot$}$ with a high $\left< f_{\rm esc} \right>$ of $\approx$0.4--0.9.
Similarly, \citet{wise14} find that the ionizing photons from the minihalos with
$\mbox{${M}_{\rm vir}$}=10^{6.25}-10^{8.25}\,\mbox{${M}_\odot$}$ are crucial for reproducing the Thomson optical
depth from the CMB measurements. In order to examine the importance of the minihalos
in light of our new results, we estimate the optical depth as a function of the minimum halo mass
that can contribute to reionization.
To do so, we use the theoretical halo mass functions at different
redshifts \citep{jenkins01}, convolved with the baryon-to-star conversion efficiency measured at $z=7$ from
the $\textsf{FRU}$\ run for $\mbox{${M}_{\rm vir}$}\ge10^{7.5}\,\mbox{${M}_\odot$}$ and from \citet{wise14} for $\mbox{${M}_{\rm vir}$}<10^{7.5}\,\mbox{${M}_\odot$}$
(orange line in the middle panel of Figure~\ref{fig:tau_es}), to derive the increase in the stellar mass density
with redshift. The number of escaping ionizing photons is then calculated by multiplying the number of
photons produced by the halo mass-dependent escape fraction based on our results and \citet{wise14}
(Figure~\ref{fig:tau_es}, top panel):
\begin{equation}
\log \left< f_{\rm esc} \right>=\left\{
\begin{array}{ll}
 -0.51 - 0.039\,\log \mbox{${M}_{\rm vir}$} & (\log \mbox{${M}_{\rm vir}$} \ge 8.5) \\
 2.669 - 0.413\,\log \mbox{${M}_{\rm vir}$} & (7 \le \log \mbox{${M}_{\rm vir}$} < 8.5) \\
 -0.222 & (\log \mbox{${M}_{\rm vir}$} < 7)\\
\end{array}
\right. .
\end{equation}
We neglect the contribution from rare massive halos with $\mbox{${M}_{\rm vir}$}>10^{12}\,\mbox{${M}_\odot$}$.
Figure~\ref{fig:tau_es} (orange line, bottom panel) shows that
the minihalos of $\mbox{${M}_{\rm vir}$}<10^7\,\mbox{${M}_\odot$}$ can indeed provide enough photons
to match the $\tau_e$ inferred from the CMB measurement.
While the ionizing photons from halos with $\mbox{${M}_{\rm vir}$}>10^7\,\mbox{${M}_\odot$}$ alone give only $\tau_e=0.072$,
the additional photons arising from the minihalos raise the optical depth to 0.122.
However, we note that this depends sensitively on the assumed
baryon-to-star conversion efficiency in the minihalos.
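The piecewise fit above joins continuously at the two break masses, which is easy to verify (coefficients copied from the text):

```python
def log_fesc(log_mvir):
    """Halo mass-dependent log10 <f_esc> used in the tau_e estimate:
    FRU-based fit above 10^8.5 Msun, Wise et al. (2014)-based below."""
    if log_mvir >= 8.5:
        return -0.51 - 0.039 * log_mvir
    if log_mvir >= 7.0:
        return 2.669 - 0.413 * log_mvir
    return -0.222

# The branches match at the break points: log <f_esc> ~ -0.84 at
# log M_vir = 8.5 and -0.222 at log M_vir = 7.
gap_85 = abs((-0.51 - 0.039 * 8.5) - (2.669 - 0.413 * 8.5))
gap_70 = abs((2.669 - 0.413 * 7.0) - (-0.222))
```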
For example, when the stellar
mass--halo mass relation found in the $\textsf{FRU}$\ run is extrapolated to the minihalos
(blue line in the bottom panel), the optical depth for all halos is only $\tau_e=0.073$.
Given that these minihalos would host only a handful of star particles with $m_{\rm star}\sim10^2-10^3\,\mbox{${M}_\odot$}$
in current numerical simulations, it is unclear how the mass resolution affects the conversion efficiency,
and further investigations of star formation in the minihalos will be useful to better understand
their relative contribution to the total ionizing budget.



In our simulation, we approximate that massive stars ($M>8\,\mbox{${M}_\odot$}$) evolve off the main sequence and
explode after 10 Myr. We note that this is roughly the timescale
of the delay between the peaks of star formation and escape fraction.
In reality, SNe can occur as early as $\sim$ 3 Myr for a simple stellar population \citep{schaller92}.
Stellar winds, photo-ionization, and radiation pressure acting on electrons and dust can come into play even earlier.
\citet{walch12} claim that a $10^4\,\mbox{${M}_\odot$}$ molecular cloud of radius 6.4 pc can be dispersed
on a 1--2 Myr timescale by the overpressure of \mbox{{\sc H ii}}\ regions.
Moreover, it is also plausible that ionization front instabilities may lead to a higher escape probability
of ionizing photons \citep{whalen08b}.
If these mechanisms played a role in shaping the evolution of individual molecular clouds, the escape fraction
measured in our simulations would have been higher than 14\%.
In this regard, our photon number-weighted mean is likely to represent the minimum escape of ionizing photons.
When a higher \mbox{$\left< f_{\rm esc} \right>$}\ of 30\% is assumed for the star formation history in the $\textsf{FRU}$\ run,
dark matter halos of $\mbox{${M}_{\rm vir}$}>10^8\,\mbox{${M}_\odot$}$ alone can achieve $\tau_e=0.076$,
suggesting that a more precise determination of the escape fraction is as important
as resolving ultra-faint galaxies with $M_{\rm 1500} > -13$.
Future studies focusing on the interplay between the feedback processes will shed more light
on the reionization history of the Universe.




\section{Conclusions}

The escape fraction of hydrogen ionizing photons is a critical ingredient in the theory of reionization.
Despite its importance, only a handful of studies have examined the escape fraction ($\mbox{$f_{\rm esc}$}$)
of high-$z$ galaxies in a cosmological context \citep{wise09,razoumov10,yajima11,Paardekooper13,wise14}.
To better understand the physics behind the escape of ionizing photons and to quantify \mbox{$f_{\rm esc}$},
we have carried out two zoomed-in cosmological radiation hydrodynamics simulations of a
$3.8\times4.8\times9.6$ Mpc$^3$ (comoving) box with
the \mbox{{\sc \small Ramses}}\ code \citep{teyssier02,rosdahl13}, with high spatial ($\sim$ 4 pc, physical) and
stellar mass (49 $\mbox{${M}_\odot$}$) resolution.
Because energy-based feedback from SN explosions suffers from artificial
radiative cooling when the cooling length is under-resolved, we have implemented a new
mechanical feedback scheme that approximates all stages of a SN explosion,
from the free expansion to the snowplow phase.
With this physically based feedback model,
we have investigated the connection between the regulation of star formation and
the corresponding evolution of the escape of ionizing photons.
\nWe have also explored the relative importance of runaway OB stars to the escape fraction \nby comparing the twin simulations with ($\\textsf{FRU}$) and without ($\\textsf{FR}$) runaways.\nOur findings can be summarized as follows.\n\n\\begin{enumerate}\n\n\\item When a dense cloud begins to form a cluster of stars, the escape fraction is negligible. \nAs energetic explosions of massive stars follow after $\\sim$ 10 Myr, they blow the star-forming gas away,\nincreasing the {\\it instantaneous} escape fraction (\\mbox{$f_{\\rm esc}$}) to $\\gtrsim$10\\%. Although \\mbox{$f_{\\rm esc}$}\\ is kept high in this phase,\nsubsequent star formation is markedly suppressed, \nand only a small number of photons escape from the host dark matter halo (Figure~\\ref{fig:ex}). \nThis time delay between the peak of star formation and the escape fraction is crucial in predicting \nthe actual escape probability of ionizing photons. While the instantaneous \\mbox{$f_{\\rm esc}$}\\ can easily \nattain $\\gtrsim30\\%$ in halos of mass $\\mbox{${M}_{\\rm vir}$} \\ge 10^8\\,\\mbox{${M}_\\odot$}$ on average (Figure~\\ref{fig:fesc_stat}), \nthe photon number-weighted mean of the escape fraction ($\\langle f_{\\rm esc}\\rangle$) is found to be 11.4\\% (Figure~\\ref{fig:fesc_wei}).\n\n\\item \\mbox{$f_{\\rm esc}$}\\ tends to be higher in less massive halos and at lower redshift for a given halo mass (Figure~\\ref{fig:fesc_stat}).\nThis is essentially because less dense and smaller galaxies are more susceptible to SN explosions.\nHowever, the photon production rate-averaged escape fractions show no clear dependence \non halo mass and redshift, again implying that the interplay between star formation and the delay in the onset of \nnegative feedback is more important in determining the actual escape probability. \n\n\\item Absorption of ionizing photons by neutral hydrogen in the ISM is significant (Figure~\\ref{fig:tau}). 
For galaxies \nwith a low escape fraction ($\\mbox{$f_{\\rm esc}$}<10\\%$), the effective optical depth due to the gas within 100 pc \nof each young star particle is found to be $\\tau_{\\rm eff,100pc}\\sim 1.9-3.8$ at $z\\sim8$.\nThe nearby neutral gas alone can reduce the number of ionizing photons by a factor of 7--45 in this case, \ndemonstrating the importance of properly resolving the ISM to predict a more accurate escape fraction.\n\n\\item Our physically based SN feedback effectively regulates star formation. \nOnly 0.1\\% to 10\\% of the baryons are converted into stars in galaxies at $z=7$ (Figure~\\ref{fig:mstar}).\nThe energetic explosions sometimes completely shut down star formation when galaxies are small.\nThe baryon-to-star conversion ratio is smaller in less massive halos. \nConsequently, halos of different masses contribute comparably to the total number \nof ionizing photons escaped by $z=7$ (Figure~\\ref{fig:fesc_wei}).\n\n\n\\item Inclusion of runaway OB stars increases the escape fraction to $\\langle f_{\\rm esc}\\rangle=13.8\\%$ from 11.4\\% \n(Figure~\\ref{fig:fesc_runaway}). Since the runaway OB stars tend to move to lower density regions, \nphotons from them have a higher chance of escaping. Moreover, as the runaway OB stars explode in a less \ndense medium, feedback from SNe becomes more effective, resulting in reduced star formation in halos of $\\mbox{${M}_{\\rm vir}$} \\ge 10^9\\,\\mbox{${M}_\\odot$}$, \ncompared with the $\\textsf{FR}$\\ run. Because of the balance between the increase in $\\langle f_{\\rm esc}\\rangle$ and the decrease \nin star formation, the total number of ionizing photons escaped by $z=7$ is found to be comparable in\nthe two runs.\n\n\n\\item \nA sufficient number of photons escape from the dark \nmatter halos with $\\mbox{${M}_{\\rm vir}$}\\ge10^8\\,\\mbox{${M}_\\odot$}$ to keep the universe ionized at $z\\le9$. 
\nThe simulated UV luminosity function with a faint-end slope of $-1.9$ is consistent with observations.\n\\end{enumerate}\n\n\n\n\n\n\n\n\\acknowledgements{\nWe thank an anonymous referee for constructive suggestions that improved this paper.\nWe are grateful to Julien Devriendt, Sam Geen, Chang-Goo Kim, Eve Ostriker, Adrianne Slyz,\nand John Wise for insightful discussions.\nSpecial thanks go to Romain Teyssier and Joakim Rosdahl for sharing their radiation \nhydrodynamics code with us. Computing resources were provided in part by the NASA High-End\nComputing (HEC) Program through the NASA Advanced\nSupercomputing (NAS) Division at Ames Research Center and in part by \nthe Horizon-UK program through DiRAC-2 facilities. \nThe research is supported by NSF grant AST-1108700 \nand NASA grant NNX12AF91G.\n}\n\n\n\\newpage\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Acknowledgement}\nWe would like to thank J.Y. Kim for useful discussions.\nThis work was supported in part by the Basic Science Research Institute \nProgram, Ministry of Education, Project Nos. BSRI-98-2441 and \nBSRI-98-2413. 
\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzdqan b/data_all_eng_slimpj/shuffled/split2/finalzzdqan new file mode 100644 index 0000000000000000000000000000000000000000..f215519511726b75327aee8997c056509ee9043e --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzdqan @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{thintro}\n\n In quantum transport, the current correlation function\ncontains more information than the average current\n \\cite{Bla001,Imr02,Bee0337,Naz03}.\nExperiments often measure the power spectrum,\nthe Fourier transform of the correlation function\n\\cite{Deb03203,Del09208,Bas10166801,Bas12046802,Del18041412}.\nIn general, the nonequilibrium noise spectrum of the transport current\nis asymmetric and does not satisfy\nthe detailed--balance relation\n\\cite{Eng04136602,Agu001986,\nNak19134403,Ent07193308,Rot09075307,Mao21014104}.\nMoreover, mesoscopic systems with discrete energy levels\nexhibit strong Coulomb interaction,\nand the contacting electrodes in general\ninduce memory effects \\cite{Bla001,Imr02,Bee0337,Naz03}.\nTheoretical methods that are practical for general mesoscopic systems,\nwith Coulomb interaction and memory effects on quantum transport,\nare therefore needed.\n\n As far as real--time dynamics is concerned,\nthe quantum master equation approach is the most popular.\nJin--Zheng--Yan established the exact fermionic\nhierarchical equation of motion approach \\cite{Jin08234703}.\nThis nonperturbative\ntheory has been widely used in studying nanostructures\nwith strong correlations, including Kondo problems\n\\cite{Zhe09164708,Li12266403,Wan13035129,Che15033009}.\nRecently, Yan's group further developed the dissipaton equation of motion (DEOM)\ntheory \\cite{Yan14054105,Jin15234108,Yan16110306,Jin20235144}.\nThe underlying algebra addresses the hybrid bath dynamics.\nThe current correlation function can now be evaluated,\neven in the Kondo regime 
\\cite{Jin15234108,Mao21014104}.\nNote also that\nZhang's group established an exact fermionic master equation,\nbut only for noninteracting systems \\cite{Tu08235311,Jin10083013,Yan14115411}.\n\n\n\n\n\nIn this work, we extend the conventional\ntime-nonlocal master equation (TNL-ME)\nto cover an efficient evaluation of\nthe transport current noise spectrum.\nThe key step is to identify the underlying\ncurrent-related density operators.\nThis converts the TNL-ME\ninto a set of three coupled equations, the time-local equation-of-motion (TL-EOM) formalism.\nThe latter has advantages in, for example, initial value problems\nand nonequilibrium non-Markovian correlation functions\n \\cite{Jin16083038}.\nThe underlying algebra here is closely related to the DEOM theory\n\\cite{Yan14054105,Jin15234108,Yan16110306,Jin20235144}.\nThe TL-EOM provides not only real-time dynamics,\n but also analytical formulae for both the transport current and the noise spectrum.\n\n\n\n The remainder of this paper is organized as follows.\nIn \\Sec{thNMK-ME}, we introduce the transport model and\n the energy-dispersed TL-EOM formalism.\nIn \\Sec{thcurr}, combining the TL-EOM and\nthe dissipaton decomposition technique,\nwe present an efficient method for calculating the current noise spectrum.\nThe time-dependent current formula is first given in \\Sec{thsubcurr}.\n We then derive the current correlation function\n and straightforwardly obtain the analytical formula of the noise\n spectrum in \\Sec{thsubcurrcf}\n and \\Sec{thsubnoise}, respectively.\n\nThe detailed derivation is given in \\App{thappsw}.\nWe further give discussions and remarks on the resulting noise spectrum formula\nin \\Sec{thRemarks}.\nFor illustration, we apply the present method to demonstrate the quantum noise spectra\nof the transport through interacting double quantum dots in \\Sec{thnum}.\n The numerical results are further compared with\nthe accurate ones based on the DEOM theory.\nFinally, we conclude this work with 
\\Sec{thsum}.\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Non-Markovian master equation formalisms}\n\n\\label{thNMK-ME}\n\n\\subsection{Model Hamiltonian}\n\nConsider the electron transport through the central nanostructure system\n contacted by the two electrode reservoirs (left $\\alpha = {\\rm L}$ and\n right $\\alpha = {\\rm R}$).\nThe total Hamiltonian reads\n\\begin{align}\\label{Htot0}\nH_{\\T}\\!=\\!H_{\\tS}\\!+\\!\\sum_{\\alpha k}\\varepsilon_{\\alpha k}\n c^{\\dg}_{\\alpha k} c_{\\alpha k}\\!+\\!\\sum_{\\alpha u k}\\!\\left(t_{\\alpha u k}a^\\dg_u\n c_{\\alpha k} \\!+\\!{\\rm H.c.}\\right).\n\\end{align}\nThe system Hamiltonian $H_{\\tS}$ includes electron-electron interaction,\ngiven in terms of local electron creation $ a^{\\dg}_{u}$\n(annihilation $ a_{u}$) operators of the spin-orbit state $u$.\n The second term describes the two electrodes ($H_{\\B}$) modeled as\n reservoirs of noninteracting electrons,\nand $c^{\\dg}_{\\alpha k}$ ($c_{\\alpha k}$) denotes the creation (annihilation) operator\nof an electron in the $\\alpha$-reservoir with momentum $k$ and\nenergy $ \\varepsilon_{\\alpha k}$.\nThe last term is the standard tunneling Hamiltonian between the system and the electrodes\nwith the tunneling coefficient $t_{\\alpha u k}$.\nThroughout this work, we adopt units of $e=\\hbar=1$.\n\n\n\nFor convenience, we reexpress the tunneling Hamiltonian as\n\\be\\label{Hsb1}\n H'\\!=\\!\\sum_{\\alpha u }\\left( a^{+}_{u} F^-_{\\alpha u}\n + F^+_{\\alpha u} a^{-}_{u} \\right)\\!=\\!\\sum_{\\alpha u \\sigma} a^{\\bar\\sigma}_{u}\n \\wti F^{\\sigma}_{\\alpha u},\n\\ee\nwhere $ F^-_{\\alpha u}\n=\\sum_k t_{\\alpha u k} c_{\\alpha k}=( F^+_{\\alpha u})^\\dg$\nand\n $\\wti F^{\\sigma}_{\\alpha u} \\equiv \\bar\\sigma F^{\\sigma}_{\\alpha u}$,\nwith $\\sigma =+,-$ ($\\bar\\sigma$ is the opposite sign to $\\sigma$).\nAs is well known, the effect of the reservoirs on the transient dynamics of the\ncentral system is characterized\nby the bath correlation function,\n 
\\begin{align}\\label{ct}\n C^{(\\sigma)}_{\\alpha uv} (t )\n & \\!=\\! \\la F^\\sigma_{\\alpha u} (t) F^{\\bar\\sigma}_{\\alpha v} (0) \\ra_{\\B}\n \\!= \\!\\int_{-\\infty}^{\\infty}\\!\\!\\frac{{\\mathrm d} E}{2\\pi}\\,e^{\\sigma i E t}\n \\Gamma^{\\sigma}_{\\alpha uv}(E),\n \\end{align}\n where $\\la \\cdots\\ra_{\\B}$ stands for the statistical average\nover the bath (electron reservoirs) in thermal equilibrium.\n \n The second identity in \\Eq{ct} relates\n the bath correlation function to\n the hybridization spectral density\n $J_{\\alpha u v}(E)\n\\equiv2\\pi\\sum_k t_{\\alpha u k}t^\\ast_{\\alpha v k}\\delta(E-\\varepsilon_{\\alpha k})\n=J^\\ast_{\\alpha vu}(E)$.\nHere, we introduced\n\\be\\label{cw-real}\n\\Gamma^{\\sigma}_{\\alpha uv}(E)\\equiv n^{\\sigma}_\\alpha(E) J^{\\sigma}_{\\alpha uv}(E),\n\\ee\nwith $J^+_{\\alpha vu}(E) = J^-_{\\alpha uv}(E) = J_{\\alpha uv}(E)$,\n$n^{+}_\\alpha(E)$ the Fermi distribution\nfunction of\nthe $\\alpha$-reservoir and $n^{-}_\\alpha(E)=1-n^{+}_\\alpha(E)$.\n\n\n\nFor later use, we introduce the dissipaton decomposition for the hybridizing bath \\cite{Yan14054105}\nin the energy domain,\n\\begin{align}\\label{dissp}\n \\wti F^{\\sigma}_{\\alpha u} \\equiv \\bar\\sigma F^{\\sigma}_{\\alpha u}\n \\equiv \\!\\int_{-\\infty}^{\\infty}\\!\\!\\frac{{\\mathrm d} E}{2\\pi}\n f^{\\sigma}_{\\alpha u}(E).\n\\end{align}\nThe so-called dissipatons \\{$f^{\\sigma}_{\\alpha u}(E)$ \\} satisfy\n\\begin{align*}\\label{all_notation}\n\\la f^{\\sigma}_{\\alpha u }(E,t)f^{\\bar\\sigma}_{\\beta v }(E',0)\\ra\n = - \\delta_{\\alpha\\beta}\\delta(E-E')\n e^{\\sigma i E t}\\Gamma^\\sigma_{\\alpha u v}(E).\n\\end{align*}\n It is easy to verify that the above decomposition preserves\nthe bath correlation function given by \\Eq{ct}.\n\n\n\n\n\n\n\n\n\\subsection{TNL-ME and TL-EOM}\n\\label{thnmkme}\n\nLet us outline the\nTNL-ME and the equivalent energy-dispersed TL-EOM for weak system-reservoir coupling.\nIt is well-known that the 
primary central system\n is described by the reduced density operator, $\\rho(t)\\equiv{\\rm tr}_{\\B}[\\rho_{\\T }(t)]$,\ni.e., the partial trace of the total density operator $\\rho_{\\T}$ over the bath\nspace. The corresponding\ndynamics is determined by the TNL-ME,\n $\\dot\\rho(t) = -i[H_{\\tS},\\rho(t)]\n - \\int_{t_0}^t\\!\\!{\\mathrm d}\\tau\n \\Sigma(t-\\tau)\\rho(\\tau)$.\n It describes non-Markovian dynamics, with\nthe self-energy $\\Sigma(t-\\tau)$ containing the memory effect.\nAssuming weak system-bath coupling and performing the Born approximation but without\nthe Markovian approximation,\nthe self-energy, expanded up to second order\nin the tunneling Hamiltonian, is expressed as\n$\\Sigma(t-\\tau)=\\big\\la{\\cal L}'(t) e^{-i{\\cal L}_{\\tS}(t-\\tau)}{\\cal L}'(\\tau)\n \\big\\ra_{\\B}$\nin the $H_{\\B}$-interaction picture.\nThe resulting TNL-ME is explicitly given by\n\\be\\label{TNL-ME}\n\\dot\\rho(t)= -i{\\cal L}_{\\tS}\\rho(t) -i\\sum_{\\alpha u\\sigma}\n \\big[a^{\\bar\\sigma}_u, \\varrho^{\\sigma}_{\\alpha u}(t)\\big],\n \\ee\n\nwith ${\\cal L}_{\\tS} \\hat O=[H_{\\tS},\\hat O]$ and\n\\be\\label{TNL-ME1}\n \\varrho^{\\sigma}_{\\alpha u}(t)\n= -i\\int_{t_0}^t\\!\\!{\\mathrm d}\\tau\\, e^{-i{\\cal L}_{\\tS} (t-\\tau)}\n {\\cal C}^{(\\sigma)}_{\\alpha u}(t-\\tau) \\rho(\\tau),\n\\ee\nwhere\n\\be\\label{calCt}\n {\\cal C}^{(\\sigma)}_{\\alpha u}(t) \\hat O\n \\equiv \\sum_{v} \\big[C^{(\\sigma)}_{\\alpha uv}(t)a^\\sigma_{v}\\hat O\n - C^{(\\bar\\sigma)\\ast}_{\\alpha uv}(t)\\hat O a^{\\sigma}_v\\big].\n \\ee\nThis depends on the bath correlation function, \\Eq{ct}.\nIn \\Eq{TNL-ME}, the first term describes the intrinsic coherent dynamics\nand the second term\ndepicts the non-Markovian dissipative effect of the coupled reservoirs.\n\n Let $\\rho(t)\\equiv \\Pi(t,t_0)\\rho(t_0)$ be\nthe formal solution to \\Eq{TNL-ME}.\nNote that $\\Pi(t,t_0)\\neq\\Pi(t,t_1)\\Pi(t_1,t_0)$.\nIn other words, the conventional quantum regression theorem\nis not directly 
applicable for\nthe calculation of the correlation functions.\n Alternatively, with the introduction of\n${\\bm\\rho}(t)\n\\!\\equiv\\!\\left[\\rho(t),\\rho^{\\pm}_{\\alpha u}(E,t) \\right]^T$,\n the TNL-ME (\\ref{TNL-ME}), with \\Eq{TNL-ME1},\n can be converted to TL-EOM \\cite{Jin16083038}\n \\bsube\\label{TL-EOM}\n \\begin{align}\n \\dot\\rho(t)\n &=\\!-i{\\cal L}_{\\tS}\\rho(t)-\\!i\\!\\sum_{\\alpha u\\sigma}\\!\\int\\! \\frac{{\\rm d}E}{2\\pi}\n \\big[ a^{\\bar\\sigma}_u,\\rho^{\\sigma}_{\\alpha u}(E,t)\\big],\n\\label{rho0t}\n\\\\\n\\dot\\rho^{\\sigma}_{\\alpha u}(E,t)\n&=\\!-i({\\cal L}_{\\tS}\\!-\\!\\sigma E)\\rho^{\\sigma}_{\\alpha u}(E,t)\n -i{\\cal C}^{(\\sigma)}_{\\alpha u}(E) \\rho(t),\n \\label{rho1t}\n \\end{align}\n\\esube\nwhere\n ${\\cal C}^{(\\sigma)}_{\\alpha u}(E)=\\int\\! {\\rm d}t\\, e^{-\\sigma iEt} {\\cal C}^{(\\sigma)}_{\\alpha u}(t)$;\n cf.\\,\\Eq{calCt},\n \\be\\label{calCw}\n {\\cal C}^{(\\sigma)}_{\\alpha u}(E) \\hat O\\equiv\\sum_v \\left[\\Gamma^{(\\sigma)}_{\\alpha u v}(E)\n a^{\\sigma}_v \\hat O-\\hat O \\Gamma^{(\\bar\\sigma)\\ast}_{\\alpha u v}(E)a^{\\sigma}_v\\right].\n \\ee\nAs implied in \\Eq{TNL-ME1}, we have\n \\begin{align}\\label{varrho-phi}\n \\varrho^{\\sigma}_{\\alpha u}(t) = \\int\\frac{{\\rm d}E}{2\\pi}\\rho^{\\sigma}_{\\alpha u}(E,t).\n \\end{align}\n\n\n Equation (\\ref{TL-EOM}) can be summarized as\n$\\dot {\\bm\\rho}(t)={\\bf{\\Lambda}}\\bm{\\rho}(t)$\nwhich leads to the solution of ${\\bm\\rho}(t)=\\bm\\Pi(t,t_0)\\bm{\\rho}(t_0)$ with\n$\\bm\\Pi(t,t_0)=e^{{\\bm\\Lambda}(t-t_0)}$.\nThe TL-EOM space propagator satisfies\nthe time translation invariance, i.e., $\\bm\\Pi(t,t_0)=\n\\bm\\Pi(t,\\tau)\\bm\\Pi(\\tau,t_0)$.\nIn other words, the TL-EOM \\Eq{TL-EOM} is a mathematical\nisomorphism of the conventional ``Schr\\\"{o}dinger equation''\nand applicable to any physically supported initial state $\\rho_{\\T}(t_0)$.\nIn particular, the total system-plus-bath composite density operator $\\rho_{\\T}(t)$ maps to\n 
${\\bm{\\rho}}(t)$, including the nonequilibrium steady state mapping,\n $\\rho^{\\rm st}_{\\T} \\rightarrow {\\bm{\\rho}}^{\\rm st}$.\nThis protocol can be extended to system correlation functions and\ncurrent correlation functions.\nThis is the advantage of TL-EOM (\\ref{TL-EOM}) over TNL-ME (\\ref{TNL-ME}).\nThe details are as follows.\n\n\n\n\n\n\n\\section{Current and noise spectrum}\n\\label{thcurr}\n\n\\subsection{The current formula}\n\\label{thsubcurr}\n\nFirst, we identify\n$\\rho^{\\sigma}_{\\alpha u}( E,t)$ in \\Eq{TL-EOM}\nas the current-related density operator.\nBy definition, the lead-specified current operator is\n$\\hat I_{\\alpha}=-{\\rm d}\\hat N_{\\alpha}\/{\\rm d}t=-i[\\hat N_{\\alpha},H']$,\nwith $\\hat N_{\\alpha}\\equiv\\sum_k c^\\dg_{\\alpha k}c_{\\alpha k}$\nbeing the number operator.\nThe tunneling Hamiltonian $H'$ is given by \\Eq{Hsb1} with \\Eq{dissp}.\nWe immediately obtain\n\\begin{align}\\label{currI_hat}\n \\hat I_{\\alpha}\n&= -i \\sum_{\\sigma u} \\ti a^{\\sigma}_{ u}\n {\\wti F}^{\\bar\\sigma}_{\\alpha u}\n =-i\\sum_{\\sigma u}\\int\\! \\frac{{\\mathrm d} E}{2\\pi} \\ti a^{\\sigma}_u\n f^{\\bar\\sigma}_{\\alpha u}( E),\n\\end{align}\nwhere $\\ti a^{ \\sigma}_{ u}\\equiv \\sigma a^{\\sigma}_{ u}$.\nThe average current reads\n\\begin{align}\n I_{\\alpha}(t)\n &\\!=\\!{\\rm Tr}[\\hat I_{\\alpha}\\rho_{\\T}(t)]\n \\!=\\!-i\\!\\sum_{\\sigma u}\\! \\int\\! 
\\frac{{\\mathrm d} E}{2\\pi}{\\rm tr}_{\\rm s}\n [ \\ti a^{\\sigma}_{ u}\\rho^{\\bar\\sigma}_{\\alpha u }( E,t)],\n\\label{curr-ddo}\n\\end{align}\nwhere\n\\begin{align}\n\\rho^{\\sigma}_{\\alpha u}( E,t)\n &\\equiv\n {\\rm tr}_{\\B}\\big[f^{\\sigma}_{\\alpha u}( E)\\rho_{\\T}(t)\\big].\n\\label{phi1}\n\\end{align}\nOn the other hand,\nperforming the bath subspace trace (${\\rm tr}_{\\B}$) over\n$\\dot{\\rho}_{\\T}(t)=-i[H_{\\tS}+H_{\\B}+H',\\rho_{\\T}(t)]$,\n we immediately obtain \\Eq{rho0t}, where $\\rho^{\\sigma}_{\\alpha u}( E,t)$\n is exactly that given by \\Eq{phi1}.\n In other words,\n TL-EOM (\\ref{TL-EOM}) provides not only the real-time dynamics, but also the transient current,\n\\Eq{curr-ddo}, with \\Eqs{varrho-phi} and (\\ref{TNL-ME1}),\n\\be\\label{curr-exp}\n I_{\\alpha}(t)\n=-\\!\\sum_{\\sigma u} \\!\\!\\int_{t_0}^t\\!\\! {\\mathrm d}\\tau\\, {\\rm tr}_{\\rm s}\n [ \\ti a^{\\sigma}_{ u}e^{-i{\\cal L}_{\\tS} (t-\\tau)}{\\cal C}^{(\\bar\\sigma)}_{\\alpha u}(t-\\tau) \\rho(\\tau) ].\n\\ee\nHere, we set $\\rho^{\\pm}( E,t_0\\!\\rightarrow\\!-\\infty)=0$ for the initially\n decoupled system and reservoir.\n\n\n\n\n\n\n\\subsection{Current correlation function}\n\\label{thsubcurrcf}\n\nWe now turn to the lead-specified steady-state current correlation function,\n\\begin{align}\\label{CorrI}\n \\la \\hat I_{\\alpha}(t)\\hat I_{\\alpha'}\\!(0)\\ra\n &={\\rm Tr}\\big[\\hat I_{\\alpha} \\rho_{\\T}(t; {\\alpha'})\\big],\n\\end{align}\nwith\n\\be\n\\rho_{\\T}(t; {\\alpha'})= e^{-i{\\cal L}_{\\T}t} (\\hat I_{\\alpha'}\\rho^{\\rm st}_{\\T})\n\\equiv e^{-i{\\cal L}_{\\T}t} \\rho_{\\T}(0; {\\alpha'}).\n\\ee\nIts TL-EOM correspondence reads\n\\be\n\\bm\\rho (t; {\\alpha'})= e^{{\\bm\\Lambda}t}(\\hat I_{\\alpha'} \\bm\\rho^{\\rm st} )\n\\equiv e^{{\\bm\\Lambda}t}{\\bm\\rho}(0; {\\alpha'}).\n\\ee\nHere,\n$\\bm{\\rho}(t;\\alpha')\n\\!\\equiv\\!\\left[\\rho(t;\\alpha'),\\rho^{\\pm}_{\\alpha u}(E,t;\\alpha')\\right]^T$,\nwith the propagator being defined in \\Eq{TL-EOM}\nand the 
initial values via \\Eq{curr-ddo} being\n\\bsube\\label{vecI02}\n\\begin{align}\n\\label{rho0alpha}\n&\\rho(0;\\alpha')\n \\equiv {\\rm tr}_{\\B} \\big(\\hat I_{\\alpha'}\\rho^{\\rm st}_{\\T}\\big)\n=-i\\!\\sum_{\\sigma u}\\! \\int\\! \\frac{{\\rm d} E}{2\\pi}\n \\ti a^{\\bar\\sigma}_{ u}\\bar\\rho^{\\sigma}_{\\alpha' u }(E) ,\n\\\\\n&\\rho^{\\sigma}_{\\alpha u}(E,0;\\alpha')\n\\equiv{\\rm tr}_{\\B}\\big[f^{\\sigma}_{\\alpha u}( E)\n\\hat I_{\\alpha'}\\rho^{\\rm st}_{\\T}\\big]\n\\nl&\\hspace{5.5 em}\n= -i\\delta_{\\alpha\\alpha'}\\!\\sum_v \\Gamma^{\\sigma}_{\\alpha u v}(E)\n \\ti a^{\\sigma}_{ v}\\bar\\rho,\n\\label{phiI0}\n\\end{align}\n\\esube\nwhere $\\bar\\rho\\equiv \\rho^{\\rm st}$ and\n$\\bar\\rho^{\\sigma}_{\\alpha' u }(E)\n\\equiv [\\rho^{\\sigma}_{\\alpha' u }(E)]^{\\rm st}$.\nWe can then evaluate \\Eq{CorrI} as\n\\begin{align}\\label{CorrI1}\n \\la \\hat I_{\\alpha}(t)\\hat I_{\\alpha'}\\!(0)\\ra\n&=-i\\sum_{\\sigma u}\\!\\int\\! \\frac{{\\mathrm d} E}{2\\pi}{\\rm tr}_{\\rm s}\n [ \\ti a^{\\sigma}_{ u}\\rho^{\\bar\\sigma}_{\\alpha u }( E,t;\\alpha')].\n\\end{align}\n\n\n\\subsection{Quantum noise spectrum}\n\\label{thsubnoise}\n\nThe lead--specified shot noise spectrum is given by\n\\be\\label{Sw0}\n S_{\\alpha\\alpha'}(\\omega)=\\int_{-\\infty}^{\\infty}\\!\\!{\\rm d}t\\,\n e^{i\\omega t} \\La \\delta{\\hat I}_\\alpha(t)\n \\delta{\\hat I}_{\\alpha'}(0)\\Ra,\n\\ee\nwith $\\delta{\\hat I}_\\alpha(t)\\equiv{\\hat I}_\\alpha(t)-I^{\\rm st}_{\\alpha}$;\ni.e.,\n\\be\\label{corr-curr}\n \\La \\delta{\\hat I}_\\alpha(t)\\delta{\\hat I}_{\\alpha'}(0)\\Ra\n=\\La {\\hat I}_\\alpha(t){\\hat I}_{\\alpha'}(0)\\Ra\n -I^{\\rm st}_{\\alpha}I^{\\rm st}_{\\alpha'}.\n\\ee\nThe steady--state current,\n$I^{\\rm st}_{\\alpha}\\equiv {\\rm Tr}(\\hat I_{\\alpha}\\bar\\rho_{\\T})$,\nsatisfies\n$I^{\\rm st}_{\\rm L}=-I^{\\rm st}_{\\rm R}$.\n To proceed, we apply the initial values, \\Eq{vecI02},\nand express \\Eq{CorrI1} in terms of\n\\begin{align}\n &\\la \\hat 
I_{\\alpha}(t)\\hat I_{\\alpha'}\\!(0)\\ra\n=\\delta_{\\alpha\\alpha'}\\!\\sum_{\\sigma u v}\\!\n {\\rm tr}_{\\rm s}[a^{\\sigma}_{ u}e^{-i{\\cal L}_{\\tS} t}\n C^{(\\bar\\sigma)}_{\\alpha uv}(t)a^{\\bar\\sigma}_{v} \\bar\\rho]\n\\nl&\\quad\n -\\sum_{\\sigma u} \\!\\int_{t_0}^t\\! {\\mathrm d}\\tau\\,\n {\\rm tr}_{\\rm s}[\\ti a^{\\sigma}_{ u} e^{-i{\\cal L}_{\\tS} (t-\\tau)}\n {\\cal C}^{(\\bar\\sigma)}_{\\alpha u}(t-\\tau) \\rho(\\tau;\\alpha') ].\n\\label{curr-curr}\n\\end{align}\nAs detailed in the Appendix,\nthe first term describes the contribution from \\Eq{phiI0},\nwhile the second term involves $\\rho(\\tau;\\alpha')$,\nwith the initial value of \\Eq{rho0alpha}.\n\n\n\n To resolve $\\rho(\\tau;\\alpha')$,\none can exploit either the TNL-ME (\\ref{TNL-ME})\nor the TL-EOM (\\ref{TL-EOM}).\nThe related resolvent reads\n\\be\n \\Pi(\\omega)=[i({\\cal L}_{\\tS}-\\omega)+\\Sigma(\\omega)]^{-1},\n\\ee\nwith\n $\\Sigma(\\omega)=\\sum_{\\alpha}\n\\big[{\\cal J}^{<}_{\\alpha}(\\omega)-{\\cal J}^{>}_{\\alpha}(\\omega)\\big]$,\n\\bsube\\label{caljomega}\n \\begin{align}\n{\\cal J}^{>}_{\\alpha}(\\omega)\\hat O&\\equiv\\!-\\!\\sum_{\\sigma u}\\ti a^{\\bar\\sigma}_{ u}\n\\big[{\\cal C}^{(\\sigma)}_{\\alpha u}(\\omega-{\\cal L}_{\\tS})\\hat O\\big],\n\\\\\n{\\cal J}^{<}_{\\alpha}(\\omega)\\hat O&\\equiv\\!-\\!\\sum_{\\sigma u}\n\\big[{\\cal C}^{( \\sigma)}_{\\alpha u}(\\omega-{\\cal L}_{\\tS})\\hat O\\big]\\ti a^{\\bar\\sigma}_{ u},\n\\end{align}\n\\esube\nwhere [cf.\\,\\Eq{calCt}]\n\\begin{align}\\label{calCw2}\n {\\cal C}^{(\\sigma)}_{\\alpha u}(\\omega) \\hat O\n \\!=\\!\\! \\sum_{v}\n \\! 
\\big[C^{(\\sigma)}_{\\alpha uv}(\\omega)(a^\\sigma_{v}\\!\\hat O)\n \\!- C^{(\\bar\\sigma)\\ast}_{\\alpha uv}( -\\omega)(\\hat O a^{\\sigma}_v)\\big],\n \\end{align}\nand $C^{(\\sigma)}_{\\alpha u v}( \\omega)\\equiv\n\\int_{0}^{\\infty}\\!dt\\, e^{i\\omega t}C^{(\\sigma)}_{\\alpha u v}(t)$.\nDenote further\n\\bsube\\label{calwomega}\n \\begin{align}\n{\\cal W}^{>}_{\\alpha}(\\omega)\\hat O&\\equiv\\sum_{ \\sigma uv}\n\\big[ \\ti a^{\\bar\\sigma}_{u},C^{(\\sigma)}_{\\alpha uv }(\\omega-{\\cal L}_{\\tS})\n (a^\\sigma_{v}\\hat O) \\big],\n\\\\\n{\\cal W}^{<}_{\\alpha}(\\omega)\\hat O&\\equiv\n \\sum_{ \\sigma uv} \\big[ \\ti a^{\\bar\\sigma}_{u},C^{(\\bar\\sigma)\\ast}_{\\alpha uv }({\\cal L}_{\\tS}-\\omega)\n (\\hat O a^{\\sigma}_{v})\\big].\n\\end{align}\n\\esube\nNote that\n\\bsube\\label{caljw}\n\\begin{align}\n \\big[{\\cal W}^{<}_{\\alpha}(\\omega)\\hat O\\big]^\\dg\n&={\\cal W}^{>}_{\\alpha}(-\\omega)\\hat O^\\dg,\n\\\\\n \\big[{\\cal J}^{<}_{\\alpha}(\\omega)\\hat O\\big]^\\dg\n&={\\cal J}^{>}_{\\alpha}(-\\omega)\\hat O^\\dg.\n\\end{align}\n\\esube\nMoreover,\nwe have $I^{\\rm st}_{\\alpha}\n={\\rm tr}_{\\tS}\\big[{\\cal J}^{>}_{\\alpha}(0)\\bar{\\rho}\\big]\n={\\rm tr}_{\\tS}\\big[{\\cal J}^{<}_{\\alpha}(0)\\bar{\\rho}\\big]$,\nfor the steady current,\nas inferred from \\Eq{curr-exp}.\n\n\n\nFinally, we obtain \\Eq{Sw0},\nwith \\Eqs{corr-curr} and (\\ref{curr-curr}),\nthe expression\n(see the Appendix for the derivation),\n\\begin{align}\\label{Sw}\nS_{\\alpha\\alpha'}(\\omega)\n &= {\\rm tr}_{\\rm s}\\Big\\{{\\cal J}^{>}_{\\alpha}(\\omega)\n \\Pi(\\omega)\\big[{\\cal J}^{>}_{\\alpha'}(0)\n + {\\cal W}^{>}_{\\alpha'}(\\omega)\\big]\\bar\\rho\n\\nl&\\qquad\n+{\\cal J}^{<}_{\\alpha'}(-\\omega)\\Pi(-\\omega)\\big[{\\cal J}^{<}_{\\alpha}(0) +\n {\\cal W}^{<}_{\\alpha}(-\\omega) \\big]\\bar\\rho\\Big\\}\n\\nl&\\quad\n +2\\delta_{\\alpha'\\alpha}{\\rm Re}\\! 
\\sum_{\\sigma u v}\n {\\rm tr}_{\\rm s}\\big[ a^{\\bar\\sigma}_u C^{(\\sigma)}_{\\alpha uv }(\\omega-{\\cal L}_{\\tS})\n a^{\\sigma}_{v}{\\bar\\rho}\n \\big].\n \\end{align}\nThis is the key result of this paper,\nwith $\\omega>0$ and $\\omega<0$\ncorresponding to energy\nabsorption and emission processes,\nrespectively \\cite{Eng04136602,Agu001986,Jin15234108,Nak19134403}.\n\n\n\n\n\n\n\n\n\\subsection{Discussions and remarks}\n\\label{thRemarks}\n\n\n\n\n\nIn mesoscopic quantum transport,\ncharge conservation reads\n$-\\dot{Q}(t)=I_{\\rm L}(t)+I_{\\rm R}(t)\\equiv I_{\\rm dis}(t)$,\nwith the displacement current arising from the change of the charge $Q(t)$\nin the central system. The corresponding\nfluctuation spectrum,\n$S_{\\rm c}(\\omega)=\\int_{-\\infty}^{\\infty} \\!dt\\,\n e^{i\\omega t} \\La \\delta{\\dot{Q}(t)} \\delta{\\dot{Q}(0)}\\Ra$,\n can then be evaluated via \\cite{Jin11053704}\n\\begin{align}\\label{Scw}\nS_{\\rm c}(\\omega)&=S_{\\rm LL}(\\omega)+S_{\\rm RR}(\\omega)+2{\\rm Re}[S_{\\rm LR}(\\omega)].\n\\end{align}\n For the auto-correlation noise spectrum,\n\\Eq{Sw} with $\\alpha'=\\alpha$,\nwe have [\\cf\\Eq{caljw}]\n\\begin{align}\\label{Sw-auto}\nS_{\\alpha\\alpha}(\\omega)\n &=2\\,{\\rm Re}\\,{\\rm tr}_{\\rm s}\\big\\{{\\cal J}^{>}_{\\alpha}(\\omega) \\Pi(\\omega)\\big[{\\cal J}^{>}_{\\alpha}(0)\n + {\\cal W}^{>}_{\\alpha}(\\omega)\\big]\\bar\\rho\\big\\}\n\\nl&\\quad\n +2\\,{\\rm Re}\\sum_{\\sigma u v}\n {\\rm tr}_{\\rm s}\\big[ a^{\\bar\\sigma}_u C^{(\\sigma)}_{\\alpha uv }(\\omega-{\\cal L}_{\\tS})\n a^{\\sigma}_{v}{\\bar\\rho}\\big].\n \\end{align}\n \nAlternatively, $S_{\\rm c}(\\omega)$ can also be calculated via $S_{\\rm c}(\\omega)=e^2\\omega^2 S_{\\rm N}(\\omega)$,\nwhere $S_{\\rm N}(\\omega)\\equiv {\\cal F}[\\delta \\hat N(t)\\delta \\hat N(0)]$,\nwith $\\hat N =\\sum_u a^\\dg_u a_u$. 
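The relation $S_{\rm c}(\omega)=e^2\omega^2 S_{\rm N}(\omega)$ invoked here can be made explicit. A minimal sketch, assuming a stationary steady state and correlations that decay at $t\to\pm\infty$ (so boundary terms vanish under integration by parts):

```latex
% Stationarity gives <dQ(t)/dt dQ(0)/dt> = -(d^2/dt^2)<dQ(t) dQ(0)>, hence
\begin{align*}
 S_{\rm c}(\omega)
 &= \int_{-\infty}^{\infty}\!{\rm d}t\, e^{i\omega t}
    \langle \delta\dot{Q}(t)\,\delta\dot{Q}(0)\rangle
  = -\int_{-\infty}^{\infty}\!{\rm d}t\, e^{i\omega t}\,
    \frac{{\rm d}^2}{{\rm d}t^2}\langle \delta Q(t)\,\delta Q(0)\rangle \\
 &= \omega^2 \int_{-\infty}^{\infty}\!{\rm d}t\, e^{i\omega t}
    \langle \delta Q(t)\,\delta Q(0)\rangle
  = e^2\omega^2 S_{\rm N}(\omega), \qquad Q = e\hat N .
\end{align*}
```

Integrating by parts twice supplies the two factors of $(-i\omega)$, yielding the $\omega^2$ prefactor; $Q=e\hat N$ then gives the charge-fluctuation form.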
The spectrum of the charge fluctuation $S_{\\rm N}(\\omega)$\ncan be evaluated straightforwardly by the established formula for the non-Markovian\n correlation function of the system operators in our previous work \\cite{Jin16083038}.\n\n\n\nThe total current in experiments reads\n $I(t) =a I_{\\rm L}(t)- bI_{\\rm R} (t)$,\n with the junction capacitance parameters ($a,b\\geq0$) satisfying $a+b=1$\n \\cite{Bla001,Wan99398,Mar10123009}.\nIn the wide-band limit, $a=\\frac{\\Gamma_{\\rm R}}{\\Gamma_{\\rm L}+\\Gamma_{\\rm R}}$\n and $b=\\frac{\\Gamma_{\\rm L}}{\\Gamma_{\\rm L}+\\Gamma_{\\rm R}}$ \\cite{Wan99398}.\nThe total current noise spectrum\ncan be calculated via either\n \\be\\label{Swtotal}\n S(\\omega) = a^2S_\\text{LL}(\\omega)+b^2S_\\text{RR}(\\omega)\n-2ab\\,{\\rm Re}[S_\\text{LR}(\\omega)],\n\\ee\nor\n\\be\\label{Swtotal2}\n S(\\omega) = aS_\\text{LL}(\\omega)+bS_\\text{RR}(\\omega)\n-ab\\,S_{\\rm c}(\\omega).\n\\ee\n\n\n\nThe present method is a second-order theory, applicable for weak\nsystem-reservoir coupling, i.e., $\\Gamma\\lesssim k_{\\rm B}T$. This describes the\nelectron sequential tunneling (ST) processes.\nThe resulting noise formula, \\Eq{Sw},\nis in principle similar to that obtained in Ref.\\,\\onlinecite{Eng04136602}.\nThe main advantage of \\Eq{Sw}\nis that the involved superoperators are well defined in \\Eq{caljomega}\nand \\Eq{calwomega}.\nOne only needs matrix operations, in which\n the Liouville operator ${\\cal L}_{\\tS}$ is transformed into energy differences\n in the eigenstate basis $\\{|n\\ra\\}$ ($H_{\\tS}|n\\ra=\\varepsilon_n|n\\ra$), e.g.,\n $\\la n|f({\\cal L}_{\\tS})\\hat Q|m\\ra=f(\\varepsilon_n-\\varepsilon_m)Q_{nm}$.\n\n\nIn \\Eq{Sw},\nthe memory effect enters through\nthe frequency dependence in the last term and also in ${\\cal J}^{\\lgter}_{\\alpha}(\\omega)$\nand ${\\cal W}^{\\lgter}_{\\alpha}(\\omega)$. 
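The eigenbasis rule quoted in this subsection, $\la n|f({\cal L}_{\tS})\hat Q|m\ra=f(\varepsilon_n-\varepsilon_m)Q_{nm}$, amounts to an elementwise operation on matrices. A minimal numerical sketch follows; the two-level spectrum and the operator $\hat Q$ below are illustrative assumptions, not the DQD model of the next section.

```python
import numpy as np

# Sketch of <n| f(L_S) Q |m> = f(eps_n - eps_m) * Q_nm in the eigenbasis of H_S.
def apply_liouvillian_function(f, eps, Q):
    """Elementwise f(eps_n - eps_m) * Q_nm, with eps the eigenvalues of H_S."""
    diff = eps[:, None] - eps[None, :]   # matrix of Bohr frequencies
    return f(diff) * Q

eps = np.array([0.0, 2.0])               # assumed two-level spectrum
Q = np.array([[0.0, 1.0],
              [1.0, 0.0]], dtype=complex)  # assumed system operator

# Example: a resolvent-like function f(x) = 1/(x - i)
out = apply_liouvillian_function(lambda x: 1.0 / (x - 1j), eps, Q)
# The off-diagonal element picks up eps_0 - eps_1 = -2: out[0, 1] = 1/(-2 - 1j)
```

This elementwise evaluation replaces any explicit construction of the Liouville-space superoperator, which is what makes formulas such as \Eq{Sw} cheap to evaluate.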
In the Markovian limit, \\Eq{Sw} reduces to\n\\begin{align}\\label{Swmk}\nS^{\\rm Mar}_{\\alpha\\alpha'}(\\omega)\n &= {\\rm tr}_{\\rm s}\\Big\\{{\\cal J}^{>}_{\\alpha}(0)\n \\Pi_0(\\omega)\\big[{\\cal J}^{>}_{\\alpha'}(0)\n + {\\cal W}^{>}_{\\alpha'}(0)\\big]\\bar\\rho\n\\nl&\\qquad\n+{\\cal J}^{<}_{\\alpha'}(0)\\Pi_0(-\\omega)\\big[{\\cal J}^{<}_{\\alpha}(0) +\n {\\cal W}^{<}_{\\alpha}(0) \\big]\\bar\\rho\\Big\\}\n\\nl&\\quad\n +2\\delta_{\\alpha'\\alpha}{\\rm Re}\\! \\sum_{\\sigma u v}\n {\\rm tr}_{\\rm s}\\big[ a^{\\bar\\sigma}_u C^{(\\sigma)}_{\\alpha uv }(-{\\cal L}_{\\tS})\n a^{\\sigma}_{v}{\\bar\\rho}\n \\big],\n \\end{align}\nwhere $\\Pi_0(\\omega)=[i({\\cal L}_{\\tS}-\\omega)+\\Sigma(0)]^{-1}$\nwith $\\Sigma(0)=\\sum_{\\alpha}\n\\big[{\\cal J}^{<}_{\\alpha}(0)-{\\cal J}^{>}_{\\alpha}(0)\\big]$.\nThe involved superoperators were defined in \\Eq{caljomega}\n and \\Eq{calwomega}.\nThe widely studied Markovian problems \\cite{Xu02023807,Li04085315,Li05205304,Li05066803,Luo07085325,Mar10123009}\nalso adopted the Redfield approximation, neglecting\nthe bath dispersion $\\Lambda^{(\\pm)}_{\\alpha uv}(\\omega)$\nin \\Eq{appcw} (the imaginary part of $C^{(\\pm)}_{\\alpha uv}(\\omega)$).\n One can then easily check that\n${\\rm Re}[S^{\\rm Mar}_{\\alpha\\alpha'}(\\omega)]={\\rm Re}[S^{\\rm Mar}_{\\alpha'\\alpha}(-\\omega)]$ with $\\alpha\\neq\\alpha'$\nand $S^{\\rm Mar}_{\\alpha\\alpha}(\\omega)=S^{\\rm Mar}_{\\alpha\\alpha}(-\\omega)$ based on \\Eq{Swmk}.\nIn other words, Markovian transport corresponds to\nthe symmetrized spectrum.\n\n\n\n\n\\section{Numerical demonstrations}\n\\label{thnum}\n\n\n\nTo verify the validity of the established method,\nwe will apply it to demonstrate the quantum noise spectrum of the transport\ncurrent through interacting double quantum dots (DQDs).\n \nAll the numerical results will be further compared with exact results based on\nthe DEOM theory.\n\n\nThe total composite Hamiltonian of the DQDs\n contacted by the two 
electrodes is described by \\Eq{Htot0}.\nThe Hamiltonian for the DQDs in series is specified by\n\\be\\label{Hs-cqd}\n H_{\\tS}= \\varepsilon_{l}a^\\dg_la_l + \\varepsilon_{r}a^\\dg_ra_r\n +U \\hat n_l \\hat n_r+\\Omega\\big(a^\\dg_{l} a_{r}+a^\\dg_{r} a_{l}\\big),\n\\ee\nwhere $U$ is the inter-dot Coulomb interaction, $\\Omega$ describes the\ninter-dot electron coherent transition,\n and $\\hat n_u=a^\\dg_u a_u$.\nThe involved states of the double dot are $|0\\ra$ for the empty double dot,\n$|l\\ra$ for the left dot occupied, $|r\\ra$\nfor the right dot occupied, and $|2\\ra\\equiv|lr\\ra$ for the two dots occupied.\nUnder the assumption of\ninfinite intra-dot Coulomb interaction and a large Zeeman\nsplitting in each dot,\nwe consider at most one electron in each dot.\nIn this space, we have $a_{l}=|0\\ra\\la l|+|r\\ra\\la2|$\nand $a_{r}=|0\\ra\\la r|-|l\\ra\\la 2|$.\nApparently, the single-electron occupied states\nof $|l\\ra$ and $|r\\ra$ are not the eigenstates of the\nsystem Hamiltonian $H_{\\tS}$.\nThe system exhibits an intrinsic coherent Rabi oscillation,\ncharacterized by the coherent coupling strength $\\Omega$.\nThe corresponding Rabi frequency denoted by $\\Delta$ is\nthe energy difference between the two eigenstates ($\\varepsilon_{\\pm}$),\ne.g., $\\Delta=\\varepsilon_{+}-\\varepsilon_{-}=2\\Omega$ for\nthe degenerate DQDs ($\\varepsilon_{l}=\\varepsilon_{r}=\\varepsilon_{0}$) considered here.\nThe characteristics of the Rabi coherence have been well studied in the symmetrized noise spectrum\n\\cite{Luo07085325,Agu04206601,Mar11125426,Shi16095002}.\n\n\n\nNow we apply the present TL-EOM approach\nto calculate the quantum noise spectra\nof the transport current through DQDs.\nAs we mentioned above, the TL-EOM method is suitable for weak system-reservoir\ncoupling, which appropriately\ndescribes the electron ST processes.\nWe thus consider the ST regime\nwhere the energy levels in DQDs are within the bias\nwindow ($\\mu_{\\rm L}>\\varepsilon_{0}>\\mu_{\\rm 
R}$).\nWithout loss of generality,\nwe set an antisymmetric bias voltage with $\\mu_{\\rm L}=-\\mu_{\\rm R}=eV\/2$\nand the energy level $\\varepsilon_{0}=0$.\nThe wide-band limit is considered\nby setting $W_{\\alpha}= 300\\Gamma$\nin \\Eq{jw}.\n\n We adopt the total coupling strength $\\Gamma=\\Gamma_{\\rm L}+\\Gamma_{\\rm R}$ as the unit of\n energy and focus on the symmetric coupling strength\n $\\Gamma_{\\rm L}=\\Gamma_{\\rm R}=0.5\\Gamma$ ($a=b=1\/2$)\nin this work.\n Furthermore, we test the upper limit of the system-reservoir coupling,\nwhere it is comparable to the temperature ($\\Gamma\\approx k_{\\rm B}T$), by setting\n$k_{\\rm B}T=0.5\\Gamma$ here.\nDetails of the other parameters are given in the figure captions.\n\n\n\\begin{figure}\n\\includegraphics[width=1.0\\columnwidth]{fig1.eps}\n\\caption{(Color online)\nThe total and the lead-specified current noise spectra in the noninteracting case ($U=0$),\nbased on the TL-EOM method (black solid line) and the exact DEOM theory (red dashed line).\n(a) The total current noise spectrum, $S(\\omega)$.\n(b) The central current fluctuation spectrum, $S_{\\rm c}(\\omega)$.\n(c) The auto-correlation noise spectrum of the $R$-lead, $S_{\\rm RR}(\\omega)$.\n(d) The cross-correlation noise spectrum, ${\\rm Re}[S_{\\rm LR}(\\omega)]$.\n The other parameters (in units of $\\Gamma$) are $\\Omega=4$\n and $eV=16$.\n}\n\\label{fig1}\n\\end{figure}\n\n\n\\begin{figure}\n\\includegraphics[width=1.0\\columnwidth,angle=0]{fig2.eps}\n\\caption{(Color online)\nThe total and the lead-specified current noise spectra with\nstrong inter-dot Coulomb interaction ($U=18\\Gamma$),\nbased on the TL-EOM method (black solid line) and the exact DEOM theory (red dashed line).\n(a) The total current noise spectrum, $S(\\omega)$.\n(b) The central current fluctuation spectrum, $S_{\\rm c}(\\omega)$.\n(c) The auto-correlation noise spectrum of the $R$-lead, $S_{\\rm RR}(\\omega)$.\n(d) The cross-correlation noise spectrum, ${\\rm Re}[S_{\\rm 
LR}(\\omega)]$.\n The other parameters are the same as in \\Fig{fig1}. }\n \\label{fig2}\n\\end{figure}\n\n\n\\begin{figure}\n\\includegraphics[width=1.0\\columnwidth,angle=0]{fig3.eps}\n\\caption{(Color online)\nThe total and the lead-specified current noise spectra in the resonance regime\n($\\varepsilon_{\\pm}=\\pm\\Omega=\\pm8\\Gamma=\\pm eV\/2$)\nwith strong inter-dot Coulomb interaction ($U=18\\Gamma$),\nbased on the TL-EOM method (black solid line) and the exact DEOM theory (red dashed line).\n(a) The total current noise spectrum, $S(\\omega)$.\n(b) The central current fluctuation spectrum, $S_{\\rm c}(\\omega)$.\n(c) The auto-correlation noise spectrum of the $R$-lead, $S_{\\rm RR}(\\omega)$.\n(d) The cross-correlation noise spectrum, ${\\rm Re}[S_{\\rm LR}(\\omega)]$.\n The other parameters are the same as in \\Fig{fig1}. }\n \\label{fig3}\n\\end{figure}\n\n\nThe numerical results\nfor the total and the lead-specified current noise spectra are\ndisplayed in Figs.\\,\\ref{fig1}, \\ref{fig2} and \\ref{fig3}.\nThey correspond to the noninteracting ($U=0$), strongly inter-dot-interacting ($U=18\\Gamma$),\nand resonance ($U=18\\Gamma$ and $\\varepsilon_{+\/-}=\\mu_{\\rm L\/R}$) regimes, respectively.\nFurthermore, the evaluations are based on\nthe present TL-EOM method (black solid line) and the exact DEOM theory (red dashed line).\nEvidently, the TL-EOM method reproduces well, at least qualitatively,\nall the basic features of the quantum noise spectra in the entire frequency range.\nDetailed discussions are given below.\n\n\nFigure \\ref{fig1} depicts the noise spectra\nin the absence of the inter-dot Coulomb interaction ($U=0$).\nThe characteristics are as follows:\n(\\emph{i}) The well-known quasi-steps around\nthe energy resonances,\n$\\omega=\\pm\\omega_{\\alpha \\pm}\\equiv\\pm|\\varepsilon_{\\pm}-\\mu_\\alpha|$,\nemerge in the total noise spectrum $S(\\w)$, the displacement spectrum $S_{c}(\\w)$,\nand the diagonal component, exemplified by $S_{\\rm RR}(\\w)$;\nsee the arrows in 
\\Fig{fig1}(a)--(c).\nThe aforementioned feature\narises from the non-Markovian dynamics\nof the electrons in the $\\alpha$-electrode\ntunneling into and out of the DQDs,\naccompanied by\nenergy absorption ($\\omega>0$) and emission ($\\omega<0$), respectively.\n(\\emph{ii}) In addition,\nthe Rabi resonance at $\\omega=\\pm\\Delta\\equiv \\pm (\\varepsilon_{+}-\\varepsilon_{-})$ appears\nin\n$S(\\omega)$ [\\Fig{fig1}(a)] and $S_{\\rm RR}(\\omega)$ [\\Fig{fig1}(c)] as dips,\nwhereas in\n${\\rm Re}[S_{\\rm LR}(\\omega)]$ [\\Fig{fig1}(d)] as peaks.\nOn the other hand, in $S_{\\rm c}(\\omega)$,\nthe aforementioned dips and peaks are accidentally canceled out [see \\Fig{fig1}(b)]\nin the absence of Coulomb interaction ($U=0$).\n\n\nFigure \\ref{fig2} depicts the noise spectra in the presence of\n strong inter-dot Coulomb interaction ($U=18\\Gamma$).\n(\\emph{iii}) In contrast to \\Fig{fig1}(b),\nthe displacement current noise spectrum $S_{\\rm c}(\\omega)$\n now displays the Rabi coherence at $\\omega=\\pm\\Delta$ [see \\Fig{fig2}(b)].\nWhile the Rabi peaks are enhanced in ${\\rm Re}[S_{\\rm LR}(\\omega)]$\n [see \\Fig{fig2}(d)],\n the original Rabi dips in \\Fig{fig1}(c) become a peak-dip profile\nin $S_{\\rm RR}(\\omega)$ [see \\Fig{fig2}(c)].\n(\\emph{iv}) Moreover, the Coulomb-assisted transport channels ($\\varepsilon_{\\pm}+U$)\nproduce new non-Markovian quasi-steps around $\\omega=\\pm\n\\omega_{\\alpha {\\rm u}\\pm}\\equiv\\pm|\\varepsilon_{\\pm}+U-\\mu_\\alpha|$\nin the total, displacement, and auto-correlation current\nnoise spectra, as shown in \\Fig{fig2}(a)--(c).\n\n\n\nIn \\Fig{fig3}, we highlight the characteristics of the noise spectra\nin the resonance regime ($\\varepsilon_{+\/-}=\\mu_{\\rm L\/R}$)\nby increasing the coherent coupling strength $\\Omega$.\n(\\emph{v})\nCompared with \\Fig{fig2}, the Rabi signal\nin the absorption noise spectrum\nat $\\omega=\\Delta$ is remarkably enhanced, while\nthe signal in the emission one at 
$\\omega=-\\Delta$\nis negligibly small. Similar behavior has been explored,\nin isolation\nfrom competing mechanisms, in the Kondo-resonance\nemission noise spectrum \\cite{Bas12046802, Del18041412,Mao21014104}.\n\n\nThe above absorptive versus emissive feature can\n be understood in terms of the steady-state occupations, from the following two aspects:\n(1) Away from the energy resonance ($\\mu_{\\rm L}>\\varepsilon_{\\pm}>\\mu_{\\rm R}$),\n the probabilities of the single-electron occupied states are nearly the same,\n$\\bar\\rho_{++}\\cong\\bar\\rho_{--}$.\nThe resulting energy absorption and emission are equivalent in the noise spectrum.\n(2) In the energy-resonance ($\\varepsilon_{+\/-}=\\mu_{\\rm L\/R}$) region,\nthe stationary state is very different.\nThe lower-energy state $|-\\ra$ carries the majority of the occupation, i.e., $\\bar\\rho_{--}\\gg\\bar\\rho_{++}$.\nThus, the Rabi feature in absorption is much stronger than\nthat in the emission noise.\n\n\n\n\n\n\n\n\n\\section{Summary}\n\\label{thsum}\n\n\nIn summary, we have presented an efficient TL-EOM approach for the quantum noise\nspectrum of the transport current through interacting mesoscopic systems.\nThe established method is\nbased on the transformation of the second-order non-Markovian master equation described by\nTNL-EM into the energy-dispersed\n TL-EOM formalism by introducing the current-related density operator.\nThe resulting analytical formula for the current noise spectrum\ncan characterize nonequilibrium transport, including the electron-electron Coulomb interaction and\nthe memory effect.\n\n\n\n\nWe have demonstrated the proposed method on transport through an interacting-quantum-dot system,\nand found good agreement with the exact results over a broad range of parameters.\nThe numerical calculations are based on both\nthe present TL-EOM method and the exact DEOM theory.\nWe find that all the basic features of the lead-specified noise spectra in the entire frequency range,\nincluding the energy-resonance and 
Coulomb-assisted non-Markovian\nquasi-steps, and the intrinsic coherent Rabi signal, at least qualitatively,\nare reconciled well with the accurate results.\nAs a perturbative theory, the present TL-EOM is applicable in the\n weak system-reservoir coupling ($\\Gamma\\lesssim k_{\\rm B}T$) regime,\n dominated by sequential tunneling processes.\n\n Other parameters such as the bias voltage and Coulomb interaction,\nare rather flexible.\n\n\n\n\n\n\n\n\n\n\n\n\n\\acknowledgments\nWe acknowledge helpful discussions with\n X. Q. Li.\n The support from the Ministry of Science and Technology of China (No. 2021YFA1200103)\n and the Natural Science Foundation of China\n(Grant No. 11447006) is acknowledged.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nAnswering the question of precisely what distinguishes our experience with\nquantum as opposed to classical physical phenomena has historically been a\ncentral element of the overall project of interpreting quantum theory. For\n\\citet[]{schrodinger1935}, for instance, the sole distinguishing feature of\nquantum theory was none other than entanglement, while for Feynman the one and\nonly quantum mystery was self-interference \\citep[vol. 3,\n 1-1]{feynman1964}. The question continues to occupy many. However in much of\nthe more recent literature it has taken on a different form. That is, it has\nbecome one of specifying a set of appropriately motivated constraints or\n`principles' that serve to distinguish quantum from classical\ntheory. \\citet*[]{clifton2003}, for instance, prove a theorem which they argue\nshows quantum mechanics to be essentially characterisable in terms of a small\nnumber of information-theoretic constraints. 
\\citet[]{spekkens2007}, meanwhile,\nshows that features often thought of as distinctively quantum can be manifested\nin a toy classical theory to which one adds a principled restriction on the\nmaximal obtainable knowledge of a system.\\footnote{For a discussion of both\n \\citeauthor[]{clifton2003}'s and \\citeauthor[]{spekkens2007}' results, and\n of the project in general, see \\citet[]{myrvold2010}; and see also\n \\citet[]{felline2016}.}\n\nOne feature that quantum and classical theory have in common is that the\ncorrelations manifested between the subsystems of a combined system satisfy the\ncondition that the marginal probabilities associated with local experiments on a\nsubsystem are independent of which particular experiments are performed on the\nother subsystems. It is a consequence of this condition that it is impossible to\nuse either a classically correlated or entangled quantum system to signal faster\nthan light. For this reason the condition is referred to as the `no-signalling'\ncondition or principle, even though the condition is not a relativistic\nconstraint \\emph{per se}.\n\nQuantum and classical theory do not exhaust the conceivable ways in which the\nworld could be. The world could be such that neither quantum nor classical\ntheory are capable of adequately describing the correlations between subsystems\nof combined systems. In particular the world could be such that correlations\n\\emph{stronger} than quantum correlations are possible within it. In a landmark\npaper, \\citet[]{popescu1994} asked the question of whether all such correlations\nmust violate the no-signalling condition. The surprising answer to this question\nis no. As they showed, there do indeed exist conceivable correlations between\nthe subsystems of combined systems that are stronger than the strongest possible\nquantum correlations---i.e. 
such that they exceed the so-called `Tsirelson\nbound' \\citep[]{tsirelson1980}---and yet non-signalling.\n\n\\citeauthor[]{popescu1994}'s result raises the question of whether some\nmotivated principle or principles can be given which would pick out quantum\ntheory---or at least some restricted subset of theories which includes quantum\ntheory---from among the space of conceivable non-signalling physical theories in\nwhich correlations at or above the Tsirelson bound occur. This question has\ndeveloped into an active research program. A particularly important result\nemerging from it is that of \\citet[]{pawlowski2009}, who show that one can in\nfact derive the Tsirelson bound from a principle they call `information\ncausality', which they describe as a generalisation of no-signalling applicable\nto experimental setups in which the subsystems of a combined system\n(e.g. spatially separated labs) may be subluminally communicating classical\ninformation with one another. \\citeauthor[]{pawlowski2009} conjecture that\ninformation causality may be a foundational principle of nature.\n\nBelow I will argue that, suitably interpreted \\citep[][]{bub2012}, the principle\ncan be regarded as a useful and illuminating answer to the question of what the\nTsirelson bound expresses about correlations which exceed it. However I will\nargue that if one wishes to think of information causality as a fundamental\nprinciple of nature---in the sense that theories which violate the principle\nshould thereby be regarded as unphysical or in some other sense\nimpossible---then it requires more in the way of motivation than has hitherto\nbeen given.\n\nWhat has typically been appealed to previously to motivate the principle is the\nintuition that a world in which information causality is not satisfied would be\n`too simple' \\citep[p. 1101]{pawlowski2009}, or `too good to be true'\n(\\citealt[p. 180]{bub2012}, \\citealt[p. 
187]{bub2016}); that it would allow one\nto ``implausibly'' access remote data \\citep[ibid.]{pawlowski2009}, and that\n``things like this should not happen'' \\citep[p. 429]{pawlowski2016}. I will\nargue below that these statements are unsatisfactorily vague. Nevertheless I\nwill argue that they gesture at something that is importantly right; although\nthey are right in, perhaps, a different sense than their authors envision.\n\nMore specifically, in contrast to \\citet[]{bub2012}, who in his otherwise\nilluminating analysis of information causality argues that it is misleadingly\ncharacterised as a generalisation of the no-signalling principle, I will argue\nthat information causality can indeed be regarded as generalising no-signalling\nin a sense. To clarify this sense I will draw on the work of\nDemopoulos,\\footnote{\\label{fn:demo}I am referring to the chapter ``Quantum\n Reality'' of Demopoulos's monograph \\nocite{demopoulosForth}\\emph{On\n Theories}, which is currently being prepared for posthumous publication.}\nwho convincingly shows that no-signalling can itself be thought of as a\ngeneralisation, appropriate for an irreducibly statistical theory such as\nquantum mechanics, of Einstein's principle of the mutually independent\nexistence of spatially distant things. Einstein regarded this principle as\nnecessary for the very possibility of `physical thought', and argued that it is\nviolated by quantum mechanics \\citep[p. 187]{howard1985}. However, suitably\ngeneralised and interpreted as a constraint on physical practice, Demopoulos\nconvincingly argues that Einstein's principle is in that sense satisfied both\nin Newtonian mechanics (despite its being an action-at-a-distance theory), and\nindeed (somewhat ironically\\footnote{Demopoulos's `judo-like' argumentative\n manoeuvre is reminiscent of Bell's \\citep[cf.][p. 
41]{shimony1984}.}) that\nit is satisfied in quantum mechanics, wherein it is expressed by none other\nthan the no-signalling condition.\n\nComing back to information causality, I will then argue that it can likewise be\nthought of as a further generalisation of Einstein's principle that is\nappropriate for a theory of communication. As I will clarify, in the context of\nthe experimental setups to which the principle is applicable, a failure of\ninformation causality would imply an ambiguity in the way one distinguishes\nconceptually between the systems belonging to a sender and a receiver of\ninformation. This ambiguity (arguably) makes communication theory as we know it\nin the context of such setups impossible, similarly to the way in which the\nfailure of the principle of mutually independent existence (arguably) makes\nphysical theory as we know it impossible.\n\nBefore beginning let me emphasise that the general approach represented by the\ninvestigation into information causality is only one of a number of\nprinciple-theoretic approaches that one can take regarding the question of how\nto distinguish quantum from super-quantum theories. In the kind of approach\nexemplified by the investigation into information causality, one focuses on\nsets of static correlation tables associated with quantum and super-quantum\ntheories, and in particular one disregards the dynamics of (super-)quantum\nsystems. There is another family of principle-theoretic approaches to the\nquestion, however, wherein a richer framework is considered that does include\ndynamics.\\footnote{For further references, as well as an accessible\n description of one of these reconstructions of quantum theory, see\n \\citet[]{koberinski2018}.} \\citeauthor[]{popescu1994}'s seminal\n\\citeyearpar[]{popescu1994} investigation is an example of the former type of\napproach, though they themselves consider the latter, dynamical, approach to\nhave the potential for deeper insight. 
For my part I do not consider any\nparticular approach to be superior. Principle-theoretic approaches to the\ncharacterisation of quantum theory augment our understanding of the world by\nilluminating various aspects of it to us. Which particular aspect of the world\nis illuminated by an investigation will depend upon the particular\nquestion---and the framework which defines it---that is asked.\\footnote{Thanks\n to Giulio Chiribella for expressing something like this statement in answer\n to a question posed to him at the workshop `Contextuality: Conceptual\n Issues, Operational Signatures, and Applications', held at the Perimeter\n Institute in July, 2017.} I am highly skeptical of the idea that any one\nframework is sufficient by itself to illuminate all. Rather, these different\nframeworks of analysis should be seen as conveying to us information---in\ngeneral neither literal nor complete---regarding different aspects of one and\nthe same reality.\n\nThe rest of this paper will proceed as follows: I will introduce\nPopescu-Rohrlich (PR) correlations in \\S\\ref{sec:prcorr}. In \\S\\ref{sec:game} I\nwill introduce the `guessing game' by which the principle of information\ncausality is standardly operationally defined. The principle of information\ncausality itself will be introduced in \\S\\ref{sec:ic}, wherein I will also\ndescribe how it can be used to derive the Tsirelson bound. I will argue in that\nsection that information causality has not been sufficiently motivated to play\nthe role of a foundational principle of nature, and in the remainder of the\npaper I will consider how one might begin to provide it with such a\nmotivation. 
This analysis begins in \\S\\ref{sec:demopoulos} where I describe an\nargument, due to Demopoulos, to the effect that the no-signalling condition can\nbe viewed as a generalisation, appropriate to an irreducibly statistical theory,\nof Einstein's principle of mutually independent existence interpreted\nas a constraint on physical practice. Then in \\S\\ref{sec:howposs} I argue that\na promising route toward successfully motivating information causality is to in\nturn consider it as a further generalisation of no-signalling that is\nappropriate to a theory of communication. I describe, however, some important\nobstacles that must yet be overcome if the project of establishing information\ncausality as a foundational principle of nature is to succeed.\n\n\\section{Popescu-Rohrlich correlations}\n\\label{sec:prcorr}\n\nConsider a correlated state $\\sigma$ of two two-level\nsubsystems.\\footnote{Elements of the exposition in this and the next section\n have been adapted from \\citet[]{bub2012,bub2016} and \\citet[]{pawlowski2009}.}\nLet Alice and Bob each be given one of the subsystems, and instruct them to\ntravel to distinct distant locations. Let $p(A, B|a, b)$ be the probability that\nAlice and Bob obtain outcomes $A$ and $B$, respectively, after measuring their\nlocal subsystems with the respective settings $a$ and $b$. If $A,B \\in \\{\\pm\n1\\}$, the expectation value of the outcome of their combined measurement is\ngiven by: $$\\langle a, b \\rangle = \\sum_{i, j \\in \\{1,-1\\}} (i \\cdot j) \\cdot\np(i, j|a, b),$$ where $A = i$ and $B = j$. 
Less concisely, this is:\n\\begin{align*}\n\\langle a, b \\rangle & = 1 \\cdot p(1,1|a, b) - 1 \\cdot p(1,\\text{-}1|a, b) - 1\n\\cdot p(\\text{-}1,1|a, b) + 1 \\cdot p(\\text{-}1,\\text{-}1|a, b) \\\\\n& = p(\\mbox{same}|a, b) - p(\\mbox{different}|a, b).\n\\end{align*}\nSince $p(\\mbox{same}|a, b)$ + $p(\\mbox{different}|a, b)$ = 1, it follows that\n$\\langle a, b \\rangle$ + $2 \\cdot p(\\mbox{different}|a, b)$ = 1, so\nthat: $$p(\\mbox{different}|a, b) = \\frac{1 - \\langle a, b \\rangle}{2}.$$\nSimilarly, we have that $$p(\\mbox{same}|a, b) = \\frac{1 + \\langle a, b\n \\rangle}{2}.$$\n\nNow imagine that $\\sigma$ is such that the probabilities for the results of\nexperiments with settings $a, b, a', b'$, where $a'$ and $b'$ are different from\n$a$ and $b$ but arbitrary \\citep[p. 382]{popescu1994}, are:\n\\begin{align}\n \\label{eqn:prprobs}\n p(1,1|a,b) & = p(\\text{-}1,\\text{-}1|a,b) = 1\/2, \\nonumber \\\\\n p(1,1|a,b') & = p(\\text{-}1,\\text{-}1|a,b') = 1\/2, \\nonumber \\\\\n p(1,1|a',b) & = p(\\text{-}1,\\text{-}1|a',b) = 1\/2, \\nonumber \\\\\n p(1,\\text{-}1|a',b') & = p(\\text{-}1,1|a',b') = 1\/2.\n\\end{align}\nIn other words, if at least one of their settings is one of $a$ or $b$, then\nAlice's and Bob's results are guaranteed to be the same. Otherwise they are\nguaranteed to be different. These correlations are called `PR' correlations\nafter \\citet{popescu1994}.\n\nAlice's marginal probability $p(1_A|a,b)$ of obtaining the outcome 1 given\nthat she measures $a$ and Bob measures $b$ is defined as: $p(1_A,1_B|a,b)$ +\n$p(1_A,\\text{-}1_B|a,b)$. The no-signalling condition requires that her marginal\nprobability of obtaining 1 is the same irrespective of whether Bob measures $b$\nor $b'$, i.e. that $p(1_A|a,b)$ = $p(1_A|a,b')$, in which case we can write her\nmarginal probability simply as $p(1_A|a)$. 
In general, no-signalling requires\nthat\n\\begin{align}\n \\label{eqn:nosig}\n p(A|a,b) & = p(A|a,b'), & p(A|a',b) & = p(A|a',b'), \\nonumber \\\\\n p(B|a,b) & = p(B|a',b), & p(B|a, b') & = p(B|a', b').\n\\end{align}\nThe reader can verify that the PR correlations \\eqref{eqn:prprobs} satisfy\nthe no-signalling condition \\eqref{eqn:nosig}.\n\nIf we imagine trying to simulate the PR correlations \\eqref{eqn:prprobs} with\nsome bipartite general non-signalling system $\\eta$, then the probability of a\nsuccessful simulation (assuming a uniform probability distribution over the\npossible joint measurements $(a,b)$, $(a,b')$, $(a',b)$, and $(a',b')$) is given\nby:\\footnote{By a `successful simulation' I mean a single joint measurement in\n which Alice and Bob get opposite outcomes---(1,-1) or (-1,1)---if their\n settings are $(a', b')$, or the same outcome---(1,1) or (-1,-1)---otherwise.}\n\\begin{align*}\n \\frac{1}{4}\\big(p(\\mbox{same}|a,b) + p(\\mbox{same}|a,b') +\n p(\\mbox{same}|a',b) + p(\\mbox{different}|a',b')\\big) \\\\\n = \\frac{1}{4}\\Bigg(\\frac{1 + \\langle a, b \\rangle}{2} + \\frac{1 + \\langle\n a, b' \\rangle}{2} + \\frac{1 + \\langle a', b \\rangle}{2} + \\frac{1 - \\langle\n a', b' \\rangle}{2} \\Bigg) \\\\\n = \\frac{1}{2}\\Bigg(1 + \\frac{\\langle a, b \\rangle + \\langle a, b' \\rangle +\n \\langle a', b \\rangle - \\langle a', b' \\rangle}{4}\\Bigg).\n\\end{align*}\nNotice that $\\langle a, b \\rangle + \\langle a, b' \\rangle + \\langle a', b\n\\rangle - \\langle a', b' \\rangle$ is just the Clauser-Horne-Shimony-Holt (CHSH)\ncorrelation expression \\citep[]{chsh1969}. 
So the probability of a successful\nsimulation of the PR correlations by $\\eta$ is:\n\\begin{align}\n \\label{eqn:succsim}\n p(\\mbox{successful sim}) = \\frac{1}{2}\\Bigg(1 + \\frac{\\mbox{CHSH}}{4}\\Bigg),\n\\end{align}\nwith CHSH = 4 if $\\eta$ is itself a PR-system.\\footnote{The reader may be\n familiar with the use of the term `PR-box' to refer to systems whose\n subsystems are correlated as in \\eqref{eqn:prprobs}. I find the term `box' to\n be misleading since it conveys the idea of a spatially contiguous region\n occupied by a combined system. Bub's \\citeyearpar[]{bub2016} banana imagery is\n far less misleading in this sense. Below I will not use figurative language at\n all, but will (boringly) refer merely to such entities as `PR-systems',\n `PR-correlated systems', and so on.} As is well known, classically correlated\nsystems are bounded by $|\\mbox{CHSH}| \\leq 2$. Thus the optimum probability of\nsimulating PR correlations with a bipartite classical system is given by 1\/2(1 +\n2\/4) = 3\/4. Quantum correlations are bounded by $|\\mbox{CHSH}| \\leq 2\\sqrt 2$.\n\n\\section{Alice and Bob play a guessing game}\n\\label{sec:game}\n\nAt this point it will be convenient to change our notation. From now on I will\nrefer to the measurement settings $a$ and $a'$ as 0 and 1, respectively, and\nlikewise for $b$ and $b'$. The outcomes 1 and -1 will also be respectively\nrelabelled as 0 and 1. This will allow us to describe PR correlations more\nabstractly using the exclusive-or (alternately: modulo two addition) operator as\nfollows:\n\\begin{align}\n \\label{eqn:xorpr}\n M_1 \\oplus M_2 = m_1 \\cdot m_2\n\\end{align}\nwhere capital letters refer to measurement outcomes and small letters to\nmeasurement settings. 
To illustrate, for a given 01-experiment (formerly\n$(a,b')$) there are two possible outcomes: 00 and 11 (formerly: (1,1) and\n(-1,-1)), and we have: $0 \\oplus 0 = 0 \\cdot 1$ and $1 \\oplus 1 = 0 \\cdot 1$,\nrespectively.\n\nNow imagine the following game. At the start of each round of the game, Alice\nand Bob receive random and independently generated bit strings $\\mathbf{a} =\na_{N-1},a_{N-2},\\dots,a_0$ and $\\mathbf{b} = b_{n-1},b_{n-2},\\dots,b_0$,\nrespectively, with $N = 2^n$. They win a round if Bob is able to guess the value\nof the $\\textbf{b}^{\\mbox{\\scriptsize th}}$ bit in Alice's list. For example,\nsuppose Alice receives the string $a_{7}a_{6}a_{5}a_{4}a_{3}a_{2}a_{1}a_{0}$,\nand Bob receives the string 110. Then Bob must guess the value of $a_{6}$. They\nwin the game if Bob is able to guess correctly over any sequence of rounds.\n\nBesides this the rules of the game are as follows. Before the game starts, Alice\nand Bob are allowed to determine a mutual strategy and to prepare and share\nnon-signalling physical resources such as classically correlated systems, or\nquantum systems in entangled states, or PR-systems, or other (bipartite) systems\nmanifesting non-signalling correlations. They then go off to distinct distant\nlocations, taking with them their portions of whatever systems were previously\nprepared. Once separated, Alice receives her bit string $\\mathbf{a}$ and Bob his\nbit string $\\mathbf{b}$. She is then allowed to send Bob one additional\nclassical bit $c$, upon receipt of which Bob must guess the value of Alice's\n$\\mathbf{b}^{\\mbox{\\scriptsize th}}$ bit.\n\nAlice and Bob can be certain to win the game if they share a number of\nPR-systems. I will illustrate the case of $N=4$, which requires three\nPR-systems (per round) labelled \\textbf{I}, \\textbf{II}, and \\textbf{III}. 
Upon\nreceiving the bit string $\\mathbf{a} = a_3a_2a_1a_0$, Alice measures $a_0 \\oplus\na_1$ on her part of system \\textbf{I} and gets the result $A_I$. She then\nmeasures $a_2 \\oplus a_3$ on her part of system \\textbf{II} and gets the outcome\n$A_{II}$. She then measures $(a_0 \\oplus A_I) \\oplus (a_2 \\oplus A_{II})$ on her\npart of system \\textbf{III} and gets the result $A_{III}$. She finally sends $c\n= a_0 \\oplus A_I \\oplus A_{III}$ to Bob. Meanwhile, Bob, who has previously\nreceived $\\mathbf{b} = b_1b_0$, measures $b_0$ on his parts of systems\n\\textbf{I} and \\textbf{II}, and gets back the results $B_I$ and $B_{II}$. He\nalso measures $b_1$ on system \\textbf{III} with the result $B_{III}$.\n\nBob's next step depends on the value of $\\mathbf{b}$, i.e. on which of Alice's\nbits he has to guess. When $\\mathbf{b} = b_1b_0 = 00$ (i.e. when Bob must guess\nthe 0$^{\\mbox{\\scriptsize th}}$ bit) or $\\mathbf{b} = b_1b_0 = 01$ (i.e. when\nBob must guess the 1$^{\\mbox{\\scriptsize st}}$ bit) his guess should be:\n\\begin{align}\n \\label{eqn:guess0or1}\n c \\oplus B_{III} \\oplus B_I = a_0 \\oplus A_I \\oplus A_{III} \\oplus B_{III}\n \\oplus B_I.\n\\end{align}\nSince $A_{III} \\oplus B_{III} = \\big((a_0 \\oplus A_I) \\oplus (a_2 \\oplus\nA_{II})\\big) \\cdot b_1$, we have:\n\\begin{align}\n \\label{eqn:b1equal0}\n & a_0 \\oplus A_I \\oplus A_{III} \\oplus B_{III} \\oplus B_I \\nonumber \\\\\n =\\mbox{ } & a_0 \\oplus A_I \\oplus b_1(a_0 \\oplus A_I) \\oplus b_1(a_2 \\oplus\n A_{II}) \\oplus B_I \\nonumber \\\\\n =\\mbox{ } & a_0 \\oplus A_I \\oplus B_I \\nonumber \\\\\n =\\mbox{ } & a_0 \\oplus b_0(a_0 \\oplus a_1).\n\\end{align}\nIf $\\mathbf{b} = 00$ then \\eqref{eqn:b1equal0} correctly yields $a_0$. If\n$\\mathbf{b} = 01$ then \\eqref{eqn:b1equal0} correctly yields $a_1$.\n\nSuppose instead that $\\mathbf{b} = 10$ or $\\mathbf{b} = 11$. 
In this\ncase, Bob's guess should be\n\\begin{align}\n \\label{eqn:guess2or3}\n c \\oplus B_{III} \\oplus B_{II} = a_0 \\oplus A_I \\oplus A_{III} \\oplus\n B_{III} \\oplus B_{II}.\n\\end{align}\nThis is\n\\begin{align}\n \\label{eqn:b1equal1}\n =\\mbox{ } & a_0 \\oplus A_I \\oplus b_1(a_0 \\oplus A_I) \\oplus b_1(a_2 \\oplus A_{II})\n \\oplus B_{II} \\nonumber\\\\\n =\\mbox{ } & (a_0 \\oplus A_I) \\oplus (a_0 \\oplus A_I) \\oplus (a_2 \\oplus\n A_{II}) \\oplus B_{II} \\nonumber\\\\\n =\\mbox{ } & a_2 \\oplus A_{II} \\oplus B_{II} \\nonumber\\\\\n =\\mbox{ } & a_2 \\oplus b_0(a_2 \\oplus a_3).\n\\end{align}\nIf $\\mathbf{b} = 11$ then \\eqref{eqn:b1equal1} correctly yields $a_3$. If\n$\\mathbf{b} = 10$ then \\eqref{eqn:b1equal1} correctly yields $a_2$.\n\nIn general, given $N-1$ PR-correlated systems per round,\\footnote{These are to\n be arranged in an inverted pyramid so that the results of Alice's\n (respectively, Bob's) local measurements on the first $2^{n-1}$ PR-systems\n are used to determine the local settings for her (his) next $2^{n-2}$\n measurements, and so on, for $(n-i) \\geq 0$. Note that the cost in the number\n of PR-systems needed scales exponentially with respect to the length of\n $\\mathbf{b}$. I will return to this point later.} and a single classical bit\nper round communicated by Alice to Bob, Alice and Bob can be certain to win the\ngame for any value of $N$. In other words, given these resources and a single\nclassical bit communicated to him by Alice, Bob can access the value of any\nsingle bit from her data set, however large that data set is. This result\nfurther generalises to the case where Alice is allowed to send not just one but\n$m$ bits $c_{m-1}\\dots c_0$ to Bob in a given round, and Bob is required to\nguess an arbitrary set of $m$ bits from Alice's data set. 
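As a sanity check, the $N=4$ protocol described above is simple enough to simulate directly. The sketch below (in Python; the helper names \texttt{pr\_pair} and \texttt{play\_round} are mine, and each PR-system is modelled as an idealised sampler with uniformly random marginals whose outcomes satisfy $M_1 \oplus M_2 = m_1 \cdot m_2$) confirms that Bob's guess always equals Alice's $\mathbf{b}^{\mbox{\scriptsize th}}$ bit:

```python
import random

def pr_pair(m1, m2):
    # An ideal PR-system: each local outcome is a uniformly random bit,
    # but jointly the outcomes satisfy M1 XOR M2 = m1 AND m2.
    M1 = random.randint(0, 1)
    return M1, M1 ^ (m1 & m2)

def play_round(a, b):
    # One round of the N = 4 game; a = (a3, a2, a1, a0), b = (b1, b0).
    a3, a2, a1, a0 = a
    b1, b0 = b
    A1, B1 = pr_pair(a0 ^ a1, b0)                # system I
    A2, B2 = pr_pair(a2 ^ a3, b0)                # system II
    A3, B3 = pr_pair((a0 ^ A1) ^ (a2 ^ A2), b1)  # system III
    c = a0 ^ A1 ^ A3                             # Alice's one classical bit
    return c ^ B3 ^ (B1 if b1 == 0 else B2)      # Bob's guess

random.seed(0)
for _ in range(1000):
    a = tuple(random.randint(0, 1) for _ in range(4))
    b = (random.randint(0, 1), random.randint(0, 1))
    wanted = a[3 - (2 * b[0] + b[1])]            # a_b, since a = (a3, ..., a0)
    assert play_round(a, b) == wanted
print("Bob guessed correctly in all 1000 rounds")
```

Since the marginals of each \texttt{pr\_pair} call are uniformly random, the certainty of Bob's guess is entirely due to the correlations, consistent with the fact that PR-systems are non-signalling.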
Note that if Alice is\nnot allowed to send anything to Bob, i.e., when $m$ = 0, then Bob will not be\nable to access the values of any of Alice's bits irrespective of how many\nPR-systems they share. This is a consequence of the fact that PR-correlations\nsatisfy the no-signalling principle \\eqref{eqn:nosig}.\n\n\\section{Information causality and the Tsirelson bound}\n\\label{sec:ic}\n\nAs we saw in the last section, Alice and Bob can be certain to win the guessing\ngame described there if they share a number of PR-correlated systems prior to\ngoing off to their respective locations. Note that if they do not use any\ncorrelated resources, they can still be sure to win the occasional round if\nAlice always sends Bob the value of whatever bit is at a previously agreed-upon\nfixed position $a_k$ in her list. In this case, Bob will be guaranteed to guess\ncorrectly whenever $\\mathbf{b}$ singles out $k$ (but only then; otherwise he\nmust rely on blind luck). If Alice and Bob share a sequence of classically\ncorrelated random bits, on the other hand, then Bob will be able to access the\nvalue of a single in general different $a_i$ in Alice's list on each round.\n\nNow consider the case where Alice and Bob share general no-signalling systems,\ni.e. bipartite systems such that the correlations between their subsystems\nsatisfy the no-signalling condition. Recall that the probability that a\nnon-signalling system simulates a PR-system on a given run depends on the value\nof CHSH in \\eqref{eqn:succsim} that is associated with it. For convenience we\nwill define $E =_{\\mathit{df}} \\mbox{CHSH}\/4$ so that \\eqref{eqn:succsim}\nbecomes:\n\\begin{align}\n \\label{eqn:succsim2}\n p(\\mbox{successful sim}) = \\frac{1}{2}(1 + E).\n\\end{align}\nWhen $E = 1$ for a given non-signalling system, then it just is a PR-system, and\nthe probability of a successful simulation is 1. 
When $E < 1$, then for given\nsettings $m_1, m_2$, the values of the outcomes $M_1, M_2$, will in general not\nsatisfy the relation \\eqref{eqn:xorpr}, i.e. $M_1 \\oplus M_2$ will not always\nequal $m_1 \\cdot m_2$. For a given attempted simulation, let us say that $M_2$\nis `correct' whenever \\eqref{eqn:xorpr} holds, and `incorrect'\notherwise.\\footnote{There is of course no reason why we should not say that\n $M_1$ rather than $M_2$ is incorrect, but for the analysis that follows it\n is convenient to take Bob's point of view.}\n\nRecall that in the $N=4$ game above, at the end of each round, Bob guesses\neither (i) $c \\oplus B_{III} \\oplus B_{I}$, or (ii) $c \\oplus B_{III} \\oplus\nB_{II}$, depending on the value of $\\mathbf{b}$. We will consider only case (i),\nas the analysis is similar for (ii). If both $B_I$ and $B_{III}$ are `correct',\nthen for that particular round, the non-signalling systems will have yielded the\nsame guess for Bob as PR-systems would have yielded:\n\\begin{align}\n \\label{eqn:prmatch}\n (c \\oplus B_{III} \\oplus B_{I})_{NS} = (c \\oplus B_{III} \\oplus B_{I})_{PR}.\n\\end{align}\nNote that if \\emph{both} $B_I$ and $B_{III}$ are \\emph{incorrect},\n\\eqref{eqn:prmatch} will still hold, since in general $x_1 \\oplus x_2 =\n\\overline{x_1} \\oplus \\overline{x_2}$. So either way Bob will guess right. 
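Since Bob's guess comes out right exactly when an even number of the relevant box outcomes are `incorrect', his success probability reduces to a parity calculation over independent errors. The following sketch (the function name is mine) makes this explicit and compares the enumeration with the closed form $\frac{1}{2}(1 + E^n)$, where $n = 2$ corresponds to the $N = 4$ game:

```python
from itertools import product

def p_correct_guess(E, n=2):
    """Probability that an even number of n independent 'incorrect'
    box outcomes occur, each incorrect with probability (1 - E)/2."""
    q = (1 - E) / 2          # probability of an unsuccessful simulation
    total = 0.0
    for flips in product([0, 1], repeat=n):
        weight = 1.0
        for f in flips:
            weight *= q if f else 1 - q
        if sum(flips) % 2 == 0:   # even parity: Bob guesses right
            total += weight
    return total

# n = 2 reproduces (1/2)(1 + E**2); general n gives (1/2)(1 + E**n)
```

This is a sketch of the counting argument only; it assumes the box errors on a given round are independent, as in the analysis that follows.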
The\nprobability of an unsuccessful simulation is $$1-\\frac{1}{2}(1+E) =\n\\frac{1}{2}(1-E).$$ Thus the probability that Bob makes the right guess on a\ngiven round in the $N=4$ game is:\n\\begin{align*}\n\\left(\\frac{1}{2}(1 + E)\\right)^2 + \\left(\\frac{1}{2}(1 - E)\\right)^2 =\n\\frac{1}{2}(1+E^2).\n\\end{align*}\nIn the general case, for $N = 2^n$, one can show\n\\citep[]{pawlowski2009,bub2012,bub2016} that the probability that Bob correctly\nguesses Alice's $\\mathbf{b}^{\\mbox{\\scriptsize th}}$ bit is\n\\begin{align}\n \\label{eqn:prguess}\n p_{\\mathbf{b}} = \\frac{1}{2}(1 + E^n).\n\\end{align}\n\nThe binary entropy $h(p_{\\mathbf{b}})$ associated with $p_{\\mathbf{b}}$ is given\nby $$h(p_{\\mathbf{b}}) = \\text{-}p_{\\mathbf{b}}\\log_2{p_{\\mathbf{b}}} - (1 -\np_{\\mathbf{b}})\\log_2{(1 - p_{\\mathbf{b}})}.$$ In the case where Bob has no\ninformation about Alice's $\\mathbf{b}^{\\mbox{\\scriptsize th}}$ bit,\n$p_{\\mathbf{b}} = 1\/2$ and $h(p_{\\mathbf{b}}) = 1$. If Alice then sends Bob $m$\nbits, then in general Bob's information about that bit will increase by some\nnon-zero amount. \\citet[]{pawlowski2009} propose the following constraint on\nthis quantity, which they call the `information causality' principle:\n\n\\begin{quote}\nThe information gain that Bob can reach about a previously unknown to him data\nset of Alice, by using all his local resources and $m$ classical bits\ncommunicated by Alice, is at most $m$ bits\n\\citeyearpar[p. 1101]{pawlowski2009}.\n\\label{quo:ic}\n\\end{quote}\n\nFor example, assuming that the $N = 2^n$ bits in Alice's bit string $\\mathbf{a}$\nare unbiased and independently distributed, then if Alice sends Bob a single bit\n(i.e. 
when $m = 1$), information causality asserts that Bob's information about\nthe $\\mathbf{b}^{\\mbox{\\scriptsize th}}$ bit in Alice's string may increase by\nno more than $1\/2^n$, i.e.,\n\\begin{align}\n \\label{eqn:infcaus}\n h(p_{\\mathbf{b}}) \\geq 1 - \\frac{1}{2^n}.\n\\end{align}\nAs \\citet[]{pawlowski2009} show, the principle is satisfied within quantum\nmechanics. But within any theory which permits correlations with a value of $E$\nexceeding $1\/\\sqrt{2}$ (i.e. any theory which allows correlations above the\nTsirelson bound), one can find an $n$ such that for a given $m$ the principle\nis violated (for example, let $E = .72$, $m = 1$, and $n = 10$).\\footnote{Note\n that when $E = 1$ the principle is always violated for any $m$ and\n $n$.}$^{\\mbox{,}}$\\footnote{I have followed Bub in expressing information\n causality as a constraint on binary entropy, as conceptually this is a more\n transparent way of expressing \\citeauthor[]{pawlowski2009}'s `qualitative'\n statement of the principle in terms of concrete information-theoretic\n quantities. While \\citeauthor[]{pawlowski2009} also relate information\n causality to the binary entropy \\citeyearpar[p. 1102 and Supplementary\n Information \\S{III}]{pawlowski2009}, their general results (that\n information causality is satisfied within quantum mechanics and that it is\n violated within any theory which allows correlations above the Tsirelson\n bound) begin with the formulation of information causality as a condition\n on mutual information rather than binary entropy. 
For our purposes it is\n immaterial which formulation one chooses; in particular,\n \\citet[\\S\\S{11.4--11.5}]{bub2012} has shown that \\eqref{eqn:infcaus} is\n entailed by \\citeauthor[]{pawlowski2009}'s formulation and moreover proves\n that \\eqref{eqn:infcaus} is satisfied when $E = \\frac{1}{\\sqrt 2}$.}\n\nGiven that any correlations above the Tsirelson bound will demonstrably violate\nthe principle in this sense, it is tempting to view information causality as\nthe answer to the question (i) of why nature does not allow correlations above\nthis bound. And since the Tsirelson bound represents the maximum value of the\nCHSH expression for quantum correlations, one is further tempted to view\ninformation causality as the answer to the question (ii) of why only quantum\ncorrelations are allowable in nature. Indeed, \\citeauthor[]{pawlowski2009}\nsuggest that information causality ``might be one of the foundational\nproperties of nature'' \\citeyearpar[p. 1101]{pawlowski2009}.\n\nThere is a subtlety here, however. The set of quantum correlations forms a\nconvex set which can be represented as a multi-dimensional region of points such\nthat the points within this region that are furthest from the centre are at the\nTsirelson bound \\citep[\\S{}5.1]{bub2016}. Information causality disallows\ncorrelations beyond this bound, as we saw. It also disallows some correlations\nbelow the bound that are outside of the quantum convex set \\citep[for a\n discussion, see][]{pawlowski2016}. However there is numerical evidence that\nthere exist correlations within the bound but outside of the quantum convex set\nthat satisfy the information causality principle \\citep[]{navascues2015}. So it\nappears unlikely (though this was not known in 2009) that information causality\ncan provide an answer to question (ii). It nevertheless remains promising as a\nprinciple with which to answer question (i) and can arguably still be thought of\nas a fundamental principle in that sense. 
Analogously, the fact that\nsuper-quantum no-signalling correlations are possible does not, in itself,\nundermine the status of no-signalling as a fundamental principle.\n\nThe information causality principle must be given some independent motivation if\nit is to play this explanatory role, however. For even a conventionalist would\nagree that some stipulations are better than others \\citep[]{disalle2002}. Thus\nsome independent reason should be given for why one might be inclined to accept\nthe principle. Of course, the statement that the communication of $m$ bits can\nyield no more than $m$ bits of additional information to a receiver about a data\nset unknown to him is an intuitive one. But foundational principles of nature\nshould require more for their motivation than such bare appeals to\nintuition. After all, quantum mechanics, which the principle aims to legitimate,\narguably already violates many of our most basic\nintuitions. \\citet[]{pawlowski2009} unfortunately do not say very much to\nmotivate information causality. But two ideas can be gleaned from statements\nmade in their paper. The first is that in a world in which violations of\ninformation causality could occur, ``certain tasks [would be] `too simple'''\n(p. 1101). The second is that in such a world there would be ``implausible\naccessibility of remote data'' (ibid.). The former idea has been expressed in\nthis general context before. Van Dam \\citeyearpar[]{vanDam2013}, notably, shows\nthat in a world in which PR-correlations exist and can be taken advantage of,\nonly a trivial amount of communication (i.e. a single bit) is required to\nperform any distributed computational task. Van Dam argues (ibid., p. 12) that\nthis is a reason to believe that such correlations cannot exist, for they\nviolate the principle that ``Nature does not allow a computational `free\nlunch''' (ibid., p. 9).\\footnote{Cf. \\citet[][]{aaronson2005a}.}\n\\citet[pp. 
180-181]{bub2012} echoes this thought by listing examples of\ndistributed tasks (`the dating game' and `one-out-of-two' oblivious transfer)\nwhich would become implausibly trivial if PR-correlated systems could be used.\n\nLater in this paper I will argue that although such statements are\nunsatisfactorily vague, they nevertheless get at something that is importantly\nright; although they are right in, perhaps, a different sense than their authors\nenvision. For now let me just say that even if one accepts van Dam's argument\nthat pervasive trivial communication complexity is implausible and should be\nruled out---and that this should constitute a constraint on physical\ntheory---not all correlations above the Tsirelson bound in fact result in the\ntrivialisation of communication complexity theory.\\footnote{Communication\n complexity theory aims to quantify the communicational resources---measured\n in transmitted bits---required to solve various distributed computational\n problems. A good reference work is that of \\citet[]{hushilevitz1997}.}\n\\citet[]{brassard2006} have extended van Dam's result by showing that\n(probabilistic) pervasive trivial communication complexity can be achieved for\nvalues of $E > \\sqrt{6}\/3$. But this still leaves a range of values for $E$\nopen; physical correlations with associated values of $E$ between the quantum\nmechanical maximum of $1\/\\sqrt 2$ and $\\sqrt{6}\/3$ have not been shown to result\nin pervasive trivial communication complexity and cannot---at least not yet---be\nruled out on those grounds. Thus the avoidance of pervasive trivial\ncommunication complexity cannot be used to motivate information causality in the\nway suggested by the statements of \\citet[]{pawlowski2009}. In fairness, to say\nas they do that certain tasks would be `too simple' in a world in which\ninformation causality is violated is not the same as saying that they would be\ntrivial. 
The task remains, then, of expressing more precisely what is meant by\n`too simple' in a way that is sufficient to motivate ruling out theories which\nviolate the information causality principle in a less than maximal way (in\nparticular with a value of $E \\leq \\sqrt{6}\/3$). We will return to this point\nlater.\n\nRegarding their second idea---that a world in which information causality is\nviolated would manifest ``implausible accessibility of remote data''\n(p. 1101)---\\citet[]{pawlowski2009} again do not say\nenough,\\footnote{\\citet[p. 429]{pawlowski2016} do expand on the idea of\n implausible accessibility slightly: ``we have transmitted only a single bit\n and the PR-boxes are supposed to be no-signalling so they cannot be used to\n transmit the other. Somehow the amount of information that the lab of Bob\n has is larger than the amount it received. Things like this should not\n happen.'' I do not think this adds anything substantial to the idea\n expressed by \\citet[]{pawlowski2009} that such a situation is\n `implausible'.} although the idea is perhaps alluded to implicitly in another\nassertion they (too briefly) make, namely that information causality\ngeneralises the no-signalling principle (ibid., p. 1103). We will come back to\nthis point later. In any case, the idea of implausible accessibility is\nfortunately expanded upon by \\citet[]{bub2012}, who motivates it in the\nfollowing way:\n\n\\begin{quote}\nwhen the bits of Alice's data set are unbiased and independently distributed,\nthe intuition is that if the correlations can be exploited to distribute one bit\nof communicated information among the $N$ unknown bits in Alice's data set, the\namount of information distributed should be no more than $\\frac{1}{N}$ bits,\nbecause there can be no information about the bits in Alice's data set in the\npreviously established correlations themselves (p. 180).\n\\end{quote}\n\nPartly for this reason, Bub argues that the principle is misnamed. 
Drawing on\nthe idea of implausible accessibility he argues that `information causality'\nshould rather be referred to as information \\emph{neutrality}: ``The principle\nreally has nothing to do with causality and is better understood as a\n\\emph{constraint on the ability of correlations to enhance the information\n content of communication in a distributed task}'' (ibid., emphasis in\noriginal). Bub reformulates the principle as follows:\n\n\\begin{quote}\nCorrelations are informationally neutral: insofar as they can be exploited to\nallow Bob to distribute information communicated by Alice among the bits in an\nunknown data set held by Alice in such a way as to increase Bob's ability to\ncorrectly guess an arbitrary bit in the data set, they cannot increase Bob's\ninformation about the data set by more than the number of bits communicated by\nAlice to Bob (ibid.).\n\\end{quote}\n\nStated in this way the principle sounds plausible and seems, intuitively, to be\ncorrect. However if the principle is to be of aid in ruling out classes of\nphysical theory then it should be more than just intuitively plausible. If the\ngoal of answering the question `Why the Tsirelson bound?' is to give a\nconvincing reason why correlations that are above the bound should be regarded\nas impossible, then if the fact that such correlations violate informational\nneutrality is to be one's answer, one should give an independent motivation for\nwhy correlations must be informationally neutral. One might, for instance,\nmotivate information neutrality by showing how it generalises or gives\nexpression in some sense to a deeper underlying principle that is already\nwell-motivated, or by pointing to `undesirable consequences' of its failure. 
The\nconsequence of a `free computational lunch' given the existence of correlations\nabove the bound, if it could be demonstrated, could (perhaps) constitute an\nexample of the latter kind of motivation.\n\nThis said, there is a different way to think of the question `Why the Tsirelson\nbound?' for which Bub's explication of information causality in terms of\ninformational neutrality is both a full answer and indeed an illuminating and\nuseful one. In this sense the question represents a desire to understand what\nthe Tsirelson bound expresses about correlations which violate it. Information\nneutrality answers this question by directing attention to a feature that no\ncorrelations above the bound can have. This feature, moreover, is one that we\ncan easily grasp and explicitly connect operationally with our experience of\ncorrelated physical systems. On such a reading of the question, to answer\n`information neutrality' is not of course to rule out that the world could\ncontain non-informationally-neutral physical correlations. But on this view\nruling out such a possibility is not the point, which is rather to provide a\nphysically meaningful principle to help us to understand what our current\nphysical theories, assuming they are to be believed, are telling us about the\nstructure of the world.\n\nIn the remainder of this paper, however, I will continue to consider the\ninformation causality\/neutrality principle as a possible answer in the first\nsense to the question `Why the Tsirelson bound?'. I will continue to consider,\nthat is, whether there is some independent way of motivating the conclusion that\ncorrelations which violate the condition should be ruled out.\n\n\\section{The `being-thus' of spatially distant things}\n\\label{sec:demopoulos}\n\nOur goal is to determine whether there is some sense in which we can motivate\nthe idea that information causality must be satisfied by all physical theories\nwhich treat of correlated systems. 
I will now argue that some insight into this\nquestion can be gained if we consider the analogous question regarding\nno-signalling. As I mentioned earlier, the no-signalling condition\n\eqref{eqn:nosig} is not a relativistic constraint per se---in itself it is\nmerely a restriction on the marginal probabilities associated with experiments\non the subsystems of combined systems---but its violation entails the ability to\ninstantaneously signal, which is in tension with, if not in outright violation\nof, the constraints imposed by relativistic theory.\footnote{For a discussion of\n  signalling in the context of special and general relativity see\n  \citet[Ch. 4]{maudlin2011}.} Indeed, the independently confirmed relativity\ntheory can in this sense be thought of as an external motivation for thinking of\nthe no-signalling principle as a constraint on the marginal probabilities\nallowable in any physical theory.\n\nThere is an arguably deeper way to motivate no-signalling, however, that can be\ndrawn from the work of Einstein and which has been expanded upon by\nDemopoulos.\footnote{This is done in Demopoulos's monograph \emph{On\n    Theories}; see fn. \ref{fn:demo}.} In the course of expressing his\ndissatisfaction with the `orthodox' interpretation of quantum theory, Einstein\ndescribed two foundational ideas---what Demopoulos calls \emph{local realism}\nand \emph{local action}. Realism in general, for Einstein, is a basic\npresupposition of any physical theory. It amounts to the claim that things in\nthe world exist independently of our capability of knowing them; i.e.\n\n\begin{quote}\nthe concepts of physics refer to a real external world, i.e., ideas are posited\nof things that claim a `real existence' independent of the perceiving subject\n(bodies, fields, etc.), and these ideas are, on the other hand, brought into as\nsecure a relationship as possible with sense impressions\n(\citealt[]{einstein1948}, as translated by \citealt[p. 
187]{howard1985}).\n\\end{quote}\n\n\\emph{Local} realism---alternately: the `mutually independent existence' of\nspatially distant things---is the idea that things claim independent existence\nfrom one another insofar as at a given time they are located in different parts\nof space. Regarding this idea, Einstein writes:\n\n\\begin{quote}\nWithout such an assumption of the mutually independent existence (the `being\nthus') of spatially distant things, an assumption which originates in everyday\nthought, physical thought in the sense familiar to us would not be possible\n(ibid.).\n\\end{quote}\n\nIn the concrete context of a physical system made up of two correlated subsystems\n$S_1$ and $S_2$ (such as that described in the thought experiment of\n\\citealt[]{epr1935}), local realism requires that\n\n\\begin{quote}\nevery statement regarding $S_2$ which we are able to make on the basis of a\ncomplete measurement on $S_1$ must also hold for the system $S_2$ if, after all,\nno measurement whatsoever ensued on $S_1$ (\\citealt[]{einstein1948}, as\ntranslated by \\citealt[p. 187]{howard1985}).\n\\end{quote}\n\nIn other words the value of a measurable theoretical parameter of $S_2$ must\nnot depend on whether a measurement is made on a system $S_1$ that is located\nin some distant region of space. (And of course it must also not depend upon\nthe \\emph{kind} of measurement performed on $S_1$;\ncf. \\citealt[][p. 186]{howard1985}.) Demopoulos notes that local realism as it\nis applied in such a context is a condition imposed on the measurable\nproperties of the theory and hence it is a condition that is imposed at a\ntheory's `surface' or operational level. 
This is an important point that I will\nreturn to later.\n\nIn the same \\emph{Dialectica} article Einstein also formulated a second\nprinciple:\n\n\\begin{quote}\nFor the relative independence of spatially distant things (A and B), this idea\nis characteristic: an external influence on A has no \\emph{immediate} effect on\nB; this is known as the `principle of local action' ... The complete suspension\nof this basic principle would make impossible the idea of the existence of\n(quasi-) closed systems and, thereby, the establishment of empirically testable\nlaws in the sense familiar to us (\\citealt[]{einstein1948}, as translated by\n\\citealt[p. 188]{howard1985}).\n\\end{quote}\n\nThe thought expressed in the second part of this statement seems similar to\nEinstein's earlier assertion that `physical thought' would not be possible\nwithout the assumption of local realism. However Demopoulos convincingly argues\nthat the principle of local realism, though it receives support from the\nprinciple of local action, is a conceptually more fundamental principle than\nthe latter. For conceivably the principle of local realism---i.e. of `mutually\nindependent existence'---could be treated as holding, Demopoulos argues, even\nin the absence of local action. Indeed this is so in Newtonian mechanics. For\nexample, Corollary VI to the laws of motion \\citep[p. 423]{newton1999} states\nthat a system of bodies moving in any way whatsoever with respect to one\nanother will continue to do so in the presence of equal accelerative forces\nacting on the system along parallel lines. This makes it possible to treat the\nsystem of Jupiter and its moons, for example, as a quasi-closed system with\nrespect to the sun. For owing to the sun's great distance (and relative size),\nthe actions of the forces exerted by it upon the Jovian system will be\napproximately equal and parallel. Corollary VI, moreover, is used by Newton to\nprove Proposition 3 of Book I \\citep[p. 
448]{newton1999}, which enables one to\ndistinguish forces that are internal to a given system from forces that are\nexternal to it, and which provides a criterion (i.e. that the motions of the\nbodies comprising a system obey the Area Law with respect to its centre of\nmass) for determining when the gravitational forces internal to a system have\nbeen fully characterised. Thus, Demopoulos convincingly argues, Einstein would\nnot (or anyway should not) have regarded a theory such as Newtonian mechanics\nas unphysical despite its violation of local action. It is still a\nbasic \emph{methodological} presupposition of Newtonian mechanics that\nspatially distant systems have their own individual `being thus-ness', the\ndescription of which is made possible via the theory's characteristic\nmethodological tool of successive approximation, in turn made possible by, for\nexample, Corollary VI, Proposition 3, and the notion of quasi-closed system\nimplied by them.\footnote{Demopoulos does not specifically mention either\n  Corollary VI or Proposition 3 in his discussion, but I take them to be\n  implicit therein. For a detailed analysis of Newton's method of successive\n  approximations and the methodological role therein played by Corollary VI\n  and Proposition 3, see \citet[]{harper2011}. For a discussion of the same\n  in relation to general relativity, see \citet[]{disalle2006, disalle2016}.}\n\nEinstein's principle of local realism or mutually independent existence\npresupposes the framework of classical physics, which itself presupposes the\nframework of classical probability theory. 
Demopoulos argues, however, that the\nconceptual novelty of quantum theory consists in the fact that it is an\n`irreducibly statistical theory', precisely in the sense that its probability\nassignments, unlike those described by classical probability theory, cannot in\ngeneral be represented as weighted averages of two-valued measures over the\nBoolean algebra of all possible properties of a physical system \\citep[see\n also][]{pitowsky1989, pitowsky2006, dickson2011}. This raises the question of\nwhether one can formulate a generalisation of the mutually independent existence\ncondition that is appropriate for an irreducibly statistical theory such as\nquantum mechanics.\\footnote{I am not claiming here that Einstein himself would\n have been inclined to follow this line of reasoning.}\n\nRecall that Einstein's mutually independent existence condition is a condition\nthat is imposed on the level of the measurable parameters of a theory and hence\nat its `surface' or operational level. It requires, in particular, that the\nvalue of a measurable property of a system $S_1$ in some region of physical\nspace $R_1$ is independent of what kind of measurement (or whether any\nmeasurement) is performed on some system $S_2$ in a distant region of space\n$R_2$, irrespective of whether $S_1$ and $S_2$ have previously interacted.\n\nDemopoulos argues that in the context of an irreducibly statistical theory such\nas quantum mechanics, it is in fact the no-signalling condition which\ngeneralises the mutually independent existence condition. It does so in the\nsense that like mutually independent existence, no-signalling is a\nsurface-level constraint on the local facts associated with a particular\nsystem, requiring that these facts be independent of the local surface-level\nfacts associated with other spatially distant systems. 
Unlike the mutually\nindependent existence condition, however, these local facts refer to the\nmarginal probabilities associated with a system's measurable properties rather\nthan with what one might regard as those properties themselves. Specifically,\nno-signalling asserts that the marginal probability associated with a\nmeasurement on a system $S_1$ at a given location $R_1$ is independent of what\nkind of measurement (or whether any measurement) is performed on some system\n$S_2$ in a distant region of space $R_2$.\\footnote{It is worth noting that the\n parameter independence condition (\\citealt[]{shimony1993}) is just the\n no-signalling condition extended to include a hypothetical, possibly\n hidden, set of \\emph{underlying} parameters.} In this way no-signalling\nallows us to coherently treat systems in different regions of physical space as\nif they had mutually independent existences---i.e. as quasi-closed systems in\nthe sense described above---and thus allows for the possibility of `physical\nthought' in a methodological sense and for ``the establishment of empirically\ntestable laws in the sense familiar to us'' (\\citealt[]{einstein1948}, as\ntranslated by \\citealt[p. 188]{howard1985}). Demopoulos argues that quantum\nmechanics, even under its orthodox interpretation, is in this way legitimated\nby the principle and may be thought of as a local theory of nonlocal\ncorrelations.\n\n\\section{Mutually independent existence and communication}\n\\label{sec:howposs}\n\nIn the previous section we saw that no-signalling can be regarded as\ngeneralising a criterion for the possibility of `physical thought' originally\nput forward by Einstein. And we saw that since quantum mechanics satisfies\nno-signalling, one may think of that theory, even under its orthodox\ninterpretation, as in this sense legitimated methodologically by the\nprinciple. 
As we saw in \\S\\S\\ref{sec:prcorr}-\\ref{sec:ic}, however, other\nconceivable physical theories---some of which allow for stronger-than-quantum\ncorrelations---satisfy the no-signalling condition as well. In light of this,\n`information causality' (or `information neutrality', in Bub's terminology) was\nput forward by \\citet[]{pawlowski2009} as an additional foundational principle\nfor more narrowly circumscribing the class of physically sensible theories. But\nin \\S\\ref{sec:ic} I argued that the principle requires further motivation\nbefore it can legitimately be seen as playing this role. With our recent\ndiscussion of no-signalling in mind, let us now consider the proposal of\n\\citeauthor[]{pawlowski2009} again.\n\n\\emph{No-signalling} asserts that the marginal probabilities associated with\nAlice's local measurements on a system $S_A$ in a region $R_A$ are independent\nof what kind of measurement (or whether any measurement) is performed by Bob\nlocally on a system $S_B$ in a distant region $R_B$. \\emph{Information\n causality} asserts that Bob can gain no more than $m$ bits of information\nabout Alice's data set if she sends him only $m$\nbits. \\citet[p. 1101]{pawlowski2009} remark that ``The standard no-signalling\ncondition is just information causality for $m = 0$''. \\citet[p. 180]{bub2012}\nconsiders this remark to be misleading, but presumably all that\n\\citeauthor[]{pawlowski2009} intend is that if Alice and Bob share\n\\emph{signalling} correlations, then Alice may provide Bob with information\nabout her data set merely by measuring it, i.e. without actually sending him\nany bits. 
The information causality principle disallows this for any value of\n$E$, as does no-signalling.\\footnote{That is, for any value of $E$ within the\n allowed range of: $\\text{-}1 \\leq E \\leq 1$.}\n\n\\begin{figure}[t]\n\\footnotesize\n$$\n\\begin{array}{l | l | l || l | l}\n b_0 & a_2 & a_3 & a_2 \\oplus a_3 & G \\\\ \\hline\n 0 & 0 & 0 & 0 & 0 \\\\ \\hline\n 0 & 0 & 1 & 1 & 0 \\\\ \\hline\n 0 & 1 & 0 & 1 & 1 \\\\ \\hline\n 0 & 1 & 1 & 0 & 1 \\\\ \\hline\n 1 & 0 & 0 & 0 & 0 \\\\ \\hline\n 1 & 0 & 1 & 1 & 1 \\\\ \\hline\n 1 & 1 & 0 & 1 & 0 \\\\ \\hline\n 1 & 1 & 1 & 0 & 1\n\\end{array}\n$$\n\\caption{A summary of the possible outcomes associated with Bob's measurement\n $G$ (his `guess') in the guessing game of \\S\\ref{sec:game}, based on\n Eq. \\eqref{eqn:b1equal1}. If all atomic variables are assumed to be equally\n likely to take on a value of 0 or 1, then $G$ is probabilistically independent\n of Alice's measurement setting $a_2 \\oplus a_3$, but not of its components\n $a_2$ and $a_3$, since, for example, $p(G=0|a_2=0) = 3\/4 \\neq p(G=0|a_2=1)$,\n and $p(G=0|a_3=0) = 3\/4 \\neq p(G=0|a_3=1)$.}\n\\label{fig:wittprob}\n\\end{figure}\n\nOn the other hand when (for instance) $m = 1$, then in the case where they have\npreviously shared PR-correlated systems (i.e. systems such that $E = 1$), one\nmight argue that there arises a subtle sense in which the probabilities of Bob's\nmeasurement outcomes can be influenced by Alice's remote measurement\nsettings. Consider the outcome of Bob's combined measurement $G =_{\\mathit{df}}\nc \\oplus B_{III} \\oplus B_{II}$, i.e. his `guess' \\eqref{eqn:guess2or3}. From\n\\eqref{eqn:b1equal1} it would appear that Bob's outcome is in part determined by\nthe setting of Alice's measurement on system \\textbf{II}, $a_2 \\oplus a_3$,\nsince this appears explicitly in the equation. 
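Whether this appearance survives scrutiny can be checked by direct enumeration. The following sketch (the variable and function names are mine) tabulates $G = a_2 \oplus b_0(a_2 \oplus a_3)$ over uniformly distributed atomic variables and computes the conditional probabilities reported in figure \ref{fig:wittprob}:

```python
from itertools import product
from fractions import Fraction

# Enumerate G = a2 xor b0.(a2 xor a3) over the eight equally likely
# assignments of the atomic variables b0, a2, a3.
rows = [(b0, a2, a3, a2 ^ (b0 & (a2 ^ a3)))
        for b0, a2, a3 in product([0, 1], repeat=3)]

def p(cond, value):
    """Conditional probability P(value | cond) over the uniform rows."""
    sel = [r for r in rows if cond(r)]
    return Fraction(sum(1 for r in sel if value(r)), len(sel))

G0 = lambda r: r[3] == 0
# G is probabilistically independent of Alice's setting a2 xor a3 ...
assert p(lambda r: r[1] ^ r[2] == 0, G0) == p(lambda r: r[1] ^ r[2] == 1, G0)
# ... but not of the components a2 and a3 taken individually:
assert p(lambda r: r[1] == 0, G0) == Fraction(3, 4)   # p(G=0 | a2=0)
assert p(lambda r: r[1] == 0, G0) != p(lambda r: r[1] == 1, G0)
assert p(lambda r: r[2] == 0, G0) == Fraction(3, 4)   # p(G=0 | a3=0)
assert p(lambda r: r[2] == 0, G0) != p(lambda r: r[2] == 1, G0)
```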
However in this case appearances\nare misleading, for the reader can verify that $G$ is probabilistically\nindependent of $a_2 \\oplus a_3$ (see figure \\ref{fig:wittprob}). $G$ is\nnevertheless probabilistically dependent on both of $a_2$ and $a_3$ considered\nindividually. So one might say that although the outcome of $G$ is not\ninfluenced by any of Alice's measurement settings \\emph{per se}, it does seem to\nbe influenced by the particular way in which those settings have been determined\n(despite the fact that neither $a_2$ nor $a_3$ are directly used by Alice to\ndetermine the value of the bit that she sends to Bob, $c$). Put a different way,\nthe constituents of Alice's measurement setting on system \\textbf{II}\nrespectively determine the two possible outcomes of Bob's guess whenever he\nperforms the measurement $G$ (for a given $b_0$). Likewise in the case where Bob\nmeasures $G' = c \\oplus B_{III} \\oplus B_I$ (i.e. his guess\n\\eqref{eqn:guess0or1}); the two possible outcomes of $G'$ are, respectively,\ndetermined by the constituents of Alice's measurement settings on system\n\\textbf{I}, $a_0$ and $a_1$ (for a given $b_0$).\n\nNote that since $a_2$ and $a_3$ (respectively: $a_0$ and $a_1$), besides being\nthe constituents of Alice's measurement settings on \\textbf{II} (respectively:\n\\textbf{I}), are also in fact the values of bits in Alice's list $\\mathbf{a}$,\nthe above considerations resonate with Bub's remark (quoted above) that\nTsirelson-bound-violating correlations are such that they may themselves include\ninformation about Alice's data set in the context of a game like that described\nin \\S\\ref{sec:game}. These considerations further suggest a sense, \\emph{pace}\nBub, in which it could be argued that the name `information causality' is indeed\napt. 
For the bit of information $c$ that Alice sends to Bob can be thought of as\nthe `enabler' or `cause', at least in a metaphorical sense, of Bob's ability to\nuse this aspect of the correlations to his advantage\n\\citep[cf.][\\S{}3.4]{pawlowski2016}.\\footnote{Perhaps, though, a better name\n would be the `\\emph{no} information causality' principle.}\n\nThus one can think of information causality as generalising no-signalling (in\nthe context of the protocol under which information causality is operationally\ndefined) in two ways. On the one hand information causality generalises\nno-signalling in the sense alluded to by \\citeauthor[]{pawlowski2009}; i.e. it\nreduces to no-signalling for $m = 0$. On the other hand information causality\ngeneralises no-signalling in the sense that, like the no-signalling principle,\nit expresses a restriction on the accessibility of the remote measurement\nsettings of a distant party; but this restriction now applies not just to those\nremote measurement settings themselves, but also more generally to the\ncomponents by which those measurement settings are determined. Since, as we saw\nin the previous section, no-signalling is already well-motivated in the sense\nthat it gives expression within quantum mechanics to an arguably fundamental\nassumption that is implicit in physical practice, the very fact that\ninformation causality generalises no-signalling can be taken as a compelling\nmotivation for it.\n\nSuch a conclusion would be too quick, however, for it does not follow from the\nfact that information causality generalises no-signalling that it continues to\ngive expression to the condition of mutually independent existence. But it is\nmutually independent existence which, as we saw, motivates no-signalling as a\nconstraint on physical theories. Thus we must still ask whether a violation of\ninformation causality would result in a violation of the mutually independent\nexistence condition in some relevant sense. 
Arguably this is indeed the\nsituation one is confronted with in the context of the guessing game described\nabove when it is played with Tsirelson-bound-violating correlated systems. On\nthe one hand, when Alice and Bob share maximally super-quantum systems\n(i.e. PR-systems, for which $E = 1$), then after receiving $c$ there is a sense\nin which Alice's system can be said to be `a part' of Bob's system in the\ncontext of the game being played. For after receiving $c$ Bob has\n\\emph{immediate} access to the value of any single bit of Alice's that he would\nlike. Alice's bits may as well be his own for the purposes of the game. Indeed,\nfrom this point of view the fact that the communication complexity associated\nwith any distributed computational task is trivial when PR-correlations are\nused seems natural; for once Alice's and Bob's systems are nonlocally joined in\nthis way there is naturally no need for further communication. On the other\nhand, when Tsirelson-bound-violating correlations that are non-maximal are\nused, trivial communication complexity has not been shown to result in all\ncases. But mutually independent existence is nevertheless violated in the sense\nthat the correlations shared prior to the beginning of the game, upon being\n`activated' by Alice's classical message $c$ to Bob, contribute information\nover and above $c$ to the information Bob then gains about Alice's data set;\nthey `implausibly' enhance the accessibility of Alice's data set by nonlocally\njoining Alice to Bob, at least to some extent, in the sense just described.\n\nNow it is one thing to claim that information causality gives expression to a\ngeneralised sense of mutually independent existence. It is another, however, to\nclaim that mutually independent existence should be thought of as necessary in\nthis context. 
Recall that in the last section we saw that mutually independent\nexistence (arguably) must be presupposed if `physical thought' is to be\npossible---in other words that it is (arguably) a fundamental presupposition\nimplicit in physical practice as such. And we saw that a form of this principle\nholds in the context of Newtonian mechanics, which may be thought of as in that\nsense a local theory of nonlocal forces. We also saw that a form of\nmutually independent existence appropriate for an irreducibly statistical\ntheory---i.e. the no-signalling principle---holds in the context of quantum\nmechanics, and that it may thus be thought of analogously as a local theory of\nnonlocal correlations. The context of our current investigation is one which\ninvolves considering communicating agents capable of building and manipulating\nphysical systems---thought of now as resources---for their own particular\npurposes. Our context, that is, is the `practical' one associated with quantum\ncomputation and information theory, recently described by \\citet[]{cuffaro2017,\n cuffaroForthB}.\\footnote{Similar ideas have been expressed by\n \\citet{pitowsky1990, pitowsky1996, pitowsky2002}.} As Cuffaro has argued,\nthis context of investigation is in fact distinct from the more familiar\n`theoretical' context that is associated with traditional foundational\ninvestigations of quantum mechanics. A different way of putting this is that\nquantum computation and information theory are `resource' or `control' theories\nsimilarly to the science of thermodynamics \\citep[]{myrvold2011, wallace2014,\n ladyman2018}. 
Thus the question of whether mutually independent existence is\nnecessary for the practice of quantum information and communication complexity\ntheory is a distinct question from the question of whether it is necessary for\nphysical practice in the traditional sense.\n\nWithout the presupposition of mutually independent existence---according to\nwhich systems that occupy distinct regions of space are to be regarded as\nexisting independently of one another---the idea of a (quasi-) closed system\nthat can be subjected to empirical test, and in this sense `physical thought',\nwould not be possible (or anyway so argued Einstein). Analogously, one could\nargue that in the context of a theory of communication---i.e. of the various\nresource costs associated with different communicational protocols and their\ninterrelations---it is necessary to presuppose that an operational\ndistinction can be made between the parties involved in a communicational\nprotocol. One might argue, that is, that it is constitutive of the very idea of\ncommunication that it is an activity that takes place between what can be\neffectively regarded as two mutually independently existing entities, and\nmoreover that such a distinction is presupposed when one quantifies the\ncomplexity of a particular\nprotocol.\\footnote{Cf. \\citet[p. x]{hushilevitz1997}. Cf. also\n \\citeauthor[]{maroney2018}'s \\citeyearpar[]{maroney2018} emphasis on the\n initialisation and readout stages of an information processing task.} For\nwithout the ability to make such an effective distinction between the systems\nbelonging to the sender and the receiver of information, it is not at all\nobvious how one should begin to quantify the amount of information that is\nrequired to be sent \\emph{from} Alice \\emph{to} Bob in the context of a\nparticular protocol. 
From this point of view it is indeed not surprising that\ncommunication complexity theory becomes impossible (in the sense that all\ncommunicational problems become trivially solvable) when PR-correlated systems\nare available to use.\n\n\\section{Objections}\n\\label{sec:obj}\n\nAn objection to this line of thought is the following. Cannot something similar\nbe said in the context of the information causality game when Alice and Bob\nshare an entangled quantum system? For arguably \\citep[cf.][]{howard1989} Alice\nand Bob will become likewise inseparable or `nonlocally joined' in such a\nscenario. And yet no one imagines the very possibility of the sciences of\nquantum information theory and quantum communication complexity to have been\nundermined as a result. So why should one believe them to be undermined by the\npossibility of sharing systems whose correlations violate the Tsirelson bound?\nThis objection, however, involves a description of the situation regarding the\nsharing of an entangled quantum system that is below the surface-level\ncharacterisation that is relevant to our discussion. It therefore does not\nundermine the considerations of the previous section.\n\nConsider the description of a classical bipartite communication protocol. Both\nbefore and after communication has taken place, such a description may be\nregarded as decomposable into three parts: a sending system, a receiving\nsystem, and something communicated between them. For a quantum protocol the\npossibility of such a decomposition is in general far less obvious as a result\nof the well-known conceptual intricacies associated with entangled quantum\nstates. 
However whether or not Alice and her system, and Bob and his system,\nare `in reality' inseparably entangled with one another, it remains the case,\nboth before (because of quantum mechanics' satisfaction of the no-signalling\ncondition) and after the communication of a classical message (because of\nquantum mechanics' satisfaction of the information causality condition), that\nAlice's system, Bob's system, and the message $c$ may be operationally\ndistinguished from one another in the sense that Bob cannot take advantage of\nthe underlying connection he has with Alice and her system via the correlations\nhe shares with her to gain information about her data set over and above what\nhas been provided to him via $c$. It is true that previously shared quantum\ncorrelations enable one to communicate with greater efficiency than is possible\nusing only previously shared classical correlations. As \\eqref{eqn:prguess}\nshows, Bob has a higher probability of guessing correctly in the information\ncausality game if he and Alice have previously shared quantum as opposed to\nclassical correlations.\\footnote{This is true in other contexts besides that of\n the information causality game. See, e.g., \\citet[]{buhrman2001,\n brukner2002, brukner2004}.} And the question arises regarding the source\nof this increased communicational power. But whatever that source is, it is not\nthe case that it manifests itself in nonlocality or nonseparability at the\n\\emph{operational} level.\\footnote{Compare this with \\citet[]{buhrman2001},\n who writes that entanglement enables one to ``\\emph{circumvent} (rather\n than simulate) communication'' (p. 
1831, emphasis in original), and also\n with \\citet[]{bub2010}'s discussion of entanglement in the context of\n quantum computation, which he argues allows a quantum computer to compute a\n global property of a function by performing fewer, not more, computations\n than classical computers.} This is in contrast to systems whose correlations\nviolate the Tsirelson bound.\n\nBut the game described by \\citet[]{pawlowski2009} involves the communication of\n\\emph{classical} bits from Alice to Bob. Might not this limitation in Bob's\nability to take advantage of his underlying connection with Alice be overcome if\nwe allow her to send him qubits rather than only classical bits? Indeed, it is\nwell known that if Alice sends a qubit to Bob that is entangled with a qubit\nthat is already in his possession, then Alice and Bob can implement the\n`superdense coding' protocol \\citep[\\S{}2.3]{nielsenChuang2000}; Alice's sending\nof a single qubit to Bob according to this protocol will allow him to learn two\nbits' worth of classical information.\\footnote{In the context of a suitably\n generalised version of the information causality game, it turns out that a\n two-bit information gain per qubit constitutes an upper bound\n \\citep[]{pitaluaGarcia2013}.} Does this not undermine the claim that quantum\ncorrelations contribute nothing over and above whatever message is sent between\nAlice and Bob to the information gained by him?\n\nIt does not. On the one hand, before the transmission of the qubit(s) from Alice\nto Bob, no-signalling implies that Alice and Bob can be considered as\noperationally separable despite their sharing an entangled system, as we have\nseen above. On the other hand, in the superdense coding protocol, after Alice\ntransmits her message to Bob, all of the correlated quantum system that was\ninitially shared is now in Bob's possession. 
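The superdense coding protocol just mentioned can be verified with a few lines of linear algebra. The sketch below is a standard textbook construction, not code from the paper, and the particular assignment of bit pairs to Pauli operations is one convention among several equivalent ones: each of Alice's four local operations steers the shared Bell pair into a distinct member of the orthonormal Bell basis, so Bob's Bell measurement recovers two classical bits from the single transmitted qubit.

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2); qubit 0 is Alice's, qubit 1 Bob's.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

# Alice encodes two classical bits by acting on her qubit alone.
encode = {(0, 0): I, (0, 1): X, (1, 0): Z, (1, 1): Z @ X}

# After Alice sends her qubit, Bob measures both qubits in the Bell basis.
bell_basis = {
    (0, 0): np.array([1, 0, 0, 1]) / np.sqrt(2),   # |Phi+>
    (0, 1): np.array([0, 1, 1, 0]) / np.sqrt(2),   # |Psi+>
    (1, 0): np.array([1, 0, 0, -1]) / np.sqrt(2),  # |Phi->
    (1, 1): np.array([0, 1, -1, 0]) / np.sqrt(2),  # |Psi->
}

for bits, U in encode.items():
    state = np.kron(U, I) @ bell   # Alice's local operation on her qubit
    # Bob's Bell measurement identifies the state with certainty:
    outcome = max(bell_basis, key=lambda b: abs(bell_basis[b] @ state))
    assert outcome == bits
```

Since the four resulting states are mutually orthogonal, Bob's measurement succeeds deterministically, in line with the two-bits-per-qubit bound noted in the footnote.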
So after transmission there is no\nsense in which Bob can take advantage of correlations shared with Alice at that\ntime. In a sense Alice's message to Bob `just is' information regarding the\ncorrelations that exist between them at the time at which she sends\nit.\\footnote{This conclusion is essentially that of\n \\citet[p. 032110-20]{spekkens2007}. Fascinatingly, Spekkens also shows that\n the superdense coding protocol can be implemented in his toy classical\n theory.}\n\nAs we have seen, when Alice and Bob share PR-correlated systems, they can win a\nround with certainty in the $m = 1$ game for any $N$ by exchanging a single\nclassical bit. Earlier I also mentioned \\citeauthor[]{vanDam2013}'s\n\\citeyearpar[]{vanDam2013} result to the effect that PR-correlated systems allow\none to perform \\emph{any} distributed computational task with only a trivial\namount of communication. These results are striking. However the reader may\nnevertheless feel somewhat unimpressed by them for the following reason: the\nnumber of PR-correlated systems required to implement these protocols, as we\nhave seen, is great. With respect to the length $n$ of Bob's bit string\n$\\mathbf{b}$ (arguably the most appropriate measure of input size for the game),\nimplementing the solution described above requires that they share $2^n-1$\nPR-systems; i.e. the number of PR-systems required grows exponentially with the\ninput size. Likewise for van Dam's protocol.\\footnote{Specifically, van Dam's\n \\citeyearpar[]{vanDam2013} protocol requires a number of systems that can\n grow exponentially with respect to the input size of an instance of the\n Inner Product problem, after which the solution can be efficiently converted\n into a solution to any other distributed computational problem.} A reduction\nin \\emph{communication} complexity has therefore been achieved only at the\nexpense of an increase in \\emph{computational} complexity. 
One might argue that\nit is in this sense misleading to consider the complexity of implementing the\nprotocol with PR-correlated systems to be trivial---that they provide us with a\n`free lunch'.\n\nI will return to this point later. But for now let me say that, arguably, this\nis not a relevant consideration in this context. The theories of communication\ncomplexity and computational complexity are distinct sub-disciplines of computer\nscience. The goal of communication complexity is to quantify the amount of\ncommunication necessary to implement various communicational protocols. For this\npurpose one abstracts away from any consideration of how complicated a\ncomputational system must be in other respects \\citep[]{hushilevitz1997}. The\nquestion addressed in \\citet[]{vanDam2013} and in \\citet[]{pawlowski2009} and\n\\citet{pawlowski2016} concerns whether the availability of PR-correlated systems\nwould make communicational, not computational, complexity theory\nsuperfluous. From this point of view any previously prepared PR-correlated\nsystems are viewed as `free resources' for the purposes of the analysis.\n\nThis said, one can imagine that the subsystems of PR-correlated systems employ\nsome hidden means of communication with one another, and then argue that this\nmust be included in the complexity ascribed to the protocol. This would of\ncourse constitute a descent below the empirically verifiable level. In itself\nthis is obviously not objectionable. But it is hard to see what use this would\nbe to a theory of communicational complexity, which after all, like\ncomputational complexity \\citep[]{cuffaro2018}, aims to be a practical science\nwhose goal is to guide us in making distinctions in practice between real\nproblems related to data transmission that are of varying levels of\ndifficulty. 
In this sense appealing to unseen and unmanipulable communication\nbetween the subsystems of PR-systems does nothing to avert the conclusion that\ncommunication complexity theory, at least in an operational sense, becomes\nsuperfluous if PR-correlated systems are available. The objection addressed in\nthe previous two paragraphs is nevertheless an important one that I will return\nto.\n\nAbove I have motivated the idea, due to \\citet[p. 1101]{pawlowski2009}, that the\nkind of accessibility of remote data that is possible given the existence of\ncorrelated systems which violate the Tsirelson bound is `implausible'. I have\ndone so by describing, \\emph{pace} \\citet[]{bub2012}, the sense in which\ninformation causality can be taken to generalise no-signalling. In so doing I\nhave gestured at a connection between the idea of implausible accessibility and\nthe \\emph{prima facie} separate idea that a world in which\nTsirelson-bound-violating correlated systems exist would be `too good to be\ntrue' in a communicational complexity-theoretic sense. My arguments have been\nmainly conceptual. I have argued, that is, that a kind of conceptual ambiguity\nat the operational level between the parties to a communicational protocol may\nresult if correlations which violate the Tsirelson bound are available to\nuse. As we have seen, when such stronger-than-quantum correlations are strong\nenough (i.e. when $E > \\sqrt{6}\/3$), this results in the trivial communicational\ncomplexity of any distributed computational task. But trivial communicational\ncomplexity does not result, or anyway has not yet been shown to result, for\nvalues of $E$ above the Tsirelson bound of $1\/\\sqrt 2$ that are below\n$\\sqrt{6}\/3$. 
This is despite the fact that the conceptual ambiguity I have\ndescribed is present to some extent for all such values of $E$.\n\n\\begin{sloppypar}\nThus one may wonder whether `a little' ambiguity may be tolerable for practical\npurposes---whether, that is, a theory which admits correlations which only\n`weakly' violate the Tsirelson bound should be admitted within the space of\npossible physical theories from the point of view of the information causality\nprinciple. The situation could be seen as analogous to the situation one is\nfaced with in Newtonian mechanics, in fact, for Corollary VI (which I described\nin \\S\\ref{sec:demopoulos}) only guarantees that a system in the presence of\nexternal forces can be treated as (quasi-) closed when these forces act\n\\emph{exactly} equally upon it and are \\emph{exactly} parallel. Clearly this is\nnot the case for the Jovian system \\emph{vis-\\`a-vis} the sun, for\nexample. Corollary VI---and Proposition 3---nevertheless function as\nmethodological tools in that they allow us to maintain the idea of the\nmutually independent existence of spatially distant things as a methodological\nprinciple and treat the Jovian system, for the practical purpose of analysing\nits internal motions, as unaffected by the forces exerted upon it by the sun.\n\\end{sloppypar}\n\nThere is much work to be done before information causality can be considered\nsuccessful in ruling out---in the conceptual sense described in the previous\ntwo paragraphs---\\emph{all} theories whose correlations violate the Tsirelson\nbound. Irrespective of whether this goal can be achieved, however, this does\nnot necessarily undermine the status of information causality motivated as a\nmethodological principle in something like the way that I have done in this\npaper. 
In particular, information causality would be especially compelling if\none could draw a relation between the degree of violation of the principle and\nthe degree of `superfluousness' of the resulting theory of communication\ncomplexity with an eye to distinguishing `weak' violations of the Tsirelson\nbound from more objectionable violations. Thus there is much work to do in any\ncase.\n\nI close with the following more fundamental objection. Why should nature care\nwhether beings such as us are able to engage in communication complexity\ntheory? In fact there is no fundamental reason why nature should\ncare. Analogously, there is no fundamental reason why nature should care\nwhether beings such as us can do physics. But the goal of empirical science is\nnot to derive the structure of the world or its constituent entities by way of\na priori or `self-evident' principles. It is rather to make sense of and\nexplain our experience of and in the world, as well as to enable us to predict\nand to control aspects of that world for whatever particular practical purposes\nwe may have. In fact we have a science which is called physics. And in fact we\nhave a science which we refer to as communication complexity theory. The\nprinciple of mutually independent existence, and analogously the principle of\ninformation causality, may be thought of as answers to the question: `how are\nsuch facts possible?' in the sense that they aim to identify the necessary\nsuppositions implicit in \\emph{any} such theories and in our practice of\nthem.\\footnote{Cf. \\citet[pp. B20-B21]{kant1781german}.}\n\nThat said, these may not be definitive answers. The necessity of presupposing\nEinstein's mutual independence and local action principles for the purposes of\ntheory testing has been questioned by \\citet[]{howard1989}. 
In a similar way,\none might argue that it is wrong to think that the existence of correlated\nsystems which `strongly' violate the Tsirelson bound would make any science of\ncommunication complexity impossible. Rather, one might conclude instead that\nthe idea of a science of communication complexity that is wholly independent of\n\\emph{computational} complexity-theoretic considerations is unachievable. This,\none might argue, is the real lesson to take away from the fact that an\nexponential number of PR-correlated systems is required to implement Alice's\nand Bob's solution to their guessing game. Yet even if this were all that we\nlearned from information causality, it would still represent a significant\nadvance in our understanding of the structure of our theoretical knowledge---an\nunderstanding of the physically motivated constraints under which two\nmathematical \\emph{theories} may be regarded as mutually independent.\n\n\\section{Summary}\n\\label{sec:conc}\n\nAbove I have argued that the principle of information causality has not yet\nbeen sufficiently motivated to play the role of a foundational principle of\nnature, and I have described a way in which one might begin to provide it with\nsuch a motivation. More specifically I described an argument, due to\nDemopoulos, to the effect that the no-signalling condition can be viewed as a\ngeneralisation, appropriate to an irreducibly statistical theory, of Einstein's\nprinciple of mutually independent existence interpreted as a constraint on\nphysical practice. I then argued that information causality can in turn be\nmotivated as a further generalisation of no-signalling that is appropriate to a\ntheory of communication. 
I closed by describing a number of important obstacles\nthat are required to be overcome if the project of establishing information\ncausality as a foundational principle is to succeed.\n\n\\bibliographystyle{apa-good}\n\n\\subsection{Cumulative Absolute Value Estimations of $\\delta$-Bounded Polynomials}\n\\label{s:values}\n\nTo bound the total error of the algorithm, in Section~\\ref{s:pip_together}, we need an upper bound on $\\sum_{j \\in N} \\tb_j$, i.e., on the sum of the cumulative absolute value estimations at the top level of the decomposition of a $\\beta$-smooth $\\delta$-bounded polynomial $p(\\vec{x})$. In this section, we show that $\\sum_{j \\in N} \\tb_j = O(d^2 \\beta n^{d-1+\\delta})$. This upper bound is an immediate consequence of an upper bound of $O(d\\beta n^{d-1+\\delta})$ on the sum of the absolute value estimations, for each level $\\ell$ of the decomposition of $p(\\vec{x})$. \n\nFor simplicity and clarity, we assume, in the statements of the lemmas below and in their proofs, that the hidden constant in the definition of $p(\\vec{x})$ as a $\\delta$-bounded polynomial is $1$. If this constant is some $\\kappa \\geq 1$, we should multiply the upper bounds of Lemma~\\ref{l:abs_est} and Lemma~\\ref{l:cum_est} by $\\kappa$. \n\\begin{lemma}\\label{l:abs_est}\nLet $p(\\vec{x})$ be an $n$-variate degree-$d$ $\\beta$-smooth $\\delta$-bounded polynomial. Also let $\\rho_{i_1 \\ldots i_{d-\\ell}}$ and $\\rb_{i_1 \\ldots i_{d-\\ell}}$ be the estimations and absolute value estimations, for all levels $\\ell \\in \\{1, \\ldots, d-1\\}$ of the decomposition of $p(\\vec{x})$ and all tuples $(i_1, \\ldots, i_{d-\\ell}) \\in N^{d-\\ell}$, computed by Algorithm~\\ref{alg:estimate} and used in ($d$-LP) and ($d$-IP). 
Then, for each level $\\ell \\geq 1$, the sum of the absolute value estimations is:\n\\begin{equation}\\label{eq:abs_est}\n \\sum_{(i_1, \\ldots, i_{d-\\ell}) \\in N^{d-\\ell}} \\rb_{i_1\\ldots i_{d-\\ell}} \\leq\n \\ell\\beta n^{d-1+\\delta}\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nThe proof is by induction on the level $\\ell$ of the decomposition. For the basis, we recall that for $\\ell = 1$, level-$1$ absolute value estimations are defined as \n\\[ \\rb_{i_1\\ldots i_{d-1}} = \\sum_{j \\in N} |\\rho_{i_1\\ldots i_{d-1} j}|\n = \\sum_{j \\in N} |c_{i_1\\ldots i_{d-1} j}|\n\\]\nThis holds because, in Algorithm~\\ref{alg:estimate}, each level-$0$ estimation $\\rho_{i_1\\ldots i_{d-1} i_d}$ is equal to the coefficient $c_{i_1\\ldots i_{d-1} i_d}$ of the corresponding degree-$d$ monomial. Hence, if $p(\\vec{x})$ is a degree-$d$ $\\beta$-smooth $\\delta$-bounded polynomial, we have that\n\\begin{equation}\\label{eq:bounded_level1}\n \\sum_{(i_1, \\ldots, i_{d-1}) \\in N^{d-1}} \\rb_{i_1\\ldots i_{d-1}}\n = \\sum_{(i_1, \\ldots, i_{d-1}, j) \\in N^{d}} |c_{i_1\\ldots i_{d-1} j}|\n \\leq \\beta n^{d-1+\\delta}\n\\end{equation}\nThe upper bound holds because by the definition of degree-$d$ $\\beta$-smooth $\\delta$-bounded polynomials, for each $\\ell \\in \\{ 0, \\ldots, d \\}$, the sum, over all monomials of degree $d-\\ell$, of the absolute values of their coefficients is $O(\\beta n^{d-1+\\delta})$ (and assuming that the hidden constant is $1$, at most $\\beta n^{d-1+\\delta}$). In (\\ref{eq:bounded_level1}), we use this upper bound for $\\ell = 0$ and for the absolute values of the coefficients of all degree-$d$ monomials in the expansion of $p(\\vec{x})$.\n\nFor the induction step, we consider any level $\\ell \\geq 2$. 
We observe that any binary vector $\\vec{x}$ satisfies the level-$(\\ell-1)$ constraints of ($d$-LP) and ($d$-IP) with certainty, if for each level-$(\\ell-1)$ estimation,\n\\[ \n \\rho_{i_1\\ldots i_{d-\\ell}j} \\leq \n c_{i_1\\ldots i_{d-\\ell}j} + \\sum_{l \\in N} |\\rho_{i_1\\ldots i_{d-\\ell} j l}| =\n c_{i_1\\ldots i_{d-\\ell}j} + \\rb_{i_1\\ldots i_{d-\\ell} j}\n\\]\nWe also note that we can easily enforce such upper bounds on the estimations computed by Algorithm~\\ref{alg:estimate}. Since each level-$\\ell$ absolute value estimation is defined as $\\rb_{i_1\\ldots i_{d-\\ell}} = \\sum_{j \\in N} |\\rho_{i_1\\ldots i_{d-\\ell}j}|$, we obtain that for any level $\\ell \\geq 2$,\n\\begin{eqnarray*}\n \\sum_{(i_1, \\ldots, i_{d-\\ell}) \\in N^{d-\\ell}} \\rb_{i_1\\ldots i_{d-\\ell}} & \\leq &\n \\sum_{(i_1, \\ldots, i_{d-\\ell}, j) \\in N^{d-\\ell+1}} \\left(|c_{i_1\\ldots i_{d-\\ell}j}| +\n \\rb_{i_1\\ldots i_{d-\\ell} j} \\right)\\\\\n & \\leq & \\beta n^{d-1+\\delta} + (\\ell-1)\\beta n^{d-1+\\delta}\n = \\ell\\beta n^{d-1+\\delta}\n\\end{eqnarray*}\nFor the second inequality, we use the induction hypothesis and that since $p(\\vec{x})$ is $\\beta$-smooth and $\\delta$-bounded, the sum, over all monomials of degree $d-\\ell+1$, of the absolute values $|c_{i_1\\ldots i_{d-\\ell}j}|$ of their coefficients $c_{i_1\\ldots i_{d-\\ell}j}$ is at most $\\beta n^{d-1+\\delta}$. We also use the fact that the estimations are computed over the decomposition tree of the polynomial $p(\\vec{x})$. Hence, each coefficient $c_{i_1\\ldots i_{d-\\ell}j}$ is included only once in the sum.\n\\qed\\end{proof}\n\\begin{lemma}\\label{l:cum_est}\nLet $p(\\vec{x})$ be an $n$-variate degree-$d$ $\\beta$-smooth $\\delta$-bounded polynomial. 
Also let $\\tb_{i_1 \\ldots i_{d-\\ell}}$ be the cumulative absolute value estimations, for all levels $\\ell \\in \\{1, \\ldots, d-1\\}$ of the decomposition of $p(\\vec{x})$ and all tuples $(i_1, \\ldots, i_{d-\\ell}) \\in N^{d-\\ell}$, corresponding to the estimations $\\rho_{i_1 \\ldots i_{d-\\ell}}$ computed by Algorithm~\\ref{alg:estimate} and used in ($d$-LP) and ($d$-IP). Then, \n\\begin{equation}\\label{eq:cum_est}\n \\sum_{j \\in N} \\tb_{j} \\leq d(d-1)\\beta n^{d-1+\\delta}\/2\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nUsing induction on the level $\\ell$ of the decomposition and Lemma~\\ref{l:abs_est}, we show that for each level $\\ell \\geq 1$, the sum of the cumulative absolute value estimations is:\n\\begin{equation}\\label{eq:cum_est2}\n \\sum_{(i_1, \\ldots, i_{d-\\ell}) \\in N^{d-\\ell}} \\tb_{i_1\\ldots i_{d-\\ell}} \\leq\n (\\ell+1)\\ell\\beta n^{d-1+\\delta}\/2\n\\end{equation}\nThe conclusion of the lemma is obtained by applying (\\ref{eq:cum_est2}) for the first level of the decomposition of $p(\\vec{x})$, i.e., for $\\ell = d-1$.\n\nFor the basis, we recall that for $\\ell = 1$, level-$1$ cumulative absolute value estimations are defined as\n\\( \\tb_{i_1 \\ldots i_{d-1}} = \\rb_{i_1 \\ldots i_{d-1}} \\). Using Lemma~\\ref{l:abs_est}, we obtain that:\n\\[ \\sum_{(i_1, \\ldots, i_{d-1}) \\in N^{d-1}} \\tb_{i_1\\ldots i_{d-1}} =\n \\sum_{(i_1, \\ldots, i_{d-1}) \\in N^{d-1}} \\rb_{i_1\\ldots i_{d-1}}\n \\leq \\beta n^{d-1+\\delta}\n\\]\nWe recall (see also Section~\\ref{s:pip_value}) that for each $\\ell \\geq 2$, level-$\\ell$ cumulative absolute value estimations are defined as\n\\( \\tb_{i_1 \\ldots i_{d-\\ell}} = \\rb_{i_1 \\ldots i_{d-\\ell}} + \\sum_{j \\in N} \\tb_{i_1 \\ldots i_{d-\\ell}j} \\). 
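As an aside, both recursions, together with the bounds of Lemma~\ref{l:abs_est} and Lemma~\ref{l:cum_est}, can be sanity-checked numerically on a toy instance. The sketch below is illustrative only and not part of the formal argument: it does not reproduce Algorithm~\ref{alg:estimate}, but instead uses the worst-case estimations from the proofs (each estimation's absolute value equal to the corresponding coefficient's absolute value plus the absolute value estimation below it), with random coefficients and with the largest per-degree coefficient mass standing in for $\beta n^{d-1+\delta}$:

```python
import itertools
import random

random.seed(1)
n, d = 4, 3                          # toy instance (illustrative values)
N = list(range(n))

# Hypothetical coefficients c_t for monomials of every degree 1..d.
c = {t: random.uniform(-1, 1)
     for L in range(1, d + 1)
     for t in itertools.product(N, repeat=L)}

# B plays the role of beta * n^(d-1+delta): an upper bound on the sum of
# |c_t| over the monomials of any single degree.
B = max(sum(abs(c[t]) for t in itertools.product(N, repeat=L))
        for L in range(1, d + 1))

def rho_abs(t):
    # Worst-case estimations used in the proofs: |rho_t| = |c_t| + rb(t),
    # with the level-0 estimations equal to the degree-d coefficients.
    return abs(c[t]) + (0 if len(t) == d else rb(t))

def rb(t):                           # absolute value estimation
    return sum(rho_abs(t + (j,)) for j in N)

def tb(t):                           # cumulative absolute value estimation
    return rb(t) + (0 if len(t) == d - 1
                    else sum(tb(t + (j,)) for j in N))

# Lemma (abs_est): the level-l sum is at most l * B.
for level in range(1, d):
    assert sum(rb(t) for t in itertools.product(N, repeat=d - level)) \
        <= level * B + 1e-9

# Lemma (cum_est): sum_j tb_j is at most d(d-1)/2 * B.
assert sum(tb((j,)) for j in N) <= d * (d - 1) / 2 * B + 1e-9
```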
\nSumming up over all tuples $(i_1, \\ldots, i_{d-\\ell}) \\in N^{d-\\ell}$, we obtain that for any level $\\ell \\geq 2$,\n\\begin{eqnarray*}\n \\sum_{(i_1, \\ldots, i_{d-\\ell}) \\in N^{d-\\ell}} \\tb_{i_1\\ldots i_{d-\\ell}} & = &\n \\sum_{(i_1, \\ldots, i_{d-\\ell}) \\in N^{d-\\ell}} \\left( \\rb_{i_1\\ldots i_{d-\\ell}}\n +\n \\sum_{j \\in N} \\tb_{i_1 \\ldots i_{d-\\ell}j} \\right) \\\\\n & = & \n \\sum_{(i_1, \\ldots, i_{d-\\ell}) \\in N^{d-\\ell}} \\rb_{i_1\\ldots i_{d-\\ell}}\n + \\sum_{(i_1, \\ldots, i_{d-\\ell}, j) \\in N^{d-\\ell+1}} \\tb_{i_1 \\ldots i_{d-\\ell}j} \\\\\n & \\leq & \\ell \\beta n^{d-1+\\delta} + \\ell(\\ell-1)\\beta n^{d-1+\\delta}\/2 \n = (\\ell+1)\\ell\\beta n^{d-1+\\delta}\/2\\,,\n\\end{eqnarray*}\nwhere the inequality follows from Lemma~\\ref{l:abs_est} and from the induction hypothesis.\n\\qed\\end{proof}\n\\section{Introduction}\n\\label{s:intro}\n\nThe complexity of Constraint Satisfaction Problems (CSPs) has long played a\ncentral role in theoretical computer science, and it quickly became evident that\nalmost all interesting CSPs are NP-complete \\cite{S78}. Thus, since\napproximation algorithms are one of the standard tools for dealing with NP-hard\nproblems, the question of approximating the corresponding optimization problems\n({\\sc Max}-CSP) has attracted significant interest over the years \\cite{T10}.\nUnfortunately, most CSPs typically resist this approach: not only are they\nAPX-hard \\cite{KSW97}, but quite often the best polynomial-time approximation\nratio we can hope to achieve for them is that guaranteed by a trivial random\nassignment \\cite{H01}. This striking behavior is often called\n\\emph{approximation resistance}.\n\nApproximation resistance and other APX-hardness results were originally\nformulated in the context of \\emph{polynomial-time} approximation. 
It would\ntherefore seem that one conceivable way for working around such barriers could\nbe to consider approximation algorithms running in super-polynomial time, and\nindeed super-polynomial approximation for NP-hard problems is a topic that has\nbeen gaining more attention in the literature recently\n\\cite{CLN13,BEP09,BCEP13,CKW09,CP10,CPW11}. Unfortunately, the existence of\nquasi-linear PCPs with small soundness error, first given in the work of\nMoshkovitz and Raz \\cite{MR10}, established that approximation resistance is a\nphenomenon that carries over even to \\emph{sub-exponential} time approximation,\nessentially ``killing'' this approach for CSPs. For instance, we now know that\nif, for any $\\eps>0$, there exists an algorithm for {\\sc Max}-3-SAT with ratio\n$7\/8+\\eps$ running in time $2^{n^{1-\\eps}}$, then this would imply the existence of a\nsub-exponential \\emph{exact} algorithm for 3-SAT, disproving the Exponential\nTime Hypothesis (ETH). It therefore seems that sub-exponential time\ndoes not improve the approximability of CSPs, or put another way, for many CSPs\nobtaining a very good approximation ratio requires almost as much time as\nsolving the problem exactly.\n\nDespite this grim overall picture, many positive approximation results for CSPs\nhave appeared over the years, by taking advantage of the special structure of\nvarious classes of instances. One notable line of research in this vein is the\nwork on the approximability of \\emph{dense} CSPs, initiated by Arora, Karger\nand Karpinski \\cite{AKK99} and independently by de la Vega \\cite{V96}. The\ntheme of this set of results is that the problem of maximizing the number of\nsatisfied constraints in a CSP instance with arity $k$ (\\kCSP) becomes\nsignificantly easier if the instance contains $\\Omega(n^k)$ constraints. 
More\nprecisely, it was shown in \\cite{AKK99} that \\kCSP\\ admits a\n\\emph{polynomial-time approximation scheme} (PTAS) on dense instances, that is,\nan algorithm which for any constant $\\eps>0$ can in time polynomial in $n$\nproduce an assignment that satisfies at least $(1-\\eps)\\mathrm{OPT}$ constraints.\nSubsequent work produced a stream of positive\n\\cite{VK00,BVK03,AVKK03,CKSV12,CKSV11,FK96,AFK02,DFJ98,II05} (and some negative\n\\cite{VK99,AA07}) results on approximating CSPs which are in general APX-hard,\nshowing that dense instances form an island of tractability where many\noptimization problems which are normally APX-hard admit a PTAS.\n\n\\noindent\\textbf{Our contribution}: The main goal of this paper is to use the\nadditional power afforded by sub-exponential time to extend this island of\ntractability as much as possible. To demonstrate the main result, consider a\nconcrete CSP such as {\\sc Max}-3-SAT. As mentioned, we know that\nsub-exponential time does not in general help us approximate this problem: the\nbest ratio achievable in, say, $2^{\\sqrt{n}}$ time is still $7\/8$. On the other\nhand, this problem admits a PTAS on instances with $\\Omega(n^3)$ clauses. This\ndensity condition is, however, rather strict, so the question we would like to\nanswer is the following: Can we efficiently approximate a larger (and more\nsparse) class of instances while using sub-exponential time?\n\nIn this paper we provide a positive answer to this question, not just for {\\sc\nMax}-3-SAT, but also for any \\kCSP\\ problem. Specifically, we show that for\nany constants $\\delta\\in (0,1]$, $\\eps>0$ and integer $k\\ge 2$, there is an\nalgorithm which achieves a $(1-\\eps)$ approximation of \\kCSP\\ instances with\n$\\Omega(n^{k-1+\\delta})$ constraints in time $2^{O(n^{1-\\delta}\\ln n\n\/\\eps^3)}$. A notable special case of this result is for $k=2$, where the input\ninstance can be described as a graph. 
For this case, which contains classical\nproblems such as \\MC, our algorithm gives an approximation scheme running in\ntime $2^{O(\\frac{n}{\\Delta}\\ln n\/\\eps^3)}$ for graphs with average degree\n$\\Delta$. In other words, this is an approximation scheme that runs in time\n\\emph{sub-exponential in $n$} even for almost sparse instances where the\naverage degree is $\\Delta = n^\\delta$ for some small $\\delta>0$. More\ngenerally, our algorithm provides a trade-off between the time available and\nthe density of the instances we can handle. For graph problems ($k=2$) this\ntrade-off covers the whole spectrum from dense to almost sparse instances,\nwhile for general \\kCSP, it covers instances where the number of constraints\nranges from $\\Theta(n^{k})$ to $\\Theta(n^{k-1})$.\n\n\\noindent\\textbf{Techniques}: The algorithms in this paper are an extension and\ngeneralization of the \\emph{exhaustive sampling} technique given by Arora,\nKarger and Karpinski \\cite{AKK99}, who introduced a framework of smooth\npolynomial integer programs to give a PTAS for dense \\kCSP. The basic idea of\nthat work can most simply be summarized for \\MC. This problem can be recast as\nthe problem of maximizing a quadratic function over $n$ boolean variables.\nThis is of course a hard problem, but suppose that we could somehow ``guess''\nfor each vertex how many of its neighbors belong in each side of the cut. This\nwould make the quadratic problem linear, and thus much easier. The main\nintuition now is that, if the graph is dense, we can take a sample of $O(\\log\nn)$ vertices and guess their partition in the optimal solution. Because every\nnon-sample vertex will have ``many'' neighbors in this sample, we can, with\nhigh confidence, estimate the fraction of neighbors on each side for every\nvertex. The work of de la Vega \\cite{V96} uses exactly this algorithm\nfor \\MC, greedily deciding the vertices outside the sample. 
The work of\n\\cite{AKK99} on the other hand pushed this idea to its logical conclusion,\nshowing that it can be applied to degree-$k$ polynomial optimization problems,\nby recursively turning them into linear programs whose coefficients are\nestimated from the sample. The linear programs are then relaxed to produce\nfractional solutions, which can be rounded back into an integer solution to the\noriginal problem.\n\nOn a very high level, the approach we follow in this paper retraces the steps\nof \\cite{AKK99}: we formulate \\kCSP\\ as a degree-$k$ polynomial maximization\nproblem; we then recursively decompose the degree-$k$ polynomial problem into\nlower-degree polynomial optimization problems, estimating the coefficients by\nusing a sample of variables for which we try all assignments; the result of\nthis process is an integer linear program, for which we obtain a fractional\nsolution in polynomial time; we then perform randomized rounding to obtain an\ninteger solution that we can use for the original problem.\n\nThe first major difference between our approach and \\cite{AKK99} is of course\nthat we need to use a larger sample. This becomes evident if one considers \\MC\\\non graphs with average degree $\\Delta$. In order to get the sampling scheme to\nwork we must be able to guarantee that each vertex outside the sample has\n``many'' neighbors inside the sample, so we can safely estimate how many of\nthem end up on each side of the cut. For this, we need a sample of size at\nleast $n\\log n\/\\Delta$. Indeed, we use a sample of roughly this size, and exhausting\nall assignments to the sample is what dominates the running time of our\nalgorithm. As we argue later, not only is the sample size we use essentially\ntight, but more generally the running time of our algorithm is essentially\noptimal (under the ETH).\n\nNevertheless, using a larger sample is not in itself sufficient to extend the\nscheme of \\cite{AKK99} to non-dense instances. 
As observed in \\cite{AKK99} ``to\nachieve a multiplicative approximation for dense instances it suffices to\nachieve an additive approximation for the nonlinear integer programming\nproblem''. In other words, one of the basic ingredients of the analysis of\n\\cite{AKK99} is that additive approximation errors of the order $\\eps n^k$ can\nbe swept under the rug, because we know that in a dense instance the optimal\nsolution has value $\\Omega(n^k)$. This is \\emph{not} true in our case, and we\nare therefore forced to give a more refined analysis of the error of our\nscheme, independently bounding the error introduced in the first step\n(coefficient estimation) and the last (randomized rounding).\n\nA further complication arises when considering \\kCSP\\ for $k>2$. The scheme of\n\\cite{AKK99} recursively decomposes such dense instances into lower-order\npolynomials which retain the same ``good'' properties. This seems much harder\nto extend to the non-dense case, because intuitively if we start from a\nnon-dense instance the decomposition could end up producing some dense and\nsome sparse sub-problems. Indeed we present a scheme that approximates \\kCSP\\\nwith $\\Omega(n^{k-1+\\delta})$ constraints, but does not seem to extend to\ninstances with fewer than $n^{k-1}$ constraints. As we will see, there seems\nto be a fundamental complexity-theoretic justification explaining exactly why\nthis decomposition method cannot be extended further.\n\nTo ease presentation, we first give all the details of our scheme for the\nspecial case of \\MC\\ in Section \\ref{s:maxcut}. We then present the full\nframework for approximating \\emph{smooth polynomials} in Section \\ref{s:pip};\nthis implies the approximation result for \\kSAT\\ and more generally \\kCSP. We\nthen show in Section \\ref{s:kdense} that it is possible to extend our framework\nto handle \\kDense, a problem which can be expressed as the maximization of a\npolynomial subject to linear constraints. 
For this problem we obtain an\napproximation scheme which, given a graph with average degree $\\Delta=n^\\delta$\ngives a $(1-\\eps)$ approximation in time $2^{O(n^{1-\\delta\/3}\\ln n\/\\eps^3)}$.\nObserve that this extends the result of \\cite{AKK99} for this problem not only\nin terms of the density of the input instance, but also in terms of $k$ (the\nresult of \\cite{AKK99} required that $k=\\Omega(n)$).\n\n\\noindent\\textbf{Hardness}: What makes the results of this paper more\ninteresting is that we can establish that in many ways they are essentially\nbest possible, if one assumes the ETH. In particular, there are at least two\nways in which one may try to improve on these results further: one would be to\nimprove the running time of our algorithm, while another would be to extend the\nalgorithm to the range of densities it cannot currently handle. In Section\n\\ref{s:lower} we show that both of these approaches would face significant\nbarriers. Our starting point is the fact that (under ETH) it takes exponential\ntime to approximate \\MC\\ arbitrarily well on sparse instances, which is a\nconsequence of the existence of quasi-linear PCPs. By manipulating such \\MC\\\ninstances, we are able to show that for \\emph{any} average degree\n$\\Delta=n^{\\delta}$ with $\\delta<1$ the time needed to approximate \\MC\\\narbitrarily well almost matches the performance of our algorithm. Furthermore,\nstarting from sparse \\MC\\ instances, we can produce instances of \\kSAT\\ with\n$O(n^{k-1})$ clauses while preserving hardness of approximation. This gives a\ncomplexity-theoretic justification for our difficulties in decomposing \\kCSP\\\ninstances with less than $n^{k-1}$ constraints.\n\\section{Approximating the \\kDense\\ in Almost Sparse Graphs}\n\\label{s:kdense}\n\nIn this section, we show how an extension of the approximation algorithms we\nhave presented can be used to approximate the \\kDense\\ problem in\n$\\delta$-almost sparse graphs. 
Recall that this is a problem also handled in\n\\cite{AKK99}, but only for the case where $k=\\Omega(n)$. The reason that\nsmaller values of $k$ are not handled by the scheme of \\cite{AKK99} for dense\ngraphs is that when $k=o(n)$ the optimal solution has objective value much\nsmaller than the additive error of $\\eps n^2$ inherent in the scheme.\n\nHere we obtain a sub-exponential time approximation scheme that works on graphs\nwith $\\Omega(n^{1+\\delta})$ edges \\emph{for all} $k$ by judiciously combining\ntwo approaches: when $k$ is relatively large, we use a sampling approach\nsimilar to \\MC; when $k$ is small, we can resort to the na\\\"ive algorithm that\ntries all ${n \\choose k}$ possible solutions. We select (with some foresight) the\nthreshold between the two algorithms to be $k=\\Omega(n^{1-\\delta\/3})$, so that\nin the end we obtain an approximation scheme with running time\n$2^{O(n^{1-\\delta\/3}\\ln n)}$, that is, slightly slower than the approximation\nscheme for \\MC. It is clear that the brute-force algorithm achieves this\nrunning time for $k=O(n^{1-\\delta\/3})$, so in the remainder we focus on the\ncase of large $k$.\n\nThe \\kDense\\ problem in a graph $G(V, E)$ is equivalent to maximizing, over all\nbinary vectors $\\vec{x} \\in \\{0, 1\\}^n$, the $n$-variate degree-$2$ $1$-smooth\npolynomial\n\\( p(\\vec{x}) = \\sum_{\\{i, j\\} \\in E} x_i x_j \\)\\,,\nunder the linear constraint $\\sum_{j \\in V} x_j = k$. Setting a variable $x_i$ to $1$ indicates that the vertex $i$ is included in the set $C$ that induces a dense subgraph $G[C]$ of $k$ vertices. Next, we assume that $G$ is $\\delta$-almost sparse and thus, has $m = \\Omega(n^{1+\\delta})$ edges. As usual, $\\vec{x}^\\ast$ denotes the optimal solution.\n\nThe algorithm follows the same general approach and the same basic steps as the algorithm for \\MC\\ in Section~\\ref{s:maxcut}. In the following, we highlight only the differences. 
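To make the formulation above concrete, the following Python sketch (purely illustrative; the function names and the example graph are ours, not part of the scheme) evaluates the polynomial p and implements the brute-force branch of the algorithm, which tries all n-choose-k subsets when k is small:

```python
from itertools import combinations

def p(x, edges):
    # Degree-2 polynomial p(x) = sum over edges {i, j} of x_i * x_j.
    # For a 0/1 vector x with sum(x) = k, this is the number of edges
    # of the subgraph induced by the k chosen vertices.
    return sum(x[i] * x[j] for i, j in edges)

def naive_densest_k(n, edges, k):
    # Brute-force branch used for small k: try all (n choose k) vertex
    # subsets and keep the one inducing the most edges.
    best = 0
    for subset in combinations(range(n), k):
        chosen = set(subset)
        x = [1 if v in chosen else 0 for v in range(n)]
        best = max(best, p(x, edges))
    return best
```

For instance, on a triangle on vertices 0, 1, 2 with a pendant vertex 3, the densest 3-vertex subgraph is the triangle itself, with 3 induced edges.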
\n\n\\smallskip\\noindent{\\bf Obtaining Estimations by Exhaustive Sampling.} We first\nobserve that if $G$ is $\\delta$-almost sparse and $k = \\Omega(n^{1-\\delta\/3})$,\nthen a random subset of $k$ vertices contains $\\Omega(n^{1+\\delta\/3})$ edges in\nexpectation. Hence, we can assume that the optimal solution induces\n$\\Omega(n^{1+\\delta\/3})$ edges.\n\nWorking as in Section~\\ref{s:cut_sampling}, we use exhaustive sampling and\nobtain for each vertex $j \\in V$, an estimation $\\rho_j$ of the number of $j$'s neighbors in\nthe optimal dense subgraph, i.e., $\\rho_j$ is an estimation of $\\hat{\\rho}_j =\n\\sum_{i \\in N(j)} x_i^\\ast$. For the analysis, we apply Lemma~\\ref{l:cut_sampling}\nwith $n^{\\delta\/3}$, instead of $\\Delta$, or in other words, we use a sample of\nsize $\\Theta(n^{1-\\delta\/3}\\ln n)$. The reason is that we can only tolerate an\nadditive error of $\\eps n^{1+\\delta\/3}$, by the lower bound on the optimal\nsolution observed in the previous paragraph.\nThen, the running time due to exhaustive sampling is $ 2^{O(n^{1-\\delta\/3} \\ln\nn)}$. \n\nThus, by Lemma~\\ref{l:cut_sampling} and the discussion following it in\nSection~\\ref{s:cut_sampling}, we obtain that for all $\\e_1, \\e_2 > 0$, if we\nuse a sample of size $\\Theta(n^{1-\\delta\/3}\\ln n \/(\\e^2_1 \\e_2))$, with\nprobability at least $1 - 2\/n^2$, the following holds for all estimations\n$\\rho_j$ and all vertices $j \\in V$:\n\\begin{equation}\\label{eq:dense_sample}\n (1-\\e_1)\\rho_j - \\e_2 n^{\\delta\/3} \\leq \\hat{\\rho}_j \\leq\n (1+\\e_1)\\rho_j + \\e_2 n^{\\delta\/3}\n\\end{equation}\n\\noindent{\\bf Linearizing the Polynomial.}\nApplying Proposition~\\ref{pr:decomposition}, we can write the polynomial $p(\\vec{x})$ as $p(\\vec{x}) = \\sum_{j \\in V} x_j p_j(\\vec{x})$, where $p_j(\\vec{x}) = \\sum_{i \\in N(j)} x_i$ is a degree-$1$ $1$-smooth polynomial that indicates how many neighbors of vertex $j$ are in $C$ in the solution corresponding to $\\vec{x}$. 
Then, using the estimations $\\rho_j$ of $\\sum_{i \\in N(j)} x^\\ast_i$\\,, obtained by exhaustive sampling, we have that approximate maximization of $p(\\vec{x})$ can be reduced to the solution of the following Integer Linear Program:\n\\begin{alignat*}{3}\n& &\\max \\sum_{j \\in V} &y_j \\rho_j & & \\tag{IP$'$}\\\\\n&\\mathrm{s.t.}\\quad &\n(1-\\e_1) \\rho_j - \\e_2 n^{\\delta\/3} \\leq \\sum_{i \\in N(j)} &y_i \\leq (1+\\e_1) \\rho_j + \\e_2 n^{\\delta\/3} \\quad & \\forall &j \\in V\\\\\n& & \\sum_{j \\in V} &y_j = k \\\\\n& & &y_j \\in \\{0, 1\\} &\\forall & j \\in V\n\\end{alignat*}\nBy (\\ref{eq:dense_sample}), if the sample size is $|R| = \\Theta(n^{1-\\delta\/3}\\ln n\/(\\e^2_1 \\e_2))$, with probability at least $1-2\/n^2$, the densest subgraph $\\vec{x}^\\ast$ is a feasible solution to (IP$'$) with the estimations $\\rho_j$ obtained by restricting $\\vec{x}^\\ast$ to the vertices in $R$. In the following, we let (LP$'$) denote the Linear Programming relaxation of (IP$'$), where each $y_j \\in [0, 1]$.\n\n\\smallskip\\noindent{\\bf The Number of Edges in Feasible Solutions.}\nWe next show that the objective value of any feasible solution $\\vec{y}$ to (LP$'$) is close to $p(\\vec{y})$. Therefore, assuming that $\\vec{x}^\\ast$ is feasible, any good approximation to (IP$'$) is a good approximation to the densest subgraph. \n\\begin{lemma}\\label{l:dense_approx}\nLet $\\rho_1, \\ldots, \\rho_n$ be non-negative numbers and $\\vec{y}$ be any feasible solution to (LP$'$). 
Then,\n\\begin{equation}\\label{eq:dense_approx}\n p(\\vec{y}) \\in (1\\pm\\e_1)\\sum_{j \\in V} y_j \\rho_j \\pm \\e_2 n^{1+\\delta\/3}\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nUsing the decomposition of $p(\\vec{y})$ and the formulation of (LP$'$), we obtain that:\n\\begin{align*}\n p(\\vec{y}) = \\sum_{j \\in V} y_j \\sum_{i \\in N(j)} y_i \\ & \\in\n \\sum_{j \\in V} y_j \\left((1\\pm \\e_1) \\rho_j \\pm \\e_2 n^{\\delta\/3}\\right) \\\\\n &= (1\\pm \\e_1)\\sum_{j \\in V} y_j \\rho_j \\pm \\e_2 n^{\\delta\/3} \\sum_{j \\in V} y_j \\\\\n &\\in (1\\pm \\e_1)\\sum_{j \\in V} y_j \\rho_j \\pm \\e_2 n^{1+\\delta\/3}\n\\end{align*}\nThe first inclusion holds because $\\vec{y}$ is feasible for (LP$'$) and thus, $\\sum_{i \\in N(j)} y_i \\in (1\\pm \\e_1)\\rho_j \\pm \\e_2n^{\\delta\/3}$, for all $j$. The second inclusion holds because $\\sum_{j \\in V} y_j \\leq n$.\n\\qed\\end{proof}\n\\noindent{\\bf Randomized Rounding of the Fractional Optimum.}\nAs a last step, we show how to round the fractional optimum $\\vec{y}^\\ast =\n(y^\\ast_1, \\ldots, y^\\ast_n)$ of (LP$'$) to an integral solution $\\vec{z} =\n(z_1, \\ldots, z_n)$ that almost satisfies the constraints of (IP$'$). To this\nend, we use randomized rounding, as for \\MC. We obtain that with probability at\nleast $1 - 2\/n^{8}$,\n\\begin{equation}\\label{eq:k_deviation}\n k - 2\\sqrt{n\\ln(n)} \\leq\n \\sum_{j \\in V} z_j \\leq\n k + 2\\sqrt{n\\ln(n)}\n\\end{equation}\nSpecifically, the inequality above follows from the Chernoff bound in footnote~\\ref{foot:chernoff}, with $t = 2\\sqrt{n \\ln(n)}$, since $\\Exp[\\sum_{j \\in V} z_j] = k$. 
\nMoreover, applying Lemma~\\ref{l:rounding} with $q = 0$, $\\beta = 1$, $k = 7$, $\\delta\/3$ (instead of $\\delta$) and $\\alpha = \\max\\{ \\e_1, \\e_2\/2\\}$, and using that $\\vec{y}^\\ast$ is a feasible solution to (LP$'$) and that $\\e_1 \\in (0, 1)$, we obtain that with probability at least $1 - 2\/n^{8}$, for each vertex $j$,\n\\begin{equation}\\label{eq:z_deviation}\n (1-\\e_1)^2\\rho_j - 2\\e_2 n^{\\delta\/3} \\leq\n \\sum_{i \\in N(j)} z_i \\leq\n (1+\\e_1)^2\\rho_j + 2\\e_2 n^{\\delta\/3} \n\\end{equation}\nBy the union bound, the integral solution $\\vec{z}$ obtained from $\\vec{y}^\\ast$ by randomized rounding satisfies (\\ref{eq:k_deviation}) and (\\ref{eq:z_deviation}), for all vertices $j$, with probability at least $1 - 3\/n^7$.\n\nBy linearity of expectation, $\\Exp[ \\sum_{j \\in V} z_j \\rho_j ] = \\sum_{j \\in V} y^\\ast_j \\rho_j$. Moreover, since the probability that $\\vec{z}$ does not satisfy\neither (\\ref{eq:k_deviation}) or (\\ref{eq:z_deviation}), for some vertex $j$, is at most $3\/n^7$, and since the objective value of (IP$'$) is at most $n^2$, the expected value of $\\sum_{j \\in V} z_j \\rho_j$ over the rounded solutions $\\vec{z}$ that satisfy (\\ref{eq:k_deviation}) and (\\ref{eq:z_deviation}), for all vertices $j$, is at least $\\sum_{j \\in V} y^\\ast_j \\rho_j - 1$ (assuming that $n \\geq 2$). As in \\MC, such an integral solution $\\vec{z}$ can be found in (deterministic) polynomial time using the method of conditional expectations (see \\cite{Rag88}). \n\nThe following is similar to Lemma~\\ref{l:dense_approx} and shows that the objective value $p(\\vec{z})$ of the rounded solution $\\vec{z}$ is close to the optimal value of (LP$'$). \n\\begin{lemma}\\label{l:dense_approx2}\nLet $\\vec{y}^\\ast$ be the optimal solution of (LP$'$) and let $\\vec{z}$ be the integral solution obtained from $\\vec{y}^\\ast$ by randomized rounding (and the method of conditional expectations). 
Then,\n\\begin{equation}\\label{eq:dense_approx2}\n p(\\vec{z}) \\in (1 \\pm \\e_1)^2 \\sum_{j \\in V} y^\\ast_j \\rho_j \\pm 3\\e_2 n^{1+\\delta\/3}\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nUsing the decomposition of $p(\\vec{y})$ and an argument similar to that in the proof of Lemma~\\ref{l:dense_approx}, we obtain that:\n\\begin{align*}\n p(\\vec{z}) = \\sum_{j \\in V} z_j \\sum_{i \\in N(j)} z_i \\ \\ & \\in \n \\sum_{j \\in V} z_j \\left((1\\pm \\e_1)^2 \\rho_j \\pm 2\\e_2 n^{\\delta\/3} \\right) \\\\\n &= (1\\pm \\e_1)^2 \\sum_{j \\in V} z_j \\rho_j \n \\pm 2 \\e_2 n^{\\delta\/3} \\sum_{j \\in V} z_j\\\\\n &\\in (1\\pm \\e_1)^2 \\sum_{j \\in V} z_j \\rho_j \\pm 2\\e_2 n^{1+\\delta\/3} \\\\\n &\\in (1\\pm \\e_1)^2 \\sum_{j \\in V} y^\\ast_j \\rho_j \\pm 3\\e_2 n^{1+\\delta\/3}\n\\end{align*}\nThe first inclusion holds because $\\vec{z}$ satisfies (\\ref{eq:z_deviation}) for all $j \\in V$. For the second inclusion, we use that $\\sum_{j \\in V} z_j \\leq n$. For the last inclusion, we recall that $\\sum_{j \\in V} z_j \\rho_j \\geq \\sum_{j \\in V} y^\\ast_j \\rho_j - 1$ and assume that $n$ is sufficiently large.\n\\qed \\end{proof}\n\\noindent{\\bf Putting Everything Together.}\nTherefore, for $\\eps > 0$, if $G$ is $\\delta$-almost sparse and $k =\n\\Omega(n^{1-\\delta\/3})$, the algorithm described computes estimations $\\rho_j$\nsuch that the densest subgraph $\\vec{x}^\\ast$ is a feasible solution to (IP$'$)\nwhp. Hence, by the analysis above, the algorithm computes a slightly\ninfeasible solution approximating the number of edges in the densest subgraph\nwith $k$ vertices within a multiplicative factor of $(1-\\e_1)^2$ and an\nadditive error of $\\e_2 n^{1+\\delta\/3}$. 
Setting $\\e_1 = \\e_2 = \\eps\/8$, the\nnumber of edges in the subgraph induced by $\\vec{z}$ satisfies the following\nwith probability at least $1-2\/n^2$\\,:\n\\[ p(\\vec{z}) \\geq (1-\\e_1)^2 \\sum_{j \\in V} y_j^\\ast \\rho_j - 3 \\e_2 n^{1+\\delta\/3} \\geq \n\t(1-\\e_1)^2 \\sum_{j \\in V} x_j^\\ast \\rho_j - 3 \\e_2 n^{1+\\delta\/3} \\geq\n\tp(\\vec{x}^\\ast) - \\eps n^{1+\\delta\/3} \\geq\n\t(1-\\eps) p(\\vec{x}^\\ast)\n\\]\nThe first inequality follows from Lemma~\\ref{l:dense_approx2}, the second\ninequality holds because $\\vec{y}^\\ast$ is the optimal solution to (LP$'$) and\n$\\vec{x}^\\ast$ is feasible for (LP$'$), the third inequality follows from\nLemma~\\ref{l:dense_approx} and the fourth inequality holds because the densest\nsubgraph has $\\Omega(n^{1+\\delta\/3})$ edges.\n\nThis solution may violate the cardinality constraint by at most $2\\sqrt{n \\ln n}=o(k)$ vertices, and it can\nbe made feasible by adding or removing at most that many vertices, which\naffects at most $O(n^{1\/2+\\delta})$ edges. \n\\begin{theorem}\\label{th:densest}\nLet $G(V, E)$ be a $\\delta$-almost sparse graph with $n$ vertices. Then, for any integer $k \\geq 1$ and for any $\\eps > 0$, we can compute, in time $2^{O(n^{1-\\delta\/3} \\ln n\/\\eps^3)}$ and with probability at least $1-2\/n^2$, a set of $k$ vertices of $G$, given by $\\vec{z}$, whose number of induced edges satisfies $p(\\vec{z}) \\geq (1-\\eps)p(\\vec{x}^\\ast)$, where $\\vec{x}^\\ast$ denotes the optimal solution, i.e., $p(\\vec{x}^\\ast)$ is the number of edges in the \\kDense\\ of $G$. \n\\end{theorem}\n\n\\section{Lower Bounds} \\label{s:lower}\n\n\nIn this section we give some lower bound arguments which show that the\nalgorithmic schemes we have presented are, in some senses, likely to be almost\noptimal. 
Our working complexity assumption will be the Exponential Time\nHypothesis (ETH), which states that there is no algorithm that can solve an\ninstance of 3-SAT of size $n$ in time $2^{o(n)}$.\n\nOur starting point is the following inapproximability result,\nwhich can be obtained using known PCP constructions and standard reductions.\n\\begin{theorem} \\label{thm:start}\nThere exist constants $c,s\\in[0,1]$ with $c>s$ such that for all $\\epsilon>0$\nwe have the following: if there exists an algorithm which, given an $n$-vertex\n$5$-regular instance of \\MC, can distinguish between the case where a solution\ncuts at least a $c$ fraction of the edges and the case where all solutions cut\nat most an $s$ fraction of the edges in time $2^{n^{1-\\epsilon}}$ then the ETH\nfails.\n\\end{theorem}\n\\begin{proof}\nThis inapproximability result follows from the construction of quasi-linear\nsize PCPs given, for example, in \\cite{Dinur05}. In particular, we use as\nstarting point a result explicitly formulated in \\cite{MR10} as follows:\n``Solving 3-\\textsc{SAT} on inputs of size $N$ can be reduced to distinguishing\nbetween the case that a 3CNF formula of size $N^{1+o(1)}$ is satisfiable and\nthe case that only $\\frac{7}{8} + o(1)$ fraction of its clauses are\nsatisfiable''.\n\nTake an arbitrary 3-\\textsc{SAT} instance of size $N$, which according to the\nETH cannot be solved in time $2^{o(N)}$. By applying the aforementioned PCP\nconstruction we obtain a 3CNF formula of size $N^{1+o(1)}$ which is either\nsatisfiable or far from satisfiable. Using standard constructions\n(\\cite{PY91,BK99}) we can reduce this formula to a $5$-regular graph $G(V,E)$\nwhich will be a \\MC\\ instance (we use degree $5$ here for concreteness, any\nreasonable constant would do). We have that $|V|$ is only a constant factor\napart from the size of the 3CNF formula. 
At the same time, there exist\nconstants $c,s$ such that, if the formula was satisfiable $G$ has a cut of\n$c|E|$ edges, while if the formula was far from satisfiable $G$ has no cut with\nmore than $s|E|$ edges. If there exists an algorithm that can distinguish\nbetween these two cases in time $2^{|V|^{1-\\epsilon}}$, then the whole procedure\nwould run in $2^{N^{1-\\epsilon+o(1)}}$ and would allow us to decide if the\noriginal formula was satisfiable.\n\\qed\\end{proof}\nThere are two natural ways in which one may hope to improve or extend the\nalgorithms we have presented so far: relaxing the density requirement or\ndecreasing the running time. We prove in what follows that neither of them can substantially improve the results presented so far.\n\n\\subsection{Arity Higher Than Two}\n\nFirst, recall that the algorithm we have given for\n\\kCSP\\ works in the density range between $n^k$ and $n^{k-1}$. Here, we give a\nreduction establishing that it is unlikely that this can be improved.\n\\begin{theorem} \\label{thm:hard1}\nThere exists $r>1$ such that for all $\\epsilon>0$ and all (fixed) integers\n$k\\ge 3$ we have the following: if there exists an algorithm which approximates\n\\textsc{Max-$k$-SAT} within a factor of $r$ on instances with $\\Omega(n^{k-1})$ clauses in time\n$2^{n^{1-\\epsilon}}$ then the ETH fails.\n\\end{theorem}\n\\begin{proof}\nConsider the \\MC\\ instance of Theorem \\ref{thm:start}, and transform it into a\n2-\\textsc{SAT} instance in the standard way: the set of variables is the set of\nvertices of the graph and for each edge $(u,v)$ we include the two clauses\n$(\\neg u \\lor v)$ and $(u\\lor \\neg v)$. 
This is an instance of 2-\\textsc{SAT}\nwith $n$ variables and $5n$ clauses and there exist constants $c,s$ such that\neither there exists an assignment satisfying a $c$ fraction of the clauses or\nall assignments satisfy at most an $s$ fraction of the clauses.\n\nFix a constant $k$ and introduce to the instance $(k-2)n$ new variables\n$x_{(i,j)}$, $i\\in\\{1,\\ldots,k-2\\}$, $j\\in\\{1,\\ldots,n\\}$. We perform the\nfollowing transformation to the 2-\\textsc{SAT} instance: for each clause\n$(l_1\\lor l_2)$ and for each tuple\n$(i_1,i_2,\\ldots,i_{k-2})\\in\\{1,\\ldots,n\\}^{k-2}$ we construct $2^{k-2}$ new\nclauses of size $k$. The first two literals of these clauses are always $l_1,\nl_2$. The remaining $k-2$ literals consist of the variables\n$x_{(1,i_1)},x_{(2,i_2)},\\ldots,x_{(k-2,i_{k-2})}$, where in each clause we pick\na different set of variables to be negated. In other words, to construct a\nclause of the new instance we select a clause of the original instance, one\nvariable from each of the $(k-2)$ groups of $n$ new variables, and a subset of\nthese variables that will be negated. The new instance consists of all the size\n$k$ clauses constructed in this way, for all possible choices.\n\nFirst, observe that the new instance has $5 \\cdot 2^{k-2} n^{k-1}$ clauses and $(k-1)n$\nvariables, therefore, for each fixed $k$ it satisfies the density conditions of\nthe theorem. Furthermore, consider any assignment of the original formula. For\neach tuple, any satisfied clause has now been replaced by $2^{k-2}$ satisfied\nclauses, while for an unsatisfied clause any assignment to the new variables\nsatisfies exactly $2^{k-2}-1$ of the corresponding $2^{k-2}$ clauses. Thus, for fixed $k$, there exist constants $s',c'$ such that\neither a $c'$ fraction of the clauses of the new instance is satisfiable or at\nmost an $s'$ fraction is. 
If there exists an approximation algorithm with ratio\nbetter than $c'\/s'$ running in time $2^{N^{1-\\epsilon}}$, where $N$ is the\nnumber of variables of the new instance, we could use it to decide the original\ninstance in a time bound that would disprove the ETH.\n\\qed\\end{proof}\n\n\\subsection{Almost Tight Time Bounds}\n\nA second possible avenue for improvement may be to consider potential speedups\nof our algorithms. Concretely, one may ask whether the (roughly)\n$2^{\\sqrt{n}\\ln n}$ running time guaranteed by our scheme for \\MC\\ on graphs\nwith average degree $\\sqrt{n}$ is best possible. We give an almost tight answer\nto such questions via the following theorem.\n\\begin{theorem} \\label{thm:hard2}\nThere exists $r>1$ such that for all $\\epsilon>0$ we have the following: if\nthere exists an algorithm which, for some $\\Delta=o(n)$, approximates \\MC\\ within a factor of $r$ on\n$n$-vertex $\\Delta$-regular graphs in time $2^{(n\/\\Delta)^{1-\\epsilon}}$ then\nthe ETH fails.\n\\end{theorem}\n\\begin{proof}[Proof of Theorem \\ref{thm:hard2}]\nWithout loss of generality we prove the theorem for the case when the degree is\na multiple of 10.\n\nConsider an instance $G(V,E)$ of \\MC\\ as given by Theorem \\ref{thm:start}. Let\n$n=|V|$ and suppose that the desired degree is $d=10\\Delta$, where $\\Delta$ is\na function of $n$. We construct a graph $G'$ as follows: for each vertex $u\\in\nV$ we introduce $\\Delta$ new vertices $u_1,\\ldots,u_\\Delta$ as well as\n$5\\Delta$ ``consistency'' vertices $c^u_1,\\ldots,c^u_{5\\Delta}$. For every edge\n$(u,v)\\in E$ we add all edges $(u_i,v_j)$ for $i,j\\in\\{1,\\ldots,\\Delta\\}$.\nAlso, for every $u\\in V$ we add all edges $(u_i,c^u_j)$, for\n$i\\in\\{1,\\ldots,\\Delta\\}$ and $j\\in\\{1,\\ldots,5\\Delta\\}$. This completes the\nconstruction.\n\nThe graph we have constructed is $10\\Delta$-regular and is made up of $6\\Delta\nn$ vertices. Let us examine the size of its optimal cut. 
Consider an optimal\nsolution and observe that, for a given $u\\in V$ all the vertices $c^u_i$ can be\nassumed to be on the same side of the cut, since they all have the same\nneighbors. Furthermore, for a given $u\\in V$, all vertices $u_i$ can be assumed\nto be on the same side of the cut, namely on the side opposite that of $c^u_i$,\nsince the vertices $c^u_i$ make up half of the neighborhood of each $u_i$, and\nplacing $u_i$ on the side opposite them never decreases the cut.\nWith this observation it is easy to construct a one-to-one correspondence\nbetween cuts in $G$ and locally optimal cuts in $G'$.\n\nConsider now a cut that cuts $c|E|$ edges of $G$. If we set all $u_i$ of $G'$\non the same side as $u$ is placed in $G$ we cut $c|E|\\Delta^2$ edges of the\nform $(u_i,v_j)$. Furthermore, by placing the $c^u_i$ on the opposite side of\n$u_i$ we cut $5\\Delta^2 |V|$ edges. Thus the max cut of $G'$ is at least\n$c|E|\\Delta^2 + 5\\Delta^2 |V|$. Using the previous observations on locally\noptimal cuts of $G'$ we can conclude that if $G'$ has a cut with $s|E|\\Delta^2\n+ 5\\Delta^2|V|$ edges, then $G$ has a cut with $s|E|$ edges. Using the fact\nthat $2|E|=5|V|$ (since $G$ is 5-regular) we get a constant ratio between the\nsize of the cut of $G'$ in the two cases. Call that ratio $r$.\n\nSuppose now that we have an approximation algorithm with ratio better than $r$\nwhich, given an $N$-vertex $d$-regular graph runs in time\n$2^{(N\/d)^{1-\\epsilon}}$. 
Giving our constructed instance as input to this\nalgorithm would allow us to decide the original instance in time\n$2^{n^{1-\\epsilon}}$.\n\\qed\\end{proof}\nTheorem \\ref{thm:hard2} establishes that our approach is essentially\noptimal, not just for average degree $\\sqrt{n}$, but for any other intermediate\ndensity.\n\\section{Approximating \\MC\\ in Almost Sparse Graphs}\n\\label{s:maxcut}\n\nIn this section, we apply our approach to \\MC, which serves as a convenient example and allows us to present the intuition and the main ideas.\n\nThe \\MC\\ problem in a graph $G(V, E)$ is equivalent to maximizing, over all binary vectors $\\vec{x} \\in \\{0, 1\\}^n$, the following $n$-variate degree-$2$ $2$-smooth polynomial\n\\[ p(\\vec{x}) = \\sum_{\\{i, j\\} \\in E} (x_i (1 - x_j) + x_j (1 - x_i)) \\]\nSetting a variable $x_i$ to $0$ indicates that the corresponding vertex $i$ is assigned to the left side of the cut, i.e., to $S_0$, and setting $x_i$ to $1$ indicates that vertex $i$ is assigned to the right side of the cut, i.e., to $S_1$.\nWe assume that $G$ is $\\delta$-almost sparse and thus has $m = \\Omega(n^{1+\\delta})$ edges and average degree $\\Delta = \\Omega(n^\\delta)$.\nMoreover, if $m = \\Theta(n^{1+\\delta})$, $p(\\vec{x})$ is $\\delta$-bounded, since for each edge $\\{i, j\\} \\in E$, the monomial $x_ix_j$ appears with coefficient $-2$ in the expansion of $p$, and for each vertex $i \\in V$, the monomial $x_i$ appears with coefficient $\\deg(i)$ in the expansion of $p$. Therefore, for $\\ell \\in \\{1, 2\\}$, the sum of the absolute values of the coefficients of all monomials of degree $\\ell$ is at most $2m = O(n^{1+\\delta})$. \n\nNext, we extend and generalize the approach of \\cite{AKK99} and show how to $(1-\\eps)$-approximate the optimal cut, for any constant $\\eps > 0$, in time $2^{O(n\\ln n\/(\\Delta \\eps^3))}$ (see Theorem~\\ref{th:maxcut}). The running time is subexponential in $n$ if $G$ is $\\delta$-almost sparse. 
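To make the objective concrete, the identity between $p(\vec{x})$, the cut size, and the vertex-wise decomposition $\sum_{j \in V} x_j (\deg(j) - \sum_{i \in N(j)} x_i)$ can be checked numerically. The following is an illustrative sketch on a toy random graph (not part of the paper's scheme):

```python
import random

def p_cut(edges, x):
    # p(x) = sum over edges {i,j} of x_i(1 - x_j) + x_j(1 - x_i):
    # each edge contributes 1 exactly when its endpoints lie on opposite sides.
    return sum(x[i] * (1 - x[j]) + x[j] * (1 - x[i]) for i, j in edges)

def p_decomposed(n, edges, x):
    # Equivalent vertex-wise form: sum_j x_j * (deg(j) - sum_{i in N(j)} x_i).
    nbrs = [[] for _ in range(n)]
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    return sum(x[j] * (len(nbrs[j]) - sum(x[i] for i in nbrs[j]))
               for j in range(n))

random.seed(0)
n = 8
edges = [(i, j) for i in range(n) for j in range(i + 1, n) if random.random() < 0.5]
for _ in range(100):
    x = [random.randint(0, 1) for _ in range(n)]
    cut_size = sum(1 for i, j in edges if x[i] != x[j])
    assert p_cut(edges, x) == cut_size == p_decomposed(n, edges, x)
```

Both forms agree for every binary assignment, which is the algebraic fact the decomposition in the next subsection relies on.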
\n\n\\subsection{Outline and Main Ideas}\n\\label{s:cut_main}\n\nApplying Proposition~\\ref{pr:decomposition}, we can write the smooth polynomial $p(\\vec{x})$ as\n\\begin{equation}\\label{eq:cut_decomp}\np(\\vec{x}) = \\sum_{j \\in V} x_j (\\deg(j) - p_j(\\vec{x}))\\,,\n\\end{equation}\nwhere $p_j(\\vec{x}) = \\sum_{i \\in N(j)} x_i$ is a degree-$1$ $1$-smooth polynomial that indicates how many neighbors of vertex $j$ are in $S_1$ in the solution corresponding to $\\vec{x}$. The key observation, due to \\cite{AKK99}, is that if we have a good estimation $\\rho_j$ of the value of each $p_j$ at the optimal solution $\\vec{x}^\\ast$, then approximate maximization of $p(\\vec{x})$ can be reduced to the solution of the following Integer Linear Program:\n\\begin{alignat*}{3}\n& &\\max \\sum_{j \\in V} &y_j (\\deg(j) - \\rho_j) & & \\tag{IP}\\\\\n&\\mathrm{s.t.}\\quad &\n(1-\\e_1) \\rho_j - \\e_2 \\Delta \\leq \\sum_{i \\in N(j)} &y_i \\leq (1+\\e_1) \\rho_j + \\e_2 \\Delta \\quad & \\forall &j \\in V\\\\\n& & &y_j \\in \\{0, 1\\} &\\forall & j \\in V\n\\end{alignat*}\nThe constants $\\e_1, \\e_2 > 0$ and the estimations $\\rho_j \\geq 0$ are computed so that the optimal solution $\\vec{x}^\\ast$ is a feasible solution to (IP). We always assume wlog. that $0 \\leq \\sum_{i \\in N(j)} y_i \\leq \\deg(j)$, i.e., we let the lhs of the $j$-th constraint be $\\max\\{ (1-\\e_1) \\rho_j - \\e_2 \\Delta, 0 \\}$ and the rhs be $\\min\\{ (1+\\e_1) \\rho_j + \\e_2 \\Delta, \\deg(j) \\}$. Clearly, if $\\vec{x}^\\ast$ is a feasible solution to (IP), it remains a feasible solution after this modification. We let (LP) denote the Linear Programming relaxation of (IP), where each $y_j \\in [0, 1]$.\n\nThe first important observation is that for any $\\e_1, \\e_2 > 0$, we can compute estimations $\\rho_j$, by exhaustive sampling, so that $\\vec{x}^\\ast$ is a feasible solution to (IP) with high probability (see Lemma~\\ref{l:cut_sampling}). 
The second important observation is that the objective value of any feasible solution $\\vec{y}$ to (LP) is close to $p(\\vec{y})$ (see Lemma~\\ref{l:cut_approx}). Namely, for any feasible solution $\\vec{y}$, $\\sum_{j \\in V} y_j (\\deg(j) - \\rho_j) \\approx p(\\vec{y})$.\n\nBased on these observations, the approximation algorithm performs the following steps:\n\\begin{enumerate}\n\\item We guess a sequence of estimations $\\rho_1, \\ldots, \\rho_n$, by exhaustive sampling, so that $\\vec{x}^\\ast$ is a feasible solution to the resulting (IP) (see Section~\\ref{s:cut_sampling} for the details).\n\\item We formulate (IP) and find an optimal fractional solution $\\vec{y}^\\ast$ to (LP).\n\\item We obtain an integral solution $\\vec{z}$ by applying randomized rounding to $\\vec{y}^\\ast$ (and the method of conditional probabilities, as in \\cite{RT87,Rag88}).\n\\end{enumerate}\nTo see that this procedure indeed provides a good approximation to $p(\\vec{x}^\\ast)$, we observe that:\n\\begin{equation}\\label{eq:cut_est}\n p(\\vec{z}) \\approx \\sum_{j \\in V} z_j (\\deg(j) - \\rho_j) \\approx\n \\sum_{j \\in V} y^\\ast_j (\\deg(j) - \\rho_j) \\geq\n \\sum_{j \\in V} x^\\ast_j (\\deg(j) - \\rho_j) \\approx\n p(\\vec{x}^\\ast)\\,.\n\\end{equation}\nThe first approximation holds because $\\vec{z}$ is an (almost) feasible solution to (IP) (see Lemma~\\ref{l:cut_approx2}). The second approximation holds because, due to randomized rounding, the objective value of $\\vec{z}$ is a good approximation to the objective value of $\\vec{y}^\\ast$. The inequality holds because $\\vec{y}^\\ast$ is an optimal and $\\vec{x}^\\ast$ a feasible solution to (LP), and the final approximation holds because $\\vec{x}^\\ast$ is a feasible solution to (IP).\n\nIn Sections~\\ref{s:cut_linearization}~and~\\ref{s:cut_rounding}, we make the notion of approximation precise so that $p(\\vec{z}) \\geq (1-\\eps) p(\\vec{x}^\\ast)$. As for the running time, it is dominated by the time required for the exhaustive-sampling step. 
Since we do not know $\\vec{x}^\\ast$, we need to run the steps (2) and (3) above for every sequence of estimations produced by exhaustive sampling. So, the outcome of the approximation scheme is the best of the integral solutions $\\vec{z}$ produced in step (3) over all executions of the algorithm. In Section~\\ref{s:cut_sampling}, we show that a sample of size $O(n \\ln n\/\\Delta)$ suffices for the computation of estimations $\\rho_j$ so that $\\vec{x}^\\ast$ is a feasible solution to (IP) with high probability. If $G$ is $\\delta$-almost sparse, the sample size is sublinear in $n$ and the running time is subexponential in $n$.\n\n\\subsection{Obtaining Estimations $\\rho_j$ by Exhaustive Sampling}\n\\label{s:cut_sampling}\n\nTo obtain good estimations $\\rho_j$ of the values $p_j(\\vec{x}^\\ast) = \\sum_{i \\in N(j)} x_i^\\ast$, i.e., of the number of $j$'s neighbors in $S_1$ in the optimal cut, we take a random sample $R \\subseteq V$ of size $\\Theta(n \\ln n \/ \\Delta)$ and try exhaustively all possible assignments of the vertices in $R$ to $S_0$ and $S_1$. If $\\Delta = \\Omega(n^\\delta)$, we have $2^{O(n\\ln n \/ \\Delta)} = 2^{O(n^{1-\\delta} \\ln n)}$ different assignments. For each assignment, described by a $0\/1$ vector $\\vec{x}$ restricted to $R$, we compute an estimation $\\rho_j = (n \/ |R|) \\sum_{i \\in N(j) \\cut R} x_i$, for each vertex $j \\in V$, and run the steps (2) and (3) of the algorithm above. Since we try all possible assignments, one of them agrees with $\\vec{x}^\\ast$ on all vertices of $R$. So, for this assignment, the estimations computed are $\\rho_j = (n \/ |R|) \\sum_{i \\in N(j) \\cut R} x^\\ast_i$. \nThe following shows that for these estimations, we have that $p_j(\\vec{x}^\\ast) \\approx \\rho_j$ with high probability.\n\\begin{lemma}\\label{l:cut_sampling}\nLet $\\vec{x}$ be any binary vector. 
For all $\\alpha_1, \\alpha_2 > 0$, we let $\\gamma = \\Theta(1\/(\\alpha^2_1 \\alpha_2))$ and let $R$ be a multiset of $r = \\gamma n \\ln n \/ \\Delta$ vertices chosen uniformly at random with replacement from $V$. For any vertex $j$, if $\\rho_j = (n \/ r) \\sum_{i \\in N(j) \\cut R} x_i$ and $\\hat{\\rho}_j = \\sum_{i \\in N(j)} x_i$, with probability at least $1 - 2\/n^{3}$,\n\\begin{equation}\\label{eq:cut_sample_cor}\n (1-\\alpha_1)\\hat{\\rho}_j - (1-\\alpha_1)\\alpha_2 \\Delta \\leq \\rho_j \\leq\n (1+\\alpha_1)\\hat{\\rho}_j + (1+\\alpha_1)\\alpha_2 \\Delta\n\\end{equation}\n\\end{lemma}\n\\begin{proofsketch} If $\\hat{\\rho}_j = \\Omega(\\Delta)$, the neighbors of $j$\nare well-represented in the random sample $R$ whp., because $|R| = \\Theta(n\\ln\nn\/\\Delta)$. Therefore, $|\\hat{\\rho}_j - \\rho_j| \\leq \\alpha_1\\hat{\\rho}_j$\nwhp., by Chernoff bounds. If $\\hat{\\rho}_j = o(\\Delta)$, the lower bound in\n(\\ref{eq:cut_sample_cor}) becomes trivial, since it is non-positive, while\n$\\rho_j \\geq 0$. As for the upper bound, we increase some $x_i$ to $x'_i \\in\n[0, 1]$, so that $\\hat{\\rho}'_j = \\alpha_2 \\Delta$. Then, $\\rho'_j \\leq\n(1+\\alpha_1)\\hat{\\rho}'_j = (1+\\alpha_1)\\alpha_2 \\Delta$ whp., by the same\nChernoff bound as above. Now the upper bound of (\\ref{eq:cut_sample_cor})\nfollows from $\\rho_j \\leq \\rho'_j$, which holds for any instantiation of the\nrandom sample $R$. The formal proof follows from Lemma~\\ref{l:sampling}, with\n$\\beta = 1$, $d = 2$ and $q = 0$, and with $\\Delta$ instead of $n^\\delta$.\n\\qed\\end{proofsketch}\nWe note that $\\rho_j \\geq 0$ and always assume that $\\rho_j \\leq \\deg(j)$, since if $\\rho_j$ satisfies (\\ref{eq:cut_sample_cor}), $\\min\\{ \\rho_j, \\deg(j) \\}$ also satisfies (\\ref{eq:cut_sample_cor}). 
For all $\\e_1, \\e_2 > 0$, setting $\\alpha_1 = \\frac{\\e_1}{1+\\e_1}$ and $\\alpha_2 = \\e_2$ in Lemma~\\ref{l:cut_sampling}, and taking the union bound over all vertices, we obtain that for $\\gamma = \\Theta(1\/(\\e^2_1 \\e_2))$, with probability at least $1 - 2\/n^2$, the following holds for all vertices $j \\in V$:\n\\begin{equation}\\label{eq:cut_sample}\n (1-\\e_1)\\rho_j - \\e_2 \\Delta \\leq \\hat{\\rho}_j \\leq\n (1+\\e_1)\\rho_j + \\e_2 \\Delta\n\\end{equation}\nTherefore, with probability at least $1-2\/n^2$, the optimal cut $\\vec{x}^\\ast$ is a feasible solution to (IP) with the estimations $\\rho_j$ obtained by restricting $\\vec{x}^\\ast$ to the vertices in $R$.\n\n\\subsection{The Cut Value of Feasible Solutions}\n\\label{s:cut_linearization}\n\nWe next show that the objective value of any feasible solution $\\vec{y}$ to (LP) is close to $p(\\vec{y})$. Therefore, assuming that $\\vec{x}^\\ast$ is feasible, any good approximation to (IP) is a good approximation to the optimal cut.\n\\begin{lemma}\\label{l:cut_approx}\nLet $\\rho_1, \\ldots, \\rho_n$ be non-negative numbers and $\\vec{y}$ be any feasible solution to (LP). 
Then,\n\\begin{equation}\\label{eq:cut_approx}\n p(\\vec{y}) \\in \\sum_{j \\in V} y_j (\\deg(j) - \\rho_j) \\pm 2(\\e_1 + \\e_2) m\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nUsing (\\ref{eq:cut_decomp}) and the formulation of (LP), we obtain that:\n\\begin{align*}\n p(\\vec{y}) = \\sum_{j \\in V} y_j \\left(\\deg(j) - \\sum_{i \\in N(j)} y_i\\right) & \\in\n \\sum_{j \\in V} y_j \\left(\\deg(j) - ((1\\mp \\e_1) \\rho_j \\mp \\e_2 \\Delta) \\right) \\\\\n &= \\sum_{j \\in V} y_j (\\deg(j) - \\rho_j) \\pm \\e_1 \\sum_{j \\in V} y_j \\rho_j\n \\pm \\e_2 \\Delta \\sum_{j \\in V} y_j \\\\\n &\\in \\sum_{j \\in V} y_j (\\deg(j) - \\rho_j) \\pm 2(\\e_1 + \\e_2) m\n\\end{align*}\nThe first inclusion holds because $\\vec{y}$ is feasible for (LP) and thus $\\sum_{i \\in N(j)} y_i \\in (1\\pm \\e_1)\\rho_j \\pm \\e_2\\Delta$, for all $j$. The second inclusion holds because\n\\[ \\sum_{j \\in V} y_j \\rho_j \\leq \\sum_{j \\in V} \\rho_j\n \\leq \\sum_{j \\in V} \\deg(j) = 2m\\,,\\]\nsince each $\\rho_j$ is at most $\\deg(j)$, and because $\\Delta \\sum_{j \\in V} y_j \\leq \\Delta n = 2m$.\n\\qed\\end{proof}\n\n\\subsection{Randomized Rounding of the Fractional Optimum}\n\\label{s:cut_rounding}\n\nAs a last step, we show how to round the fractional optimum $\\vec{y}^\\ast = (y^\\ast_1, \\ldots, y^\\ast_n)$ of (LP) to an integral solution $\\vec{z} = (z_1, \\ldots, z_n)$ that almost satisfies the constraints of (IP).\n\nTo this end, we use randomized rounding, as in \\cite{RT87}. In particular, we set independently each $z_j$ to $1$, with probability $y_j^\\ast$, and to $0$, with probability $1-y_j^\\ast$. By Chernoff bounds%\n\\footnote{\\label{foot:chernoff}We use the following standard Chernoff bound (see e.g., \\cite[Theorem~1.1]{DP09}): Let $Y_1, \\ldots, Y_k$ be independent random variables in $[0, 1]$ and let $Y = \\sum_{j=1}^k Y_j$. 
Then for all $t > 0$, $\\Prob[|Y - \\Exp[Y]| > t] \\leq 2\\exp(-2t^2\/k)$.},\nwe obtain that with probability at least $1 - 2\/n^{8}$, for each vertex $j$,\n\\begin{equation}\\label{eq:deviation}\n (1-\\e_1)\\rho_j - \\e_2\\Delta - 2\\sqrt{\\deg(j)\\ln(n)} \\leq\n \\sum_{i \\in N(j)} z_i \\leq\n (1+\\e_1)\\rho_j + \\e_2\\Delta + 2\\sqrt{\\deg(j)\\ln(n)}\n\\end{equation}\nSpecifically, the inequality above follows from the Chernoff bound in footnote~\\ref{foot:chernoff}, with $k = \\deg(j)$ and $t = 2\\sqrt{\\deg(j)\\ln(n)}$, since $\\Exp[\\sum_{i \\in N(j)} z_i] = \\sum_{i \\in N(j)} y^\\ast_i \\in (1\\pm\\e_1)\\rho_j \\pm \\e_2\\Delta$. By the union bound, (\\ref{eq:deviation}) is satisfied with probability at least $1 - 2\/n^7$ for all vertices $j$.\n\nBy linearity of expectation, $\\Exp[ \\sum_{j \\in V} z_j (\\deg(j) - \\rho_j) ] = \\sum_{j \\in V} y^\\ast_j (\\deg(j) - \\rho_j)$. Moreover, since the probability that $\\vec{z}$ does not satisfy (\\ref{eq:deviation}) for some vertex $j$ is at most $2\/n^7$ and since the objective value of (IP) is at most $n^2$, the expected value of a rounded solution $\\vec{z}$ that satisfies (\\ref{eq:deviation}) for all vertices $j$ is at least $\\sum_{j \\in V} y^\\ast_j (\\deg(j) - \\rho_j) - 1$ (assuming that $n \\geq 2$). Using the method of conditional expectations, as in \\cite{Rag88}, we can find in (deterministic) polynomial time an integral solution $\\vec{z}$ that satisfies (\\ref{eq:deviation}) for all vertices $j$ and has $\\sum_{j \\in V} z_j (\\deg(j) - \\rho_j) \\geq \\sum_{j \\in V} y^\\ast_j (\\deg(j) - \\rho_j) - 1$. 
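As a toy illustration of the expectation-preserving property used here, the sketch below rounds a made-up fractional vector against a made-up linear objective and checks that the empirical average over many roundings matches the fractional value (illustrative numbers only; the actual algorithm additionally enforces the constraints via conditional expectations):

```python
import random

def round_once(y, rng):
    # Randomized rounding: set z_j = 1 with probability y_j, independently.
    return [1 if rng.random() < yj else 0 for yj in y]

# Made-up fractional solution and linear objective sum_j y_j * c_j.
y = [0.3, 0.7, 0.5, 1.0, 0.0]
c = [2.0, -1.0, 3.0, 0.5, 4.0]
frac_val = sum(yj * cj for yj, cj in zip(y, c))

# By linearity of expectation, E[sum_j z_j c_j] = sum_j y_j c_j, so the
# empirical average over many independent roundings should approach frac_val.
rng = random.Random(1)
trials = 20000
avg = sum(sum(zj * cj for zj, cj in zip(round_once(y, rng), c))
          for _ in range(trials)) / trials
assert abs(avg - frac_val) < 0.1
```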
In what follows, we sometimes abuse notation and refer to such an integral solution $\\vec{z}$ (computed deterministically) as the integral solution obtained from $\\vec{y}^\\ast$ by randomized rounding.\n\nThe following is similar to Lemma~\\ref{l:cut_approx} and shows that the objective value $p(\\vec{z})$ of the rounded solution $\\vec{z}$ is close to the optimal value of (LP).\n\\begin{lemma}\\label{l:cut_approx2}\nLet $\\vec{y}^\\ast$ be the optimal solution of (LP) and let $\\vec{z}$ be the integral solution obtained from $\\vec{y}^\\ast$ by randomized rounding (and the method of conditional expectations). Then,\n\\begin{equation}\\label{eq:cut_approx2}\n p(\\vec{z}) \\in \\sum_{j \\in V} y^\\ast_j (\\deg(j) - \\rho_j) \\pm 3(\\e_1 + \\e_2) m\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nUsing (\\ref{eq:deviation}) and an argument similar to that in the proof of Lemma~\\ref{l:cut_approx}, we obtain that:\n\\begin{align*}\n p(\\vec{z}) & = \\sum_{j \\in V} z_j \\left(\\deg(j) - \\sum_{i \\in N(j)} z_i\\right) \\\\\n & \\in \\sum_{j \\in V} z_j \\left(\\deg(j) - \\left((1\\mp \\e_1) \\rho_j \\mp \\e_2 \\Delta \\mp 2\\sqrt{\\deg(j)\\ln(n)}\\right) \\right) \\\\\n &= \\sum_{j \\in V} z_j (\\deg(j) - \\rho_j) \\pm \\e_1 \\sum_{j \\in V} z_j \\rho_j\n \\pm \\e_2 \\Delta \\sum_{j \\in V} z_j \\pm 2\\sum_{j \\in V} z_j \\sqrt{\\deg(j)\\ln(n)}\\\\\n &\\in \\sum_{j \\in V} z_j (\\deg(j) - \\rho_j) \\pm (3\\e_1 + 2\\e_2) m \\\\\n &\\in \\sum_{j \\in V} y^\\ast_j (\\deg(j) - \\rho_j) \\pm 3(\\e_1 + \\e_2) m\n\\end{align*}\nThe first inclusion holds because $\\vec{z}$ satisfies (\\ref{eq:deviation}) for all $j \\in V$. 
For the second inclusion, we use that $\\sum_{j \\in V} z_j \\rho_j \\leq \\sum_{j \\in V} \\deg(j) = 2m$, that $\\Delta \\sum_{i \\in V} z_i \\leq \\Delta n = 2m$ and that by Jensen's inequality,\n\\[\n 2 \\sum_{j \\in V} z_j \\sqrt{\\deg(j) \\ln n} \\leq\n \\sum_{j \\in V} \\sqrt{4\\,\\deg(j) \\ln n} \\leq\n \\sqrt{8 m n \\ln n} \\leq \\e_1 m\\,,\n\\]\nassuming that $n$ and $m = \\Omega(n^{1+\\delta})$ are sufficiently large. For the last inclusion, we recall that $\\sum_{j \\in V} z_j (\\deg(j) - \\rho_j) \\geq \\sum_{j \\in V} y^\\ast_j (\\deg(j) - \\rho_j) - 1$ and assume that $m$ is sufficiently large.\n\\qed\\end{proof}\n\n\\subsection{Putting Everything Together}\n\\label{s:together}\n\nTherefore, for any $\\eps > 0$, if $G$ is $\\delta$-almost sparse and $\\Delta = n^{\\delta}$, the algorithm described in Section~\\ref{s:cut_main}, with sample size $\\Theta(n \\ln n \/ (\\eps^3 \\Delta))$, computes estimations $\\rho_j$ such that the optimal cut $\\vec{x}^\\ast$ is a feasible solution to (IP) whp. Hence, by the analysis above, the algorithm approximates the value of the optimal cut $p(\\vec{x}^\\ast)$ within an additive term of $O(\\eps m)$. Specifically, setting $\\e_1 = \\e_2 = \\eps\/20$, the value of the cut $\\vec{z}$ produced by the algorithm satisfies the following with probability at least $1-2\/n^2$\\,:\n\\[ p(\\vec{z}) \\geq \\sum_{j \\in V} y_j^\\ast (\\deg(j) - \\rho_j) - 3 \\eps m\/10\n \\geq \\sum_{j \\in V} x_j^\\ast (\\deg(j) - \\rho_j) - 3 \\eps m\/10\n \\geq p(\\vec{x}^\\ast) - \\eps m \/ 2 \\geq (1-\\eps) p(\\vec{x}^\\ast)\n\\]\nThe first inequality follows from Lemma~\\ref{l:cut_approx2}. The second inequality holds because $\\vec{y}^\\ast$ is the optimal solution to (LP) and $\\vec{x}^\\ast$ is feasible for (LP). The third inequality follows from Lemma~\\ref{l:cut_approx}, and the fourth inequality holds because the optimal cut has at least $m\/2$ edges. 
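The sampling-based estimator of Section~\ref{s:cut_sampling} can also be sanity-checked numerically. The sketch below uses a toy random graph and deliberately generous constants (sample size $r = n$, loose tolerances), so it only illustrates the concentration behavior, not the paper's exact parameters:

```python
import random
from collections import Counter

random.seed(3)
n, p_edge = 300, 0.2
nbrs = [[] for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        if random.random() < p_edge:
            nbrs[i].append(j)
            nbrs[j].append(i)
Delta = sum(len(a) for a in nbrs) / n          # average degree, about 60 here

x = [random.randint(0, 1) for _ in range(n)]   # stand-in for the unknown optimal cut
r = n                                          # generous toy sample; the paper needs only Theta(n ln n / Delta)
R = Counter(random.randrange(n) for _ in range(r))  # multiset, sampled with replacement

eps1, eps2 = 0.25, 0.75                        # loose toy tolerances
for j in range(n):
    rho = (n / r) * sum(R[i] * x[i] for i in nbrs[j])   # sampled estimate of p_j(x)
    rho_hat = sum(x[i] for i in nbrs[j])                # true value p_j(x)
    assert abs(rho - rho_hat) <= eps1 * rho_hat + eps2 * Delta
```

Every vertex's estimate lands within the multiplicative-plus-additive window, which is exactly the kind of guarantee that makes $\vec{x}^\ast$ feasible for (IP) with high probability.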
 \n\\begin{theorem}\\label{th:maxcut}\nLet $G(V, E)$ be a $\\delta$-almost sparse graph with $n$ vertices. Then, for any $\\eps > 0$, we can compute, in time $2^{O(n^{1-\\delta} \\ln n\/\\eps^3)}$ and with probability at least $1-2\/n^2$, a cut $\\vec{z}$ of $G$ with value $p(\\vec{z}) \\geq (1-\\eps)p(\\vec{x}^\\ast)$, where $\\vec{x}^\\ast$ is the optimal cut.\n\\end{theorem}\n\\section{Approximate Maximization of Smooth Polynomials}\n\\label{s:pip}\n\nGeneralizing the ideas applied to \\MC, we arrive at the main algorithmic result\nof the paper: an algorithm to approximately optimize $\\beta$-smooth\n$\\delta$-bounded polynomials $p(\\vec{x})$ of degree $d$ over all binary vectors\n$\\vec{x} \\in \\{0, 1\\}^n$. The intuition and the main ideas are quite similar\nto those in Section~\\ref{s:maxcut}, but the details are significantly more\ninvolved because we are forced to recursively decompose degree-$d$ polynomials\nto eventually obtain a linear program. 
The structure of this section deliberately parallels the structure of Section~\\ref{s:maxcut}, so that the application to \\MC\\ can always serve as a reference for the intuition behind the generalization.\n\nAs in \\cite{AKK99} (and as explained in Section~\\ref{s:prelim}), we exploit the fact that any $n$-variate degree-$d$ $\\beta$-smooth polynomial $p(\\vec{x})$ can be decomposed into $n$ degree-$(d-1)$ $\\beta$-smooth polynomials $p_j(\\vec{x})$ such that $p(\\vec{x}) = c + \\sum_{j \\in N} x_j p_j(\\vec{x})$ (Proposition~\\ref{pr:decomposition}).\nFor smooth polynomials of degree $d \\geq 3$, we apply Proposition~\\ref{pr:decomposition} recursively until we end up with smooth polynomials of degree $1$.\nSpecifically, using Proposition~\\ref{pr:decomposition}, we further decompose each degree-$(d-1)$ $\\beta$-smooth polynomial $p_{i_1}(\\vec{x})$ into $n$ degree-$(d-2)$ $\\beta$-smooth polynomials $p_{i_1 j}(\\vec{x})$ such that\n$p_{i_1}(\\vec{x}) = c_{i_1} + \\sum_{j \\in N} x_j p_{i_1 j}(\\vec{x})$, etc.\nAt the basis of the recursion, at depth $d-1$, we have $\\beta$-smooth polynomials $p_{i_1\\ldots i_{d-1}}(\\vec{x})$ of degree $1$, one for each $(d-1)$-tuple of indices $(i_1, \\ldots, i_{d-1}) \\in N^{d-1}$. These polynomials are written as\n\\[ p_{i_1\\ldots i_{d-1}}(\\vec{x}) = c_{i_1\\ldots i_{d-1}} +\n \\sum_{j \\in N} x_j c_{i_1\\ldots i_{d-1} j}\\,,\n\\]\nwhere $c_{i_1\\ldots i_{d-1} j}$ are constants (these are the coefficients of the corresponding degree-$d$ monomials in the expansion of $p(\\vec{x})$). Due to $\\beta$-smoothness, $|c_{i_1\\ldots i_{d-1} j}| \\leq \\beta$ and $|c_{i_1\\ldots i_{d-1}}| \\leq \\beta n$. 
Inductively, $\\beta$-smoothness implies that each polynomial $p_{i_1\\ldots i_{d-\\ell}}(\\vec{x})$ of degree $\\ell \\geq 1$ in this decomposition%\n\\footnote{This decomposition can be performed in a unique way if we insist that $i_1 < i_2 < \\cdots < i_{d-1}$, but this is not important for our analysis.}\nhas $|p_{i_1\\ldots i_{d-\\ell}}(\\vec{x})| \\leq (\\ell+1) \\beta n^{\\ell}$ for all binary vectors $\\vec{x} \\in \\{0, 1\\}^n$. Such a decomposition of $p(\\vec{x})$ into $\\beta$-smooth polynomials of degree $d-1, d-2, \\ldots, 1$ can be computed recursively in time $O(n^d)$.\n\n\\subsection{Outline and General Approach}\n\\label{s:pip_outline}\n\nAs in Section~\\ref{s:maxcut} (and as in \\cite{AKK99}), we observe that if we have good estimations $\\rho_{i_1\\ldots i_{d-\\ell}}$ of the values of each degree-$\\ell$ polynomial $p_{i_1\\ldots i_{d-\\ell}}(\\vec{x})$ at the optimal solution $\\vec{x}^\\ast$, for each level $\\ell = 1, \\ldots, d-1$ of the decomposition, then approximate maximization of $p(\\vec{x})$ can be reduced to the solution of the following Integer Linear Program:\n\\begin{align}\n\\max \\sum_{j \\in N} y_j \\rho_j \\tag{$d$-IP}\\\\\n\\mathrm{s.t.}\\ \\ \\ \\ \\ \\ \\\nc_{i_1} + \\sum_{j\\in N} y_j \\rho_{i_1 j} & \\in\n \\rho_{i_1} \\pm \\e_1 \\rb_{i_1} \\pm \\e_2 n^{d-1+\\delta} &\n \\forall i_1 \\in N \\notag \\\\\nc_{i_1i_2} + \\sum_{j\\in N} y_j \\rho_{i_1 i_2 j} & \\in\n\\rho_{i_1 i_2} \\pm \\e_1 \\rb_{i_1 i_2} \\pm \\e_2 n^{d-2+\\delta} &\n\\forall (i_1, i_2) \\in N \\times N \\notag \\\\\n\\cdots \\notag\\\\\nc_{i_1 \\ldots i_{d-\\ell}} + \\sum_{j\\in N} y_j \\rho_{i_1 \\ldots i_{d-\\ell} j} & \\in\n\\rho_{i_1 \\ldots i_{d-\\ell}} \\pm \\e_1 \\rb_{i_1 \\ldots i_{d-\\ell}}\n\\pm \\e_2 n^{d-\\ell+\\delta} &\n\\forall (i_1, \\ldots, i_{d-\\ell}) \\in N^{d-\\ell} \\notag \\\\\n\\cdots \\notag\\\\\nc_{i_1 \\ldots i_{d-1}} + \\sum_{j\\in N} y_j c_{i_1 \\ldots i_{d-1} j} & \\in\n\\rho_{i_1 \\ldots i_{d-1}} \\pm \\e_1 \\rb_{i_1 \\ldots 
i_{d-1}}\\pm \\e_2 n^{\\delta} &\n\\forall (i_1, \\ldots, i_{d-1}) \\in N^{d-1} \\notag \\\\\ny_j & \\in \\{0, 1\\} & \\forall j \\in N \\notag\n\\end{align}\nIn ($d$-IP), we also use \\emph{absolute value estimations} $\\rb_{i_1 \\ldots i_{d-\\ell}}$. For each level $\\ell \\geq 1$ of the decomposition of $p(\\vec{x})$ and each tuple $(i_1, \\ldots, i_{d-\\ell}) \\in N^{d-\\ell}$, we define the corresponding absolute value estimation as $\\rb_{i_1 \\ldots i_{d-\\ell}} = \\sum_{j \\in N} |\\rho_{i_1 \\ldots i_{d-\\ell}j}|$. Namely, each absolute value estimation $\\rb_{i_1 \\ldots i_{d-\\ell}}$ at level $\\ell$ is the sum of the absolute values of the estimations $\\rho_{i_1 \\ldots i_{d-\\ell}j}$ at level $\\ell-1$.\nThe reason that we use absolute value estimations and set the lhs\/rhs of the constraints to $\\rho_{i_1 \\ldots i_{d-\\ell}} \\pm \\e_1 \\rb_{i_1 \\ldots i_{d-\\ell}}$, instead of simply to $(1\\pm\\e_1)\\rho_{i_1 \\ldots i_{d-\\ell}}$, is that we want to consider linear combinations of positive and negative estimations $\\rho_{i_1 \\ldots i_{d-\\ell}}$ in a uniform way.\n\n\nSimilarly to Section~\\ref{s:maxcut}, the estimations $\\rho_{i_1 \\ldots i_{d-\\ell}}$ (and $\\rb_{i_1 \\ldots i_{d-\\ell}}$) are computed (by exhaustive sampling) and the constants $\\e_1, \\e_2 > 0$ are calculated so that the optimal solution $\\vec{x}^\\ast$ is a feasible solution to ($d$-IP). In the following, we let $\\vec{\\rho}$ denote the sequence of estimations $\\rho_{i_1 \\ldots i_{d-\\ell}}$, for all levels $\\ell$ and all tuples $(i_1, \\ldots, i_{d-\\ell}) \\in N^{d-\\ell}$, that we use to formulate ($d$-IP). The absolute value estimations $\\rb_{i_1 \\ldots i_{d-\\ell}}$ can be easily computed from $\\vec{\\rho}$. 
We let ($d$-LP) denote the Linear Programming relaxation of ($d$-IP), where each $y_j \\in [0, 1]$, let $\\vec{x}^\\ast$ denote the binary vector that maximizes $p(\\vec{x})$, and let $\\vec{y}^\\ast \\in [0,1]^n$ denote the fractional optimal solution of ($d$-LP).\n\nAs in Section~\\ref{s:maxcut}, the approach is based on the facts that (i) for all constants $\\e_1, \\e_2 > 0$, we can compute estimations $\\vec{\\rho}$, by exhaustive sampling, so that $\\vec{x}^\\ast$ is a feasible solution to ($d$-IP) with high probability (see Lemma~\\ref{l:sampling} and Lemma~\\ref{l:sampling_gen}); and that (ii) the objective value of any feasible solution $\\vec{y}$ to ($d$-LP) is close to $p(\\vec{y})$ (see Lemma~\\ref{l:approx} and Lemma~\\ref{l:approx_gen}). Based on these observations, the general description of the approximation algorithm is essentially identical to the three steps described in Section~\\ref{s:cut_main} and the reasoning behind the approximation guarantee is that of (\\ref{eq:cut_est}).\n\n\\subsection{Obtaining Estimations by Exhaustive Sampling}\n\\label{s:pip_sampling}\n\nWe first show how to use exhaustive sampling and obtain an estimation $\\rho_{i_1\\ldots i_{d-\\ell}}$ of the value at the optimal solution $\\vec{x}^\\ast$ of each degree-$\\ell$ polynomial $p_{i_1\\ldots i_{d-\\ell}}(\\vec{x})$ in the decomposition of $p(\\vec{x})$.\n\nAs in Section~\\ref{s:cut_sampling}, we take a sample $R$ from $N$, uniformly at random and with replacement. The sample size is $r = \\Theta(n^{1-\\delta} \\ln n)$. 
We try exhaustively all $0\/1$ assignments to the variables in $R$, which can be performed in time $2^r = 2^{O(n^{1-\\delta}\\ln n)}$.\n\n\\begin{algorithm}[t]\n\\caption{\\label{alg:estimate}Recursive estimation procedure $\\mathrm{Estimate}(p_{i_1\\ldots i_{d-\\ell}}(\\vec{x}), \\ell, R, \\vec{s})$}\n\\begin{algorithmic}\\normalsize\n \\Require $n$-variate degree-$\\ell$ polynomial $p_{i_1\\ldots i_{d-\\ell}}(\\vec{x})$, $R \\subseteq N$ and a value $s_j \\in \\{0,1\\}$ for each $j \\in R$\n \\Ensure Estimation $\\rho_{i_1\\ldots i_{d-\\ell}}$ of $p_{i_1\\ldots i_{d-\\ell}}(\\overline{\\vec{s}})$, where $\\overline{\\vec{s}}_R = \\vec{s}$\n\n \\medskip\\If{$\\ell = 0$} \\Return $c_{i_1\\ldots i_{d}}$\n \\ \\ \\ \/* $p_{i_1\\ldots i_{d}}(\\vec{x})$ is equal to the constant $c_{i_1\\ldots i_{d}}$ *\/ \\EndIf\n \\State compute decomposition\n $p_{i_1\\ldots i_{d-\\ell}}(\\vec{x}) =\n c_{i_1\\ldots i_{d-\\ell}} + \\sum_{j \\in N} x_j p_{i_1\\ldots i_{d-\\ell}j}(\\vec{x})$\n \\For{all $j \\in N$}\n \\State $\\rho_{i_1\\ldots i_{d-\\ell}j} \\leftarrow \\mathrm{Estimate}(p_{i_1\\ldots i_{d-\\ell}j}(\\vec{x}), \\ell-1, R, \\vec{s})$\n \\EndFor\n \\State $\\rho_{i_1\\ldots i_{d-\\ell}} \\leftarrow c_{i_1\\ldots i_{d-\\ell}} + \\frac{|N|}{|R|} \\sum_{j \\in R} s_j \\rho_{i_1\\ldots i_{d-\\ell}j}$\\\\\n \\Return $\\rho_{i_1\\ldots i_{d-\\ell}}$\n\\end{algorithmic}\\end{algorithm}\n\nFor each assignment, described by a $0\/1$ vector $\\vec{s}$ restricted to $R$,\nwe compute the corresponding estimations recursively, as described in Algorithm~\\ref{alg:estimate}. 
Specifically, for the basis level $\\ell = 0$ and each $d$-tuple $(i_1, \\ldots, i_d) \\in N^d$ of indices, the corresponding estimation is the coefficient $c_{i_1\\ldots i_d}$ of the monomial $x_{i_1}\\cdots x_{i_d}$ in the expansion of $p(\\vec{x})$.\nFor each level $\\ell$, $1 \\leq \\ell \\leq d-1$, and each $(d-\\ell)$-tuple $(i_1, \\ldots, i_{d-\\ell}) \\in N^{d-\\ell}$, given the level-$(\\ell-1)$ estimations $\\rho_{i_1\\ldots i_{d-\\ell} j}$ of $p_{i_1\\ldots i_{d-\\ell} j}(\\overline{\\vec{s}})$, for all $j \\in N$, we compute the level-$\\ell$ estimation $\\rho_{i_1\\ldots i_{d-\\ell}}$ of $p_{i_1\\ldots i_{d-\\ell}}(\\overline{\\vec{s}})$ from $\\vec{s}$ as follows:\n\\begin{equation}\\label{eq:estimation}\n \\rho_{i_1\\ldots i_{d-\\ell}} = c_{i_1\\ldots i_{d-\\ell}} +\n \\frac{n}{r} \\sum_{j \\in R} s_j \\rho_{i_1\\cdots i_{d-\\ell} j}\n\\end{equation}\nIn Algorithm~\\ref{alg:estimate}, $\\overline{\\vec{s}}$ is any vector in $\\{ 0, 1 \\}^n$ that agrees with $\\vec{s}$ on the variables of $R$. Given the estimations $\\rho_{i_1 \\ldots i_{d-\\ell}j}$, for all $j \\in N$, we can also compute the absolute value estimations $\\rb_{i_1 \\ldots i_{d-\\ell}}$ at level $\\ell$. Due to the $\\beta$-smoothness property of $p(\\vec{x})$, we have that $|c_{i_1\\ldots i_{d-\\ell}}| \\leq \\beta n^\\ell$, for all levels $\\ell \\geq 0$. Moreover, we assume that $0 \\leq \\rb_{i_1\\ldots i_{d-\\ell}} \\leq \\ell\\beta n^{\\ell}$ and $|\\rho_{i_1\\ldots i_{d-\\ell}}| \\leq (\\ell+1)\\beta n^{\\ell}$, for all levels $\\ell \\geq 1$. This assumption is wlog. 
because due to $\\beta$-smoothness, any binary vector $\\vec{x}$ is feasible for ($d$-IP) with such values for the estimations $\\rho_{i_1\\ldots i_{d-\\ell}}$ and the absolute value estimations $\\rb_{i_1\\ldots i_{d-\\ell}}$\\,.\n\\begin{remark}\nFor simplicity, we state Algorithm~\\ref{alg:estimate} so that it computes, from $\\vec{s}$, an estimation $\\rho_{i_1\\ldots i_{d-\\ell}}$ of the value of a given degree-$\\ell$ polynomial $p_{i_1\\ldots i_{d-\\ell}}(\\vec{x})$ at $\\overline{\\vec{s}}$. So, we need to apply Algorithm~\\ref{alg:estimate} $O(n^{d-1})$ times, one for each polynomial that arises in the recursive decomposition, with the same sample $R$ and the same assignment $\\vec{s}$. We can easily modify Algorithm~\\ref{alg:estimate} so that a single call $\\mathrm{Estimate}(p(\\vec{x}), d, R, \\vec{s})$ computes the estimations of all the polynomials that arise in the recursive decomposition of $p(\\vec{x})$. Thus, we save a factor of $d$ on the running time. The running time of the simple version is $O(dn^d)$, while the running time of the modified version is $O(n^d)$.\n\\end{remark}\n\n\\subsection{Sampling Lemma}\n\\label{s:sampling}\n\nWe use the next lemma to show that if $\\vec{s} = \\vec{x}^\\ast_R$, the estimations $\\rho_{i_1\\ldots i_{d-\\ell}}$ computed by Algorithm~\\ref{alg:estimate} are close to $c_{i_1\\ldots i_{d-\\ell}} + \\sum_{j \\in N} x^\\ast_j \\rho_{i_1\\ldots i_{d-\\ell} j}$ with high probability.\n\\begin{lemma}\\label{l:sampling}\nLet $\\vec{x}$ be any binary vector and let $( \\rho_j )_{j \\in N}$ be any sequence such that for some integer $q \\geq 0$ and some constant $\\beta \\geq 1$, $\\rho_j \\in [0, (q+1)\\beta n^q]$, for all $j \\in N$. 
For all integers $d \\geq 1$ and for all $\\alpha_1, \\alpha_2 > 0$, we let $\\gamma = \\Theta(d (q+1) \\beta\/(\\alpha_1^2 \\alpha_2))$ and let $R$ be a multiset of $r = \\gamma n^{1-\\delta} \\ln n$ indices chosen uniformly at random with replacement from $N$, where $\\delta \\in (0, 1]$ is any constant. If $\\rho = (n \/ r) \\sum_{j \\in R} \\rho_{j} x_j$ and $\\hat{\\rho} = \\sum_{j \\in N} \\rho_{j} x_j$, with probability at least $1 - 2\/n^{d+1}$,\n\\begin{equation}\\label{eq:pip_sample}\n (1-\\alpha_1)\\hat{\\rho} - (1-\\alpha_1)\\alpha_2 n^{q+\\delta} \\leq \\rho \\leq\n (1+\\alpha_1)\\hat{\\rho} + (1+\\alpha_1)\\alpha_2 n^{q+\\delta}\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nTo provide some intuition, we observe that if $\\hat{\\rho} = \\Omega(n^{q+\\delta})$, we have $\\Omega(n^\\delta)$ values $\\rho_j = \\Theta(n^q)$. These values are well-represented in the random sample $R$, with high probability, since the size of the sample is $\\Theta(n^{1-\\delta} \\ln n)$. Therefore, $|\\hat{\\rho} - \\rho| \\leq \\alpha_1\\hat{\\rho}$, with high probability, by standard Chernoff bounds. If $\\hat{\\rho} = o(n^{q+\\delta})$, the lower bound in (\\ref{eq:pip_sample}) becomes trivial, since it is non-positive, while $\\rho \\geq 0$. As for the upper bound, we increase the coefficients $\\rho_j$ to $\\rho'_j \\in [0, (q+1)\\beta n^q]$, so that $\\hat{\\rho}' = \\alpha_2 n^{q+\\delta}$. Then, $\\rho' \\leq (1+\\alpha_1)\\hat{\\rho}' = (1+\\alpha_1)\\alpha_2 n^{q+\\delta}$, with high probability, by the same Chernoff bound as above. Now the upper bound of (\\ref{eq:pip_sample}) follows from $\\rho \\leq \\rho'$, which holds for any instantiation of the random sample $R$.\n\nWe proceed to formalize the idea above. For simplicity of notation, we let $B = (q+1)\\beta n^q$ and $a_2 = \\alpha_2\/((q+1)\\beta)$ throughout the proof. For each sample $l$, $l = 1, \\ldots, r$, we let $X_l$ be a random variable distributed in $[0, 1]$. 
For each index $j$, if the $l$-th sample is $j$, $X_l$ becomes $\\rho_{j} \/ B$, if $x_j = 1$, and becomes $0$, otherwise. Therefore, $\\Exp[X_l] = \\hat{\\rho} \/ (B n)$. We let $X = \\sum_{l = 1}^r X_l$. Namely, $X$ is the sum of $r$ independent random variables identically distributed in $[0, 1]$. Using that $r = \\gamma n^{1-\\delta} \\ln n$, we have that $\\Exp[X] = \\gamma \\hat{\\rho} \\ln n \/ (B n^\\delta)$ and that $\\rho = B n X\/r = B n^\\delta X \/ (\\gamma \\ln n)$.\n\nWe distinguish between the case where $\\hat{\\rho} \\geq a_2 B n^{\\delta}$ and the case where $\\hat{\\rho} < a_2 B n^{\\delta}$.\nWe start with the case where $\\hat{\\rho} \\geq a_2 B n^{\\delta}$. Then, by Chernoff bounds%\n\\footnote{\\label{foot:chernoff2}We use the following bound (see e.g., \\cite[Theorem~1.1]{DP09}): Let $Y_1, \\ldots, Y_k$ be independent random variables identically distributed in $[0, 1]$ and let $Y = \\sum_{j=1}^k Y_j$. Then for all $\\e \\in (0, 1)$, $\\Prob[|Y - \\Exp[Y]| > \\e\\, \\Exp[Y]] \\leq 2\\exp(-\\e^2\\,\\Exp[Y]\/3)$.},\n\\begin{eqnarray*}\n \\Prob[|X - \\Exp[X]| > \\alpha_1 \\Exp[X]] & \\leq &\n 2\\exp\\!\\left(-\\frac{\\alpha_1^2 \\gamma \\hat{\\rho} \\ln n}{3 B n^{\\delta} }\\right) \\\\\n & \\leq & 2\\exp(-\\alpha_1^2 a_2 \\gamma \\ln n \/ 3) \\leq 2\/n^{d+1}\n\\end{eqnarray*}\nFor the second inequality, we use that $\\hat{\\rho} \\geq a_2 B n^{\\delta}$. For the last inequality, we use that $\\gamma \\geq 3(d+1)\/(\\alpha_1^2 a_2) = 3(d+1)(q+1)\\beta\/(\\alpha_1^2 \\alpha_2)$, since $a_2 = \\alpha_2\/((q+1)\\beta)$. 
Therefore, with probability at least $1 - 2\/n^{d+1}$,\n\\[\n (1-\\alpha_1) \\frac{\\gamma \\hat{\\rho} \\ln n}{B n^\\delta} \\leq X \\leq\n (1+\\alpha_1) \\frac{\\gamma \\hat{\\rho} \\ln n}{B n^\\delta}\n\\]\nMultiplying everything by $B n \/ r = B n^\\delta \/(\\gamma \\ln n)$, we have that with probability at least $1-2\/n^{d+1}$, $(1-\\alpha_1) \\hat{\\rho} \\leq \\rho \\leq (1+\\alpha_1) \\hat{\\rho}$, which clearly implies (\\ref{eq:pip_sample}).\n\nWe proceed to the case where $\\hat{\\rho} < a_2 B n^{\\delta}$. Then, $(1-\\alpha_1)\\hat{\\rho} < (1-\\alpha_1) a_2 B n^{\\delta} = (1-\\alpha_1) \\alpha_2 n^{q+\\delta}$. Therefore, since $\\rho \\geq 0$, because $\\rho_j \\geq 0$, for all $j \\in N$, the lower bound of (\\ref{eq:pip_sample}) on $\\rho$ is trivial.\nFor the upper bound, we show that with probability at least $1-1\/n^{d+1}$, $\\rho \\leq (1+\\alpha_1) a_2 B n^{\\delta} = (1+\\alpha_1)\\alpha_2 n^{q+\\delta}$. To this end, we consider a sequence $(\\rho'_j)_{j \\in N}$ so that $\\rho_j \\leq \\rho'_j \\leq (q+1)\\beta n^q$, for all $j \\in N$, and\n\\( \\hat{\\rho}' = \\sum_{j \\in N} \\rho'_{j} x_j = a_2 B n^{\\delta} \\).\nWe can obtain such a sequence by increasing an appropriate subset of $\\rho_j$ up to $(q+1)\\beta n^q$ (if $\\vec{x}$ does not contain enough $1$'s, we may also change some $x_j$ from $0$ to $1$).\nFor the new sequence, we let $\\rho' = (n \/ r) \\sum_{j \\in R} \\rho'_{j} x_j$ and observe that $\\rho \\leq \\rho'$, for any instantiation of the random sample $R$.\nTherefore,\n\\[ \\Prob[\\rho > (1+\\alpha_1)\\alpha_2 n^{q+\\delta}] \\leq\n \\Prob[\\rho' > (1+\\alpha_1)\\hat{\\rho}']\\,,\n\\]\nwhere we use that $\\hat{\\rho}' = a_2 B n^\\delta = \\alpha_2 n^{q+\\delta}$.\nBy the choice of $\\hat{\\rho}'$, we can apply the same Chernoff bound as above and obtain that $\\Prob[\\rho' > (1+\\alpha_1)\\hat{\\rho}'] \\leq 1\/n^{d+1}$.\n\\qed\\end{proof}\nLemma~\\ref{l:sampling} is enough for \\MC\\ and graph optimization problems, 
where the estimations $\\rho_{i_1\\ldots i_{d-\\ell} j}$ are non-negative. For arbitrary smooth polynomials however, the estimations $\\rho_{i_1\\ldots i_{d-\\ell} j}$ may also be negative. So, we need a generalization of Lemma~\\ref{l:sampling} that deals with both positive and negative estimations. To this end, given a sequence of estimations $( \\rho_j )_{j \\in N}$, with $\\rho_j \\in [-(q+1)\\beta n^q, (q+1)\\beta n^q]$, we let $\\rho^+_j = \\max\\{\\rho_j, 0\\}$ and $\\rho^-_j = \\min\\{ \\rho_j, 0\\}$, for all $j \\in N$. Namely, $\\rho^+_j$ (resp. $\\rho^-_j$) is equal to $\\rho_j$, if $\\rho_j$ is positive (resp. negative), and $0$, otherwise. Moreover, we let \n\\[ \\rho^+ = (n \/ r) \\sum_{j \\in R} \\rho^+_{j} x_j\\,,\\ \\ \n \\hat{\\rho}^+ = \\sum_{j \\in N} \\rho^+_{j} x_j\\,,\\ \\ \n \\rho^- = (n \/ r) \\sum_{j \\in R} \\rho^-_{j} x_j \\mbox{\\ \\ and\\ \\ } \n \\hat{\\rho}^- = \\sum_{j \\in N} \\rho^-_{j} x_j \n\\]\nApplying Lemma~\\ref{l:sampling} once for positive estimations and once for negative estimations (with the absolute values of $\\rho_j^-$, $\\rho^-$ and $\\hat{\\rho}^-$, instead), we obtain that with probability at least $1 - 4\/n^{d+1}$, the following inequalities hold:\n\\begin{eqnarray*}\n (1-\\alpha_1)\\hat{\\rho}^+ - (1-\\alpha_1)\\alpha_2 n^{q+\\delta} \\leq & \\rho^+ & \\leq\n (1+\\alpha_1)\\hat{\\rho}^+ + (1+\\alpha_1)\\alpha_2 n^{q+\\delta} \\\\\n (1+\\alpha_1)\\hat{\\rho}^- - (1+\\alpha_1)\\alpha_2 n^{q+\\delta} \\leq & \\rho^- & \\leq\n (1-\\alpha_1)\\hat{\\rho}^- + (1-\\alpha_1)\\alpha_2 n^{q+\\delta}\n\\end{eqnarray*}\nUsing that $\\rho = \\rho^+ + \\rho^-$ and that $\\hat{\\rho} = \\hat{\\rho}^+ + \\hat{\\rho}^-$, we obtain the following generalization of Lemma~\\ref{l:sampling}.\n\\begin{lemma}[Sampling Lemma]\\label{l:sampling_gen}\nLet $\\vec{x} \\in \\{0, 1\\}^n$ and let $( \\rho_j )_{j \\in N}$ be any sequence such that for some integer $q \\geq 0$ and some constant $\\beta \\geq 1$, $|\\rho_j| \\leq (q+1)\\beta n^q$, 
for all $j \\in N$. For all integers $d \\geq 1$ and for all $\\alpha_1, \\alpha_2 > 0$, we let $\\gamma = \\Theta(d q \\beta\/(\\alpha_1^2 \\alpha_2))$ and let $R$ be a multiset of $r = \\gamma n^{1-\\delta} \\ln n$ indices chosen uniformly at random with replacement from $N$, where $\\delta \\in (0, 1]$ is any constant. If $\\rho = (n \/ r) \\sum_{j \\in R} \\rho_{j} x_j$, $\\hat{\\rho} = \\sum_{j \\in N} \\rho_{j} x_j$ and $\\rb = \\sum_{j \\in N} |\\rho_j|$, with probability at least $1 - 4\/n^{d+1}$,\n\\begin{equation}\\label{eq:pip_sample_gen}\n \\hat{\\rho} - \\alpha_1 \\rb - 2\\alpha_2 n^{q+\\delta} \\leq \\rho \\leq\n \\hat{\\rho} + \\alpha_1 \\rb + 2\\alpha_2 n^{q+\\delta}\n\\end{equation}\n\\end{lemma}\nFor all constants $\\e_1, \\e_2 > 0$ and all constants $c$, we use Lemma~\\ref{l:sampling_gen} with $\\alpha_1 = \\e_1$ and $\\alpha_2 = \\e_2\/2$ and obtain that for $\\gamma = \\Theta(d q \\beta \/(\\e^2_1 \\e_2))$, with probability at least $1 - 4\/n^{d+1}$, the following holds for any binary vector $\\vec{x}$ and any sequence of estimations $( \\rho_j )_{j \\in N}$ produced by Algorithm~\\ref{alg:estimate} with $\\vec{s} = \\vec{x}_R$ (note that in Algorithm~\\ref{alg:estimate}, the additive constant $c$ is included in the estimation $\\rho$ when its value is computed from the estimations $\\rho_j$).\n\\begin{equation}\\label{eq:pip_sample2}\n \\overbrace{c+\\frac{n}{r}\\sum_{j \\in R} \\rho_j x_j}^{\\rho}\n - \\e_1 \\overbrace{\\sum_{j \\in N} |\\rho_j|}^{\\rb} - \\e_2 n^{q+\\delta} \\leq\n c + \\sum_{j \\in N} x_j \\rho_j \\leq\n \\overbrace{c+\\frac{n}{r}\\sum_{j \\in R} \\rho_j x_j}^{\\rho} \n + \\e_1 \\overbrace{\\sum_{j \\in N} |\\rho_j|}^{\\rb} + \\e_2 n^{q+\\delta}\n\\end{equation}\nNow, let us consider ($d$-IP) with the estimations computed by Algorithm~\\ref{alg:estimate} with $\\vec{s} = \\vec{x}^\\ast_R$ (i.e., with the optimal assignment for the variables in the random sample $R$). 
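As a quick sanity check of the estimator behind the Sampling Lemma, the following toy sketch (hypothetical Python with made-up data; not part of the paper's algorithm) draws a small sample R with replacement and compares rho = (n / r) * sum_{j in R} rho_j x_j against the exact sum hat_rho = sum_{j in N} rho_j x_j:

```python
import random

# Toy check of the Sampling Lemma estimator (made-up data): approximate
# hat_rho = sum_j rho_j * x_j by rho = (n / r) * sum_{j in R} rho_j * x_j,
# where R is a multiset of r indices drawn uniformly with replacement.
random.seed(0)
n = 100_000
x = [random.randint(0, 1) for _ in range(n)]    # a binary vector
rho = [random.random() for _ in range(n)]       # coefficients, here with q = 0

hat_rho = sum(rho[j] * x[j] for j in range(n))  # the exact weighted sum

r = 4_000                                       # roughly n^{1-delta} * ln n
R = [random.randrange(n) for _ in range(r)]     # sample with replacement
est = (n / r) * sum(rho[j] * x[j] for j in R)   # the scaled sample estimate

# For sums of order n (here hat_rho is about n/4), the relative error is
# small with high probability, matching the multiplicative part of the lemma.
rel_err = abs(est - hat_rho) / hat_rho
assert rel_err < 0.1
```

The additive term of the lemma covers exactly the sums that are too small for such a multiplicative guarantee to hold.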
Then, using (\\ref{eq:pip_sample2}) and taking the union bound over all constraints, which are at most $2n^{d-1}$, we obtain that with probability at least $1-8\/n^2$, the optimal solution $\\vec{x}^\\ast$ is a feasible solution to ($d$-IP). So, from now on, we condition on the high probability event that $\\vec{x}^\\ast$ is a feasible solution to ($d$-IP) and to ($d$-LP).\n\n\\subsection{The Value of Feasible Solutions to ($d$-LP)}\n\\label{s:pip_value}\n\nFrom now on, we focus on estimations $\\vec{\\rho}$ produced by $\\mathrm{Estimate}(p(\\vec{x}), d, R, \\vec{s})$, where $R$ is a random sample from $N$ and $\\vec{s} = \\vec{x}^\\ast_R$, and the corresponding programs ($d$-IP) and ($d$-LP). The analysis in Section~\\ref{s:pip_sampling} implies that $\\vec{x}^\\ast$ is a feasible solution to ($d$-IP) (and to ($d$-LP)), with high probability.\n\nWe next show that for any feasible solution $\\vec{y}$ of ($d$-LP) and any polynomial $q(\\vec{x})$ in the decomposition of $p(\\vec{x})$, the value of $q(\\vec{y})$ is close to the value of $c + \\sum_j y_j \\rho_j$ in the constraint of ($d$-LP) corresponding to $q$. Applying Lemma~\\ref{l:approx}, we show below (see Lemma~\\ref{l:approx_gen}) that $p(\\vec{y})$ is close to $c+\\sum_{j \\in N} y_j \\rho_j$, i.e., to the objective value of $\\vec{y}$ in ($d$-LP) and ($d$-IP), for any feasible solution $\\vec{y}$.\n\nTo state and prove the following lemma, we introduce \\emph{cumulative absolute value estimations} $\\tb_{i_1 \\ldots i_{d-\\ell}}$\\,, defined recursively as follows:\nFor level $\\ell = 1$ and each tuple $(i_1, \\ldots, i_{d-1}) \\in N^{d-1}$, we let $\\tb_{i_1 \\ldots i_{d-1}} = \\rb_{i_1 \\ldots i_{d-1}} = \\sum_{j \\in N} |c_{i_1 \\ldots i_{d-1}j}|$.\nFor each level $\\ell \\geq 2$ of the decomposition of $p(\\vec{x})$ and each tuple $(i_1, \\ldots, i_{d-\\ell}) \\in N^{d-\\ell}$, we let $\\tb_{i_1 \\ldots i_{d-\\ell}} = \\rb_{i_1 \\ldots i_{d-\\ell}} + \\sum_{j \\in N} \\tb_{i_1 \\ldots i_{d-\\ell}j}$. 
Namely, each cumulative absolute value estimation $\\tb_{i_1 \\ldots i_{d-\\ell}}$ is equal to the sum of all absolute value estimations that appear in the decomposition tree of $p_{i_1 \\ldots i_{d-\\ell}}(\\vec{x})$.\n\\begin{lemma}\\label{l:approx}\nLet $q(\\vec{x})$ be any degree-$\\ell$ polynomial appearing in the decomposition of $p(\\vec{x})$, let $q(\\vec{x}) = c+\\sum_{j \\in N} x_j q_j(\\vec{x})$ be the decomposition of $q(\\vec{x})$, let $\\rho$ and $\\{ \\rho_j \\}_{j \\in N}$ be the estimations of $q$ and $\\{ q_j \\}_{j \\in N}$ produced by Algorithm~\\ref{alg:estimate} and used in ($d$-LP), and let $\\tb$ and $\\{ \\tb_j \\}_{j \\in N}$ be the corresponding cumulative absolute value estimations. Then, for any feasible solution $\\vec{y}$ of ($d$-LP)\n\\begin{equation}\\label{eq:approx}\n\\rho - \\e_1 \\tb - \\ell \\e_2 n^{\\ell - 1+\\delta} \\leq q(\\vec{y}) \\leq\n\\rho + \\e_1 \\tb + \\ell \\e_2 n^{\\ell - 1+\\delta}\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nThe proof is by induction on the degree $\\ell$. The basis, for $\\ell=1$, is trivial, because in the decomposition of $q(\\vec{x})$, each $q_j(\\vec{x})$ is a constant $c_j$. Therefore, Algorithm~\\ref{alg:estimate} outputs $\\rho_j = c_j$ and\n\\[ q(\\vec{y}) = c + \\sum_{j \\in N} y_j q_j(\\vec{y})\n = c + \\sum_{j \\in N} y_j c_j\n \\in \\rho \\pm \\e_1 \\tb \\pm \\e_2 n^{\\delta}\\,,\n \\]\nwhere the inclusion follows from the feasibility of $\\vec{y}$ for ($d$-LP). We also use that at level $\\ell = 1$, $\\tb = \\rb$ (i.e., cumulative absolute value estimations and absolute value estimations are identical).\n\nWe inductively assume that (\\ref{eq:approx}) is true for all degree-$(\\ell-1)$ polynomials $q_j(\\vec{x})$ that appear in the decomposition of $q(\\vec{x})$ and establish the lemma for $q(\\vec{x}) = c + \\sum_{j \\in N} x_j q_j(\\vec{x})$. 
We have that:\n\\begin{align*}\n q(\\vec{y}) = c + \\sum_{j \\in N} y_j q_j(\\vec{y}) & \\in\n c + \\sum_{j \\in N} y_j \\left( \\rho_j \\pm \\e_1 \\tb_j\n \\pm (\\ell-1) \\e_2 n^{\\ell-2+\\delta} \\right)\\\\\n &= \\left(c + \\sum_{j \\in N} y_j \\rho_j \\right)\n \\pm \\e_1 \\sum_{j \\in N} y_j \\tb_j\n \\pm (\\ell-1) \\e_2 \\sum_{j \\in N} y_j n^{\\ell-2+\\delta} \\\\\n &\\in \\left(\\rho \\pm \\e_1 \\rb \\pm \\e_2 n^{\\ell-1+\\delta}\\right)\n \\pm \\e_1 \\sum_{j \\in N} \\tb_j \\pm (\\ell-1) \\e_2 n^{\\ell-1+\\delta}\\\\\n &\\in \\rho \\pm \\e_1 \\tb \\pm \\ell \\e_2 n^{\\ell-1+\\delta}\n\\end{align*}\nThe first inclusion holds by the induction hypothesis. The second inclusion holds because (i) $\\vec{y}$ is a feasible solution to ($d$-LP) and thus, $c + \\sum_{j \\in N} y_j \\rho_j$ satisfies the corresponding constraint; (ii) $\\sum_{j \\in N} y_j \\tb_j \\leq \\sum_{j \\in N} \\tb_j$; and (iii) $\\sum_{j \\in N} y_j \\leq n$. The last inclusion holds because $\\tb = \\rb + \\sum_{j \\in N} \\tb_j$, by the definition of cumulative absolute value estimations.\n\\qed\\end{proof}\nUsing Lemma~\\ref{l:approx} and the notion of cumulative absolute value estimations, we next show that $p(\\vec{y})$ is close to $c+\\sum_{j \\in N} y_j \\rho_j$, for any feasible solution $\\vec{y}$.\n\\begin{lemma}\\label{l:approx_gen}\nLet $p(\\vec{x}) = c+\\sum_{j \\in N} x_j p_j(\\vec{x})$ be the decomposition of $p(\\vec{x})$, let $\\{ \\rho_j \\}_{j \\in N}$ be the estimations of $\\{ p_j \\}_{j \\in N}$ produced by Algorithm~\\ref{alg:estimate} and used in ($d$-LP), and let $\\{ \\tb_j \\}_{j \\in N}$ be the corresponding cumulative absolute value estimations. 
Then, for any feasible solution $\\vec{y}$ of ($d$-LP)\n\\begin{equation}\\label{eq:approx_gen}\n p(\\vec{y}) \\in\n c+\\sum_{j \\in N} y_j \\rho_j \\pm \\e_1 \\sum_{j \\in N} \\tb_j \\pm (d-1)\\e_2 n^{d-1+\\delta}\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nBy Lemma~\\ref{l:approx}, for any polynomial $p_j$, $p_j(\\vec{y}) \\in \\rho_j \\pm \\e_1 \\tb_j \\pm (d-1) \\e_2 n^{d-2+\\delta}$. Therefore,\n\\begin{align*}\n p(\\vec{y}) = c + \\sum_{j \\in N} y_j p_j(\\vec{y}) & \\in\n c + \\sum_{j \\in N} y_j \\left( \\rho_j \\pm \\e_1 \\tb_j\n \\pm (d-1) \\e_2 n^{d-2+\\delta} \\right)\\\\\n &= c + \\sum_{j \\in N} y_j \\rho_j\n \\pm \\e_1 \\sum_{j \\in N} y_j \\tb_j\n \\pm (d-1) \\e_2 \\sum_{j \\in N} y_j n^{d-2+\\delta} \\\\\n &\\in c + \\sum_{j \\in N} y_j \\rho_j\n \\pm \\e_1 \\sum_{j \\in N} \\tb_j\n \\pm (d-1) \\e_2 n^{d-1+\\delta}\n\\end{align*}\nThe second inclusion holds because $y_j \\in [0,1]$ and $\\sum_{j \\in N} y_j \\leq n$.\n\\qed\\end{proof}\n\n\\subsection{Randomized Rounding of the Fractional Optimum}\n\\label{s:pip_rounding}\n\nThe last step is to round the fractional optimum $\\vec{y}^\\ast = (y^\\ast_1, \\ldots, y^\\ast_n)$ of ($d$-LP) to an integral solution $\\vec{z} = (z_1, \\ldots, z_n)$ that almost satisfies the constraints of ($d$-IP) and has an expected objective value for ($d$-IP) very close to the objective value of $\\vec{y}^\\ast$.\n\nTo this end, we use randomized rounding, as in \\cite{RT87}. In particular, we set independently each $z_j$ to $1$, with probability $y_j^\\ast$, and to $0$, with probability $1-y_j^\\ast$. The analysis is based on the following lemma, whose proof is similar to the proof of Lemma~\\ref{l:sampling}.\n\\begin{lemma}\\label{l:rounding}\nLet $\\vec{y} \\in [0, 1]^n$ be any fractional vector and let $\\vec{z} \\in \\{0, 1\\}^n$ be an integral vector obtained from $\\vec{y}$ by randomized rounding. 
Also, let $( \\rho_j )_{j \\in N}$ be any sequence such that for some integer $q \\geq 0$ and some constant $\\beta \\geq 1$, $\\rho_j \\in [0, (q+1)\\beta n^q]$, for all $j \\in N$. For all integers $k \\geq 1$ and for all constants $\\alpha, \\delta > 0$ (and assuming that $n$ is sufficiently large), if $\\rho = \\sum_{j \\in N} \\rho_{j} z_j$ and $\\hat{\\rho} = \\sum_{j \\in N} \\rho_{j} y_j$, with probability at least $1 - 2\/n^{k+1}$,\n\\begin{equation}\\label{eq:rounding}\n (1-\\alpha)\\hat{\\rho} - (1-\\alpha)\\alpha n^{q+\\delta} \\leq \\rho \\leq\n (1+\\alpha)\\hat{\\rho} + (1+\\alpha)\\alpha n^{q+\\delta}\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nWe first note that $\\Exp[\\rho] = \\hat{\\rho}$. If $\\hat{\\rho} = \\Omega(n^{q} \\ln n)$, then $|\\rho - \\hat{\\rho}| \\leq \\alpha \\hat{\\rho}$, with high probability, by standard Chernoff bounds. If $\\hat{\\rho} = o(n^{q} \\ln n)$, the lower bound in (\\ref{eq:rounding}) becomes trivial, because $\\rho \\geq 0$ and $o(n^{q} \\ln n) < \\alpha n^{q+\\delta}$, if $n$ is sufficiently large. As for the upper bound, we increase the coefficients $\\rho_j$ to $\\rho'_j \\in [0, (q+1)\\beta n^q]$, so that $\\hat{\\rho}' = \\Theta(n^{q} \\ln n)$. Then, the upper bound is shown as in the second part of the proof of Lemma~\\ref{l:sampling}.\n\nWe proceed to the formal proof. For simplicity of notation, we let $B = (q+1)\\beta n^q$ throughout the proof. For $j = 1, \\ldots, n$, we let $X_j = z_j \\rho_j \/ B$ be a random variable distributed in $[0, 1]$. Each $X_j$ independently takes the value $\\rho_{j} \/ B$, with probability $y_j$, and $0$, otherwise. We let $X = \\sum_{j = 1}^n X_j$ be the sum of these independent random variables. 
Then, $\\Exp[X] = \\hat{\\rho} \/ B$ and $X = \\sum_{j \\in N} z_j \\rho_j \/ B = \\rho\/B$.\n\nAs in Lemma~\\ref{l:sampling}, we distinguish between the case where $\\hat{\\rho} \\geq 3(k+1)B\\ln n\/\\alpha^2$ and the case where $\\hat{\\rho} < 3(k+1)B\\ln n\/\\alpha^2$.\nWe start with the case where $\\hat{\\rho} \\geq 3(k+1)B\\ln n\/\\alpha^2$. Then, by Chernoff bounds (we use the bound in footnote~\\ref{foot:chernoff2}),\n\\[\n \\Prob[|X - \\Exp[X]| > \\alpha \\Exp[X]]\n \\leq 2\\exp\\!\\left(-\\frac{\\alpha^2 \\hat{\\rho} }{3 B }\\right)\n \\leq 2\\exp(-(k+1)\\ln n) \\leq 2\/n^{k+1}\\,,\n\\]\nwhere we use that $\\hat{\\rho} \\geq 3(k+1)B\\ln n\/\\alpha^2$. Therefore, with probability at least $1 - 2\/n^{k+1}$,\n\\[\n (1-\\alpha) \\hat{\\rho} \/ B \\leq X \\leq (1+\\alpha) \\hat{\\rho} \/ B\n\\]\nMultiplying everything by $B$ and using that $X = \\rho \/ B$, we obtain that with probability at least $1 - 2\/n^{k+1}$, $(1-\\alpha) \\hat{\\rho} \\leq \\rho \\leq (1+\\alpha) \\hat{\\rho}$, which implies (\\ref{eq:rounding}).\n\nWe proceed to the case where $\\hat{\\rho} < 3(k+1)B\\ln n\/\\alpha^2$. Then, assuming that $n$ is large enough that $n^\\delta \/ \\ln n > 3(k+1)(q+1)\\beta \/ \\alpha^3$, we obtain that $(1-\\alpha)\\hat{\\rho} < (1-\\alpha) \\alpha n^{q+\\delta}$. Therefore, since $\\rho \\geq 0$, because $\\rho_j \\geq 0$, for all $j \\in N$, the lower bound of (\\ref{eq:rounding}) on $\\rho$ is trivial.\nFor the upper bound, we show that with probability at least $1-1\/n^{k+1}$, $\\rho \\leq (1+\\alpha) \\alpha n^{q+\\delta}$. 
To this end, we consider a sequence $(\\rho'_j)_{j \\in N}$ so that $\\rho_j \\leq \\rho'_j \\leq (q+1)\\beta n^q$, for all $j \\in N$, and\n\\[ \\hat{\\rho}' = \\sum_{j \\in N} \\rho'_{j} y_j = \\frac{3(k+1)B\\ln n}{\\alpha^2} \\]\nWe can obtain such a sequence by increasing an appropriate subset of $\\rho_j$ up to $(q+1)\\beta n^q$ (if $\\sum_{j \\in N} y_j$ is not large enough, we may also increase some $y_j$ up to $1$).\nFor the new sequence, we let $\\rho' = \\sum_{j \\in N} \\rho'_{j} z_j$ and observe that $\\rho \\leq \\rho'$, for any instantiation of the randomized rounding (if some $y_j$ are increased, the inequality below follows from a standard coupling argument).\nTherefore,\n\\[ \\Prob[\\rho > (1+\\alpha)\\alpha n^{q+\\delta}] \\leq\n \\Prob[\\rho' > (1+\\alpha)\\hat{\\rho}']\\,,\n\\]\nwhere we use that $\\hat{\\rho}' = 3(k+1)B\\ln n \/ \\alpha^2$ and that $\\alpha n^\\delta > 3(k+1)(q+1)\\beta \\ln n \/ \\alpha^2$, which holds if $n$ is sufficiently large.\nBy the choice of $\\hat{\\rho}'$, we can apply the same Chernoff bound as above and obtain that $\\Prob[\\rho' > (1+\\alpha)\\hat{\\rho}'] \\leq 1\/n^{k+1}$.\n\\qed\\end{proof}\nLemma~\\ref{l:rounding} implies that if the estimations $\\rho_j$ are non-negative, the rounded solution $\\vec{z}$ is almost feasible for ($d$-IP) with high probability. But, as in Section~\\ref{s:pip_sampling}, we need a generalization of Lemma~\\ref{l:rounding} that deals with both positive and negative estimations. To this end, we work as in the proof of Lemma~\\ref{l:sampling_gen}. Given a sequence of estimations $( \\rho_j )_{j \\in N}$, with $\\rho_j \\in [-(q+1)\\beta n^q, (q+1)\\beta n^q]$, we define $\\rho^+_j = \\max\\{\\rho_j, 0\\}$ and $\\rho^-_j = \\min\\{ \\rho_j, 0\\}$, for all $j \\in N$. Moreover, we let $\\rho^+ = \\sum_{j \\in N} \\rho^+_{j} z_j$, $\\hat{\\rho}^+ = \\sum_{j \\in N} \\rho^+_{j} y_j$, $\\rho^- = \\sum_{j \\in N} \\rho^-_{j} z_j$ and $\\hat{\\rho}^- = \\sum_{j \\in N} \\rho^-_{j} y_j$. 
Applying Lemma~\\ref{l:rounding}, once for positive estimations and once for negative estimations (with the absolute values of $\\rho_j^-$, $\\rho^-$ and $\\hat{\\rho}^-$, instead), we obtain that with probability at least $1 - 4\/n^{k+1}$,\n\\begin{eqnarray*}\n (1-\\alpha)\\hat{\\rho}^+ - (1-\\alpha)\\alpha n^{q+\\delta} \\leq & \\rho^+ & \\leq\n (1+\\alpha)\\hat{\\rho}^+ + (1+\\alpha)\\alpha n^{q+\\delta} \\\\\n (1+\\alpha)\\hat{\\rho}^- - (1+\\alpha)\\alpha n^{q+\\delta} \\leq & \\rho^- & \\leq\n (1-\\alpha)\\hat{\\rho}^- + (1-\\alpha)\\alpha n^{q+\\delta}\n\\end{eqnarray*}\nUsing that $\\rho = \\rho^+ + \\rho^-$ and that $\\hat{\\rho} = \\hat{\\rho}^+ + \\hat{\\rho}^-$, we obtain the following generalization of Lemma~\\ref{l:rounding}.\n\\begin{lemma}[Rounding Lemma]\\label{l:rounding_gen}\nLet $\\vec{y} \\in [0, 1]^n$ be any fractional vector and let $\\vec{z} \\in \\{0, 1\\}^n$ be an integral vector obtained from $\\vec{y}$ by randomized rounding. Also, let $( \\rho_j )_{j \\in N}$ be any sequence such that for some integer $q \\geq 0$ and some constant $\\beta \\geq 1$, $|\\rho_j| \\leq (q+1)\\beta n^q$, for all $j \\in N$. 
For all integers $k \\geq 1$ and for all constants $\\alpha, \\delta > 0$ (and assuming that $n$ is sufficiently large), if $\\rho = \\sum_{j \\in N} \\rho_{j} z_j$, $\\hat{\\rho} = \\sum_{j \\in N} \\rho_{j} y_j$ and $\\rb = \\sum_{j \\in N} |\\rho_j|$, with probability at least $1 - 4\/n^{k+1}$,\n\\begin{equation}\\label{eq:rounding_gen}\n \\hat{\\rho} - \\alpha\\rb - 2\\alpha n^{q+\\delta} \\leq \\rho \\leq\n \\hat{\\rho} + \\alpha\\rb + 2\\alpha n^{q+\\delta}\n\\end{equation}\n\\end{lemma}\nFor all constants $\\e_1, \\e_2 > 0$ and all constants $c$, we can use Lemma~\\ref{l:rounding_gen} with $\\alpha = \\min\\{\\e_1, \\e_2\/2\\}$ and obtain that for all integers $k \\geq 1$, with probability at least $1 - 4\/n^{k+1}$, the following holds for the binary vector $\\vec{z}$ obtained from a fractional vector $\\vec{y}$ by randomized rounding.\n\\begin{equation}\\label{eq:pip_rounding2}\n c + \\sum_{j \\in N} y_j \\rho_j -\n \\e_1 \\overbrace{\\sum_{j \\in N} |\\rho_j|}^{\\rb} - \\e_2 n^{q+\\delta} \\leq\n c + \\sum_{j \\in N} z_j \\rho_j\n \\leq c + \\sum_{j \\in N} y_j \\rho_j +\n \\e_1 \\overbrace{\\sum_{j \\in N} |\\rho_j|}^{\\rb} + \\e_2 n^{q+\\delta}\n\\end{equation}\nUsing (\\ref{eq:pip_rounding2}) with $k = 2(d+1)$, the fact that $\\vec{y}^\\ast$ is a feasible solution to ($d$-LP), and the fact that ($d$-LP) has at most $2n^{d-1}$ constraints, we obtain that $\\vec{z}$ is an almost feasible solution to ($d$-IP) with high probability. 
Namely, with probability at least $1-8\/n^{d+4}$, the integral vector $\\vec{z}$ obtained from the fractional optimum $\\vec{y}^\\ast$ by randomized rounding satisfies the following system of inequalities for all levels $\\ell \\geq 1$ and all tuples $(i_1, \\ldots, i_{d-\\ell}) \\in N^{d-\\ell}$ (for each level $\\ell \\geq 1$, we use $q = \\ell - 1$, since $|\\rho_{i_1\\ldots i_{d-\\ell}j}| \\leq \\ell \\beta n^{\\ell-1}$ for all $j \\in N$).\n\\begin{equation}\\label{eq:pip_deviation}\n c_{i_1\\ldots i_{d-\\ell}} + \\sum_{j \\in N} z_j \\rho_{i_1\\ldots i_{d-\\ell}j} \\in\n \\rho_{i_1\\ldots i_{d-\\ell}} \\pm 2\\e_1 \\rb_{i_1\\ldots i_{d-\\ell}}\n \\pm 2\\e_2 n^{\\ell-1+\\delta}\n\\end{equation}\nHaving established that $\\vec{z}$ is an almost feasible solution to ($d$-IP), with high probability, we proceed as in Section~\\ref{s:cut_rounding}. By linearity of expectation, $\\Exp[ \\sum_{j \\in N} z_j \\rho_j ] = \\sum_{j \\in N} y^\\ast_j \\rho_j$. Moreover, the probability that $\\vec{z}$ does not satisfy (\\ref{eq:pip_deviation}) for some level $\\ell \\geq 1$ and some tuple $(i_1, \\ldots, i_{d-\\ell}) \\in N^{d-\\ell}$ is at most $8\/n^{d+4}$ and the objective value of ($d$-IP) is at most $2(d+1)\\beta n^d$, because, due to the $\\beta$-smoothness property of $p(\\vec{x})$, $|p(\\vec{x}^\\ast)| \\leq (d+1)\\beta n^d$. Therefore, the expected value of a rounded solution $\\vec{z}$ that satisfies the family of inequalities (\\ref{eq:pip_deviation}) for all levels and tuples is at least $\\sum_{j \\in N} y^\\ast_j \\rho_j - 1$ (assuming that $n$ is sufficiently large). Using the method of conditional expectations, as in \\cite{Rag88}, we can find in (deterministic) polynomial time an integral solution $\\vec{z}$ that satisfies the family of inequalities (\\ref{eq:pip_deviation}) for all levels and tuples and has $c + \\sum_{j \\in N} z_j \\rho_j \\geq c-1+\\sum_{j \\in N} y^\\ast_j \\rho_j$. 
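The rounding step itself is standard; the following toy sketch (hypothetical Python with made-up data, not the paper's code) illustrates why a linear form evaluated at the rounded vector z stays close to its value at the fractional vector y:

```python
import random

# Toy check of randomized rounding (made-up data): each z_j is set to 1 with
# probability y_j independently, so E[sum_j rho_j z_j] = sum_j rho_j y_j and,
# by Chernoff bounds, the rounded sum concentrates around the fractional one.
random.seed(0)
n = 100_000
y = [random.random() for _ in range(n)]     # a fractional solution in [0,1]^n
rho = [random.random() for _ in range(n)]   # non-negative coefficients (q = 0)

fractional = sum(r_j * y_j for r_j, y_j in zip(rho, y))  # hat_rho
z = [1 if random.random() < y_j else 0 for y_j in y]     # randomized rounding
rounded = sum(r_j * z_j for r_j, z_j in zip(rho, z))     # rho

# The deviation is far below any constant fraction of hat_rho here.
assert abs(rounded - fractional) < 0.05 * fractional
```

In the actual algorithm the rounded vector is then derandomized by the method of conditional expectations, so the guarantee becomes deterministic.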
As in Section~\\ref{s:cut_rounding}, we sometimes abuse the notation and refer to such an integral solution $\\vec{z}$ (computed deterministically) as the integral solution obtained from $\\vec{y}^\\ast$ by randomized rounding.\n\nThe following lemmas are similar to Lemma~\\ref{l:approx} and Lemma~\\ref{l:approx_gen}. They use the notion of cumulative absolute value estimations and show that the objective value $p(\\vec{z})$ of the rounded solution $\\vec{z}$ is close to the optimal value of ($d$-LP).\n\\begin{lemma}\\label{l:approx2}\nLet $\\vec{y}^\\ast$ be an optimal solution of ($d$-LP) and let $\\vec{z}$ be the integral solution obtained from $\\vec{y}^\\ast$ by randomized rounding (and the method of conditional expectations). Then, for any level $\\ell \\geq 1$ in the decomposition of $p(\\vec{x})$ and any tuple $(i_1, \\ldots, i_{d-\\ell}) \\in N^{d-\\ell}$,\n\\begin{equation}\\label{eq:approx2}\n p_{i_1\\ldots i_{d-\\ell}}(\\vec{z}) \\in\n \\rho_{i_1\\ldots i_{d-\\ell}} \\pm 2\\e_1 \\tb_{i_1\\ldots i_{d-\\ell}}\n \\pm 2\\ell \\e_2 n^{\\ell-1+\\delta}\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nThe proof is by induction on the degree $\\ell$ and similar to the proof of Lemma~\\ref{l:approx}. The basis, for $\\ell=1$, is trivial, because in the decomposition of $p(\\vec{x})$, each $p_{i_1\\ldots i_{d}}(\\vec{x})$ is a constant $c_{i_1\\ldots i_{d}}$\\,. Therefore, $\\rho_{i_1\\ldots i_{d}} = c_{i_1\\ldots i_{d}}$ and\n\\[ p_{i_1\\ldots i_{d-1}}(\\vec{z}) =\n c_{i_1\\ldots i_{d-1}} + \\sum_{j \\in N} z_j p_{i_1\\ldots i_{d-1}j}(\\vec{z})\n = c_{i_1\\ldots i_{d-1}} + \\sum_{j \\in N} z_j c_{i_1\\ldots i_{d-1}j}\n \\in \\rho_{i_1\\ldots i_{d-1}} \\pm 2\\e_1 \\tb_{i_1\\ldots i_{d-1}} \\pm 2\\e_2 n^{\\delta}\\,,\n \\]\nwhere the inclusion follows from the approximate feasibility of $\\vec{z}$ for ($d$-IP), as expressed by (\\ref{eq:pip_deviation}). 
We also use that at level $\\ell = 1$, $\\tb_{i_1\\ldots i_{d-1}} = \\rb_{i_1\\ldots i_{d-1}}$.\n\nWe inductively assume that (\\ref{eq:approx2}) is true for the values of all degree-$(\\ell-1)$ polynomials $p_{i_1\\ldots i_{d-\\ell}j}$ at $\\vec{z}$ and establish the lemma for $p_{i_1\\ldots i_{d-\\ell}}(\\vec{z}) = c_{i_1\\ldots i_{d-\\ell}} + \\sum_{j \\in N} z_j p_{i_1\\ldots i_{d-\\ell}j}(\\vec{z})$. We have that:\n\\begin{align*}\n p_{i_1\\ldots i_{d-\\ell}}(\\vec{z}) & =\n c_{i_1\\ldots i_{d-\\ell}} + \\sum_{j \\in N} z_j p_{i_1\\ldots i_{d-\\ell}j}(\\vec{z}) \\\\\n & \\in\n c_{i_1\\ldots i_{d-\\ell}} + \\sum_{j \\in N} z_j \\left( \\rho_{i_1\\ldots i_{d-\\ell}j}\n \\pm 2 \\e_1 \\tb_{i_1\\ldots i_{d-\\ell}j}\n \\pm 2(\\ell-1) \\e_2 n^{\\ell-2+\\delta} \\right)\\\\\n &= \\left(c_{i_1\\ldots i_{d-\\ell}} +\n \\sum_{j \\in N} z_j \\rho_{i_1\\ldots i_{d-\\ell}j} \\right)\n \\pm 2 \\e_1 \\sum_{j \\in N} z_j \\tb_{i_1\\ldots i_{d-\\ell}j}\n \\pm 2 (\\ell-1) \\e_2 \\sum_{j \\in N} z_j n^{\\ell-2+\\delta} \\\\\n &\\in \\left(\\rho_{i_1\\ldots i_{d-\\ell}}\n \\pm 2 \\e_1 \\rb_{i_1\\ldots i_{d-\\ell}}\n \\pm 2 \\e_2 n^{\\ell-1+\\delta}\\right)\n \\pm 2 \\e_1 \\sum_{j \\in N} \\tb_{i_1\\ldots i_{d-\\ell}j}\n \\pm 2 (\\ell-1) \\e_2 n^{\\ell-1+\\delta}\\\\\n &\\in \\rho_{i_1\\ldots i_{d-\\ell}} \\pm 2 \\e_1 \\tb_{i_1\\ldots i_{d-\\ell}}\n \\pm 2 \\ell \\e_2 n^{\\ell-1+\\delta}\n\\end{align*}\nThe first inclusion holds by the induction hypothesis. The second inclusion holds because: (i) $\\vec{z}$ is an approximately feasible solution to ($d$-IP) and thus,\n$c_{i_1\\ldots i_{d-\\ell}} + \\sum_{j \\in N} z_j \\rho_{i_1\\ldots i_{d-\\ell}j}$\nsatisfies (\\ref{eq:pip_deviation}); (ii) $\\sum_{j \\in N} z_j \\tb_{i_1\\ldots i_{d-\\ell}j} \\leq \\sum_{j \\in N} \\tb_{i_1\\ldots i_{d-\\ell}j}$; and (iii) $\\sum_{j \\in N} z_j \\leq n$. 
The last inclusion holds because $\\tb_{i_1\\ldots i_{d-\\ell}} = \\rb_{i_1\\ldots i_{d-\\ell}} + \\sum_{j \\in N} \\tb_{i_1\\ldots i_{d-\\ell}j}$, by the definition of cumulative absolute value estimations.\n\\qed\\end{proof}\n\\begin{lemma}\\label{l:approx2_gen}\nLet $\\vec{y}^\\ast$ be an optimal solution of ($d$-LP) and let $\\vec{z}$ be the integral solution obtained from $\\vec{y}^\\ast$ by randomized rounding (and the method of conditional expectations). Then,\n\\begin{equation}\\label{eq:approx2_gen}\n p(\\vec{z}) \\in c + \\sum_{j \\in N} z_j \\rho_j\n \\pm 2\\e_1 \\sum_{j \\in N} \\tb_{j}\n \\pm 2 (d-1) \\e_2 n^{d-1+\\delta}\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nBy Lemma~\\ref{l:approx2}, for any polynomial $p_j$ appearing in the decomposition of $p(\\vec{x})$, we have that $p_j(\\vec{z}) \\in \\rho_j \\pm 2 \\e_1 \\tb_j \\pm 2 (d-1) \\e_2 n^{d-2+\\delta}$. Therefore,\n\\begin{align*}\n p(\\vec{z}) = c + \\sum_{j \\in N} z_j p_j(\\vec{z}) & \\in\n c + \\sum_{j \\in N} z_j \\left( \\rho_j \\pm 2 \\e_1 \\tb_j\n \\pm 2 (d-1) \\e_2 n^{d-2+\\delta} \\right)\\\\\n &= c + \\sum_{j \\in N} z_j \\rho_j\n \\pm 2 \\e_1 \\sum_{j \\in N} z_j \\tb_j\n \\pm 2 (d-1) \\e_2 \\sum_{j \\in N} z_j n^{d-2+\\delta} \\\\\n &\\in c + \\sum_{j \\in N} z_j \\rho_j\n \\pm 2 \\e_1 \\sum_{j \\in N} \\tb_j\n \\pm 2 (d-1) \\e_2 n^{d-1+\\delta}\n\\end{align*}\nThe second inclusion holds because $z_j \\in \\{ 0,1\\}$ and $\\sum_{j \\in N} z_j \\leq n$.\n\\qed\\end{proof}\n\n\\input{estimations}\n\n\\subsection{The Final Algorithmic Result}\\label{s:pip_together}\n\nWe are ready now to conclude this section with the following theorem.\n\\begin{theorem}\\label{th:pip_scheme}\nLet $p(\\vec{x})$ be an $n$-variate degree-$d$ $\\beta$-smooth $\\delta$-bounded polynomial. 
Then, for any $\\eps > 0$, we can compute, in time $2^{O(d^7 \\beta^3 n^{1-\\delta} \\ln n\/\\eps^3)}$ and with probability at least $1-8\/n^2$, a binary vector $\\vec{z}$ so that $p(\\vec{z}) \\geq p(\\vec{x}^\\ast) - \\eps n^{d-1+\\delta}$, where $\\vec{x}^\\ast$ is the maximizer of $p(\\vec{x})$.\n\\end{theorem}\n\\begin{proof}\nBased on the discussion in this section, for any constant $\\eps > 0$, if $p(\\vec{x})$ is an $n$-variate degree-$d$ $\\beta$-smooth $\\delta$-bounded polynomial, the algorithm described in the previous sections computes an integral solution $\\vec{z}$ that approximately maximizes $p(\\vec{x})$. Specifically, setting $\\e_1 = \\eps\/(4 d(d-1)\\beta)$ and $\\e_2 = \\eps\/(8(d-1))$, $p(\\vec{z})$ satisfies the following with probability at least $1-8\/n^2$\\,:\n\\begin{eqnarray*}\n p(\\vec{z}) & \\geq & \\left(c + \\sum_{j \\in N} y^\\ast_j \\rho_j\\right)\n - \\frac{\\eps}{2d(d-1)\\beta} \\sum_{j \\in N} \\tb_{j}\n - \\eps n^{d-1+\\delta} \/ 4\\\\\n & \\geq & \\left(c + \\sum_{j \\in N} y^\\ast_j \\rho_j\\right) -\n \\eps n^{d-1+\\delta} \/ 2\\\\\n & \\geq & \\left(c + \\sum_{j \\in N} x_j^\\ast \\rho_j\\right) -\n \\eps n^{d-1+\\delta} \/ 2\\\\\n & \\geq & \\left(p(\\vec{x}^\\ast) - \\frac{\\eps}{4d(d-1)\\beta} \\sum_{j \\in N} \\tb_{j}\n - \\eps n^{d-1+\\delta} \/ 8\\right) -\n \\eps n^{d-1+\\delta} \/ 2\\\\\n & \\geq & p(\\vec{x}^\\ast) - \\eps n^{d-1+\\delta}\n\\end{eqnarray*}\nThe first inequality follows from Lemma~\\ref{l:approx2_gen}. The second inequality follows from the hypothesis that $p(\\vec{x})$ is $\\beta$-smooth and $\\delta$-bounded. Then Lemma~\\ref{l:cum_est} implies that $\\sum_{j \\in N} \\tb_{j} \\leq \\frac{d(d-1)}{2}\\beta n^{d-1+\\delta}$\\,. As in Section~\\ref{s:values}, we assume that the constant hidden in the definition of $p(\\vec{x})$ as a $\\delta$-bounded polynomial is $1$. If this constant is some $\\kappa\\geq 1$, we should also divide $\\e_1$ by $\\kappa$. 
The third inequality holds because $\\vec{y}^\\ast$ is an optimal solution to ($d$-LP) and $\\vec{x}^\\ast$ is a feasible solution to ($d$-LP). The fourth inequality follows from Lemma~\\ref{l:approx_gen}. For the last inequality, we again use Lemma~\\ref{l:cum_est}. This concludes the proof of Theorem~\\ref{th:pip_scheme}.\\qed\\end{proof}\n\n\\section{Notation and Preliminaries}\n\\label{s:prelim}\n\nAn $n$-variate degree-$d$ polynomial $p(\\vec{x})$ is \\emph{$\\beta$-smooth} \\cite{AKK99}, for some constant $\\beta \\geq 1$, if for every $\\ell \\in \\{ 0, \\ldots, d\\}$, the absolute value of each coefficient of each degree-$\\ell$ monomial in the expansion of $p(\\vec{x})$ is at most $\\beta n^{d - \\ell}$.\nAn $n$-variate degree-$d$ $\\beta$-smooth polynomial $p(\\vec{x})$ is \\emph{$\\delta$-bounded}, for some constant $\\delta \\in (0, 1]$, if for every $\\ell$, the sum, over all degree-$\\ell$ monomials in $p(\\vec{x})$, of the absolute values of their coefficients is $O(\\beta n^{d-1+\\delta})$. 
Therefore, for any $n$-variate degree-$d$ $\\beta$-smooth $\\delta$-bounded polynomial $p(\\vec{x})$ and any $\\vec{x} \\in \\{ 0, 1\\}^n$, $|p(\\vec{x})| = O(d \\beta n^{d-1+\\delta})$.\n\nThroughout this work, we treat $\\beta$, $\\delta$ and $d$ as fixed constants and express the running time of our algorithm as a function of $n$, i.e., the number of variables in $p(\\vec{x})$.\n\n\\noindent{\\bf Optimization Problem.}\nOur approximation schemes for almost sparse instances of \\MC, \\kSAT, and \\kCSP\\ are obtained by reducing them to the following problem: Given an $n$-variate $d$-degree $\\beta$-smooth $\\delta$-bounded polynomial $p(\\vec{x})$, we seek a binary vector $\\vec{x}^\\ast \\in \\{0, 1\\}^n$ that maximizes $p$, i.e., for all binary vectors $\\vec{y} \\in \\{0, 1\\}^n$, $p(\\vec{x}^\\ast) \\geq p(\\vec{y})$.\n\n\\noindent{\\bf Polynomial Decomposition and General Approach.}\nAs in \\cite[Lemma~3.1]{AKK99}, our general approach is motivated by the fact that any $n$-variate $d$-degree $\\beta$-smooth polynomial $p(\\vec{x})$ can be naturally decomposed into a collection of $n$ polynomials $p_j(\\vec{x})$. Each of them has degree $d-1$ and at most $n$ variables and is $\\beta$-smooth.\n\\begin{proposition}[\\cite{AKK99}]\\label{pr:decomposition}\nLet $p(\\vec{x})$ be any $n$-variate degree-$d$ $\\beta$-smooth polynomial. Then, there exist a constant $c$ and degree-$(d-1)$ $\\beta$-smooth polynomials $p_j(\\vec{x})$ such that\n\\( p(\\vec{x}) = c + \\sum_{j = 1}^n x_j p_j(\\vec{x}) \\).\n\\end{proposition}\n\\begin{proof} The proposition is shown in \\cite[Lemma~3.1]{AKK99}. We prove it\nhere just for completeness. Each polynomial $p_j(\\vec{x})$ is obtained from\n$p(\\vec{x})$ if we keep only the monomials with variable $x_j$ and pull $x_j$\nout, as a common factor. The constant $c$ takes care of the constant term in\n$p(\\vec{x})$. 
Each monomial of degree $\\ell$ in $p(\\vec{x})$ becomes a monomial\nof degree $\\ell-1$ in $p_j(\\vec{x})$, which implies that the degree of\n$p_j(\\vec{x})$ is $d-1$. Moreover, by the $\\beta$-smoothness condition, the\ncoefficient $t$ of each degree-$\\ell$ monomial in $p(\\vec{x})$ has $|t| \\leq\n\\beta n^{d - \\ell}$. The corresponding monomial in $p_j(\\vec{x})$ has degree\n$\\ell-1$ and the same coefficient $t$ with $|t| \\leq \\beta n^{d - 1 -\n(\\ell-1)}$. Therefore, if $p(\\vec{x})$ is $\\beta$-smooth, each $p_j(\\vec{x})$\nis also $\\beta$-smooth. \\qed\\end{proof}\n\\noindent{\\bf Graph Optimization Problems.}\nLet $G(V, E)$ be a (simple) graph with $n$ vertices and $m$ edges. For each vertex $i \\in V$, $N(i)$ denotes $i$'s neighborhood in $G$, i.e., $N(i) = \\{ j \\in V: \\{i, j\\} \\in E\\}$. We let $\\deg(i) = |N(i)|$ be the degree of $i$ in $G$ and $\\Delta = 2|E|\/n$ denote the average degree of $G$.\nWe say that a graph $G$ is \\emph{$\\delta$-almost sparse}, for some constant $\\delta \\in (0, 1]$, if $m = \\Omega(n^{1+\\delta})$ (and thus, $\\Delta = \\Omega(n^\\delta)$).\n\nIn \\MC, we seek a partitioning of the vertices of $G$ into two sets $S_0$ and\n$S_1$ so that the number of edges with endpoints in $S_0$ and $S_1$ is\nmaximized. If $G$ has $m$ edges, the number of edges in the optimal cut is at\nleast $m\/2$.\n\nIn \\kDense, given an undirected graph $G(V, E)$, we seek a subset $C$ of $k$ vertices so that the induced subgraph $G[C]$ has a maximum number of edges. \n\n\\noindent{\\bf Constraint Satisfaction Problems.}\nAn instance of (boolean) \\kCSP\\ with $n$ variables consists of $m$ boolean constraints $f_1, \\ldots, f_m$, where each $f_j : \\{ 0, 1\\}^k \\to \\{0, 1\\}$ depends on $k$ variables and is satisfiable, i.e., $f_j$ evaluates to $1$ for some truth assignment. We seek a truth assignment to the variables that maximizes the number of satisfied constraints. 
\\kSAT\\ is a special case of \\kCSP\\ where each constraint $f_j$ is a disjunction of $k$ literals. An averaging argument implies that the optimal assignment of a \\kCSP\\ (resp. \\kSAT) instance with $m$ constraints satisfies at least $2^{-k} m$ (resp. $(1-2^{-k})m$) of them. We say that an instance of \\kCSP\\ is \\emph{$\\delta$-almost sparse}, for some constant $\\delta \\in (0, 1]$, if the number of constraints is $m = \\Omega(n^{k-1+\\delta})$.\n\nUsing standard arithmetization techniques (see e.g., \\cite[Sec.~4.3]{AKK99}), we can reduce any instance of \\kCSP\\ with $n$ variables to an $n$-variate degree-$k$ polynomial $p(\\vec{x})$ so that the optimal truth assignment for \\kCSP\\ corresponds to a maximizer $\\vec{x}^\\ast \\in \\{0, 1\\}^n$ of $p(\\vec{x})$ and the value of the optimal \\kCSP\\ solution is equal to $p(\\vec{x}^\\ast)$. Since each $k$-tuple of variables can appear in at most $2^k$ different constraints, $p(\\vec{x})$ is $\\beta$-smooth, for $\\beta \\in [1, 4^k]$, and has at least $m$ and at most $4^k m$ monomials. 
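To make the arithmetization step concrete, the following is a minimal sketch on a toy Max-2-SAT instance (the instance, function names, and brute-force maximization are ours for illustration; only the clause encoding follows the standard construction): each $k$-clause becomes the degree-$k$ indicator polynomial $1 - \prod(1-\mathrm{lit})$, and summing over clauses gives a $p(\vec{x})$ whose value on a binary vector equals the number of satisfied constraints.

```python
from itertools import product

def clause_poly(clause):
    """Indicator polynomial of a clause, evaluated at a 0/1 vector x.

    clause is a tuple of signed 1-based variable indices, e.g. (1, -2)
    encodes (x_1 OR NOT x_2). The clause fails only when every literal
    is 0, so its value is 1 - prod(1 - literal).
    """
    def value(x):
        unsat = 1
        for lit in clause:
            lit_val = x[abs(lit) - 1] if lit > 0 else 1 - x[abs(lit) - 1]
            unsat *= 1 - lit_val
        return 1 - unsat
    return value

def p(x, clauses):
    # Degree-k polynomial whose value equals the number of satisfied clauses.
    return sum(clause_poly(c)(x) for c in clauses)

clauses = [(1, -2), (2, 3), (-1, 3)]          # toy Max-2-SAT instance
best = max(product((0, 1), repeat=3), key=lambda x: p(x, clauses))
```

Here exhaustive search stands in for the LP-plus-rounding scheme, since the point is only the reduction from constraints to a smooth polynomial.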
Moreover, if the instance of \\kCSP\\ has $m = \\Theta(n^{k-1+\\delta})$ constraints, then $p(\\vec{x})$ is $\\delta$-bounded and its maximizer $\\vec{x}^\\ast$ has $p(\\vec{x}^\\ast) = \\Omega(n^{k-1+\\delta})$.\n\n\\noindent{\\bf Notation and Terminology.}\nAn algorithm has \\emph{approximation ratio} $\\rho \\in (0, 1]$ (or is \\emph{$\\rho$-approximate}) if for all instances, the value of its solution is at least $\\rho$ times the value of the optimal solution.\n\nFor graphs with $n$ vertices or CSPs with $n$ variables, we say that an event $E$ happens with high probability (or whp.), if $E$ happens with probability at least $1-1\/n^c$, for some constant $c \\geq 1$.\n\nFor brevity and clarity, we sometimes write $\\alpha \\in (1\\pm \\e_1) \\beta \\pm \\e_2 \\gamma$, for some constants $\\e_1, \\e_2 > 0$, to denote that $(1-\\e_1)\\beta - \\e_2 \\gamma \\leq \\alpha \\leq (1+\\e_1)\\beta + \\e_2 \\gamma$.\n\n\\endinput\n\nWe note that any (unweighted) binary constraint satisfaction problem with at most $d$ variables per constraint can be cast in the framework of smooth polynomial maximization. 
Several classical optimization problems, such as \\MC, {\\sc Max}-DICUT, \\kSAT, and {\\sc Max}-$k$-{\\sc Densest Subgraph}, reduce to smooth polynomial maximization (possibly under linear constraints).\n\n\n\nApplying the same idea recursively, we can further decompose each $(d-1)$-degree polynomial $p_{i_1}(\\vec{x})$ into $n$ $(d-2)$-degree polynomials $p_{i_1 j}(\\vec{x})$ such that\n$p_{i_1}(\\vec{x}) = c_{i_1} + \\sum_{j \\in N} x_j p_{i_1 j}(\\vec{x})$, etc.\nAt the base of the recursion, we have polynomials $p_{i_1\\ldots i_{d-1}}(\\vec{x})$ of degree $1$, one for each $(d-1)$-tuple of indices $(i_1, \\ldots, i_{d-1}) \\in N^{d-1}$, which can be written as\n\\[ p_{i_1\\ldots i_{d-1}}(\\vec{x}) = c_{i_1\\ldots i_{d-1}} +\n \\sum_{j \\in N} x_j c_{i_1\\ldots i_{d-1} j}\\,,\n\\]\nwhere $c_{i_1\\ldots i_{d-1} j}$ are constants. Due to $\\beta$-smoothness, $|c_{i_1\\ldots i_{d-1} j}| \\leq \\beta$ and $|c_{i_1\\ldots i_{d-1}}| \\leq \\beta n$. Inductively, $\\beta$-smoothness implies that each polynomial $p_{i_1\\ldots i_{d-\\ell}}(\\vec{x})$ of degree $\\ell \\geq 1$ in this decomposition has $|p_{i_1\\ldots i_{d-\\ell}}(\\vec{x})| \\leq \\ell \\beta n^{\\ell}$ and $|c_{i_1\\ldots i_{d-\\ell}}+p_{i_1\\ldots i_{d-\\ell}}(\\vec{x})| \\leq (\\ell+1)\\beta n^{\\ell}$, for all vectors $\\vec{x} \\in \\{0, 1\\}^n$. This decomposition can be performed in a unique way if we insist that $i_1 < i_2 < \\cdots < i_{d-1}$, but this is not relevant for our analysis. \n\n\\section{Conclusions and Directions for Further Research}\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
Under this scenario, it is necessary to effectively share memory between the host and NDAs. In this section, we identify opportunities to better utilize internal rank bandwidth compared to prior approaches and discuss four problems that we solve to utilize that bandwidth.\n\n\\subsection{Prior Approaches}\n\\label{subsec:prior_work}\n\nPrior work \\cite{farmahini2015nda} proposes two ways to share memory between the host and NDAs. First, the ownership of each rank is ping-ponged between the host and NDAs in a coarse-grain manner. Before ownership is exchanged, all the banks are precharged so that the next owner can start accessing memory from the initialized state. Since warming up memory takes time, ownership transitioning should be done in coarse granularity to amortize this overhead. However, coarse-grain ownership switching results in halving the performance of both owners compared to their ideal performance. \n\nThe second way is to partition ranks into two groups with each processor having exclusive ownership over one group of ranks. This approach eliminates the source of contention and ownership switching overhead. However, a large portion of memory capacity should be assigned to NDAs and the potential bandwidth gain of NDAs is limited by the number of ranks dedicated to NDAs. \n\n\\subsection{Opportunity: Rank Idle Periods}\n\\label{subsec:opportunities}\nBecause multiple ranks share the command and data buses within each channel, the host can access only one rank at a time per channel. In addition, to avoid rank switching penalty, memory controllers tend to minimize rank interleaving. As a result, ranks are often not accessed by the host for certain periods of time. \\fig{fig:motiv_rank_idle} shows the bandwidth utilization of rank internal buses when only host programs are executed. Our application mixes and the baseline configuration are summarized in Table \\ref{tab:eval_config}. Overall, about 60\\% of the internal-bus bandwidth is unused. 
However, the majority of idle periods are just $10$--$250$ cycles. \n\nBy opportunistically issuing NDA memory commands in these idle periods, we can better utilize internal rank bandwidth. Compared to the prior approaches, \\textit{fine-grain interleaving of NDA access minimally impacts the performance and memory capacity of the host while maximizing the utilization of rank bandwidth}. \n\n\\begin{figure}[t!]\n\\centering\n\t\\includegraphics[width=0.48\\textwidth]{fig\/motiv_rank_idle.pdf}\n\t\\caption{Rank idle-time breakdown vs. idleness granularity.}\n\t\\label{fig:motiv_rank_idle}\n\t\\vspace*{-4mm}\n\\end{figure}\n\n\\subsection{Challenge 1: Fine-Grained Access Interleaving}\n\\label{subsec:challenges}\nTo opportunistically issue NDA commands, mechanisms for fine-grain mode switching are required, raising the following challenges. \n\n\\medskip\n\\noindent\\textbf{\\textit{Extra Bank Conflicts.}}\nSince the host and NDAs share banks, fine-grain access interleaving is likely to cause additional bank conflicts. Opening and closing rows incur overhead and hinder utilization of rank bandwidth within short idle periods. \n\n\\medskip\n\\noindent\\textbf{\\textit{Read\/Write Turnaround Time.}}\nAs discussed in Section \\ref{sec:background}, interleaving read and write operations to the same rank incurs extra overhead compared to issuing the same command type back to back \\cite{stuecheli2010virtual}. The host mitigates this overhead by buffering operations with caches and write buffers. However, without coordination, the host and NDAs may issue different types of transactions, which are then interleaved if both host and NDA run in parallel. Therefore, we need a mechanism that throttles NDA write transactions when needed and allows issuing them when the rank is idle for a long enough time. 
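One way to realize such throttling can be sketched with a toy model (this is our own illustrative pseudologic, not the actual scheduler of this work): the NDA buffers its writes and, in each scheduling step in which the rank is idle, drains one only with a small fixed probability, bounding how often a host read can land right after an NDA write.

```python
import random

def nda_write_step(queued_writes, rank_idle, issue_prob, rng):
    """One scheduling step of probabilistic NDA write throttling.

    queued_writes: number of buffered NDA write transactions.
    rank_idle:     True if the host is not accessing the rank this step.
    issue_prob:    drain probability per step (e.g. 1/4 or 1/16).
    Returns 1 if a buffered write is issued this step, else 0.
    """
    if queued_writes > 0 and rank_idle and rng.random() < issue_prob:
        return 1
    return 0

# Over many idle steps, roughly an issue_prob fraction of them carry an
# NDA write, so write bursts rarely collide with host reads.
rng = random.Random(0)
issued = sum(nda_write_step(8, True, 1 / 4, rng) for _ in range(10000))
```

Lowering `issue_prob` trades NDA write bandwidth for host read latency, which is the tradeoff this mechanism must navigate.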
\n\n\\medskip\n\\noindent\\textbf{\\textit{Overhead of State Coordination.}}\nFor DIMM-type DRAM devices, each NDA needs its own memory controller to allow NDAs to utilize the untapped bandwidth. This results in two memory controllers managing the bank and timing state of each rank. To synchronize the state between the two memory controllers, prior work adopts the precharge-all (PREA) mechanism. However, fine-grain mode switching will incur significant overhead not only because of the PREA command overhead itself but also because of the warm-up time required after mode switching. \n\n\n\\medskip\n\\subsection{Challenge 2: Unified Data Layout for Collaboration}\nWhen either just the host or just the NDAs own a rank for a fairly long time, we can customize the data layout and address mapping for each processor, possibly copying and laying out data differently when switching access modes. However, concurrent NDA and host access to the same data requires a single data layout and address mapping that works well for both the host and NDAs at the same time. Otherwise, two copies of the data with different layouts are necessary, incurring high capacity overhead. \n\n\n\n\n\\section{Background}\n\\label{sec:background}\n\n\n\\smallskip\n\\noindent\\textbf{\\textit{DRAM Basics.}}\nA memory system is composed of memory channels that operate independently. In each memory channel, one or more memory modules (DIMMs) share a command\/address (C\/A) bus and a data bus. A DIMM is usually composed of one or two physical ranks, where all chips in the same rank operate together. Each chip, and thus each rank, is composed of multiple banks whose states are independent. Each bank is either open or closed and, if open, exactly one row is open. To access a certain row, the target row must be opened first. If another row is already open, it must be closed before the target row is opened, which is called a \\textit{bank conflict} and increases access latency. 
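The cost asymmetry between a row-buffer hit and a bank conflict can be sketched with a toy open-row model (the timing constants are illustrative placeholders of ours, not values from any DRAM datasheet):

```python
T_CAS = 15        # column access on an already-open row (illustrative)
T_CONFLICT = 40   # close (precharge) the open row, then activate the target

class Bank:
    """Minimal open-row bank state: at most one row open at a time."""
    def __init__(self):
        self.open_row = None                      # closed state

    def access(self, row):
        if self.open_row == row:                  # row-buffer hit
            return T_CAS
        # Bank conflict: an open row must be closed before the target row
        # is opened; a closed bank pays only the activate (modeled as half).
        penalty = T_CONFLICT if self.open_row is not None else T_CONFLICT // 2
        self.open_row = row
        return T_CAS + penalty

bank = Bank()
latencies = [bank.access(r) for r in (3, 3, 7)]   # open, hit, conflict
```

The hit is cheapest and the conflict most expensive, which is why fine-grain interleaving of host and NDA accesses to shared banks is costly.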
The DRAM standard specifies the timing parameters and the protocol for accessing DRAM. These are managed by a per-channel memory controller.\n\n\\smallskip\n\\noindent\\textbf{\\textit{Address Mapping.}}\nThe memory controller translates OS-managed physical addresses into DRAM addresses, which are composed of indices to channel, rank, bank, row, and column. Typically, memory controllers apply the following policies in their address mapping to minimize access latency: interleaving addresses across channels at fine granularity is beneficial since channels can be accessed independently of each other. On the other hand, ranks are interleaved at coarse granularity since switching to other ranks in the same channel incurs a penalty. In addition, XOR-based hash mapping functions are used when determining channel, rank, and bank addresses to maximally exploit bank-level parallelism. This also minimizes bank conflicts when multiple rows are accessed with the same access pattern since the hash function shuffles the bank address order \\cite{zhang2000permutation}. To accomplish this, some row address bits are used along with channel, rank, and bank address bits~\\cite{pessl2015reverse}.\n\n\\smallskip\n\\noindent\\textbf{\\textit{Write-to-Read Turnaround Time.}}\nIn general, interleaving read and write DRAM transactions incurs higher latency than issuing the same transaction type back to back. Issuing a read transaction immediately following a write suffers from a particularly high penalty. The memory controller issues the write command and loads data onto the bus after tCWL cycles. Then, data is transferred for tBL cycles to the DRAM device and written to the cells. The next read command can only be issued after tWTR cycles, which guarantees no conflict on the IO circuits in DRAM. The high penalty stems from the fact that the actual write happens at the end of the transaction whereas a read happens right after it is issued. 
For this reason, the opposite order, read to write, has a lower penalty. \n\n\\smallskip\n\\noindent\\textbf{\\textit{NDA Basics.}}\nNear-data accelerators add processing elements near memory to overcome the physical constraints that limit host memory bandwidth. Host peak memory bandwidth is determined by the number of channels and the peak bandwidth per channel. Any NDA accesses on the memory side of a channel can potentially increase overall system bandwidth. \nFor example, a memory module with multiple ranks offers more bandwidth in the module than is available at the channel. Similarly, multiple banks on a DRAM die can also offer more bandwidth than is available off of a DRAM chip.\nHowever, because NDAs only offer a BW advantage when they access data in their local memory, data layout is crucial for performance. A naive layout may result in frequent data movement among NDAs and with the host. \n\n\\smallskip\n\\noindent\\textbf{\\textit{Baseline NDA Architecture.}}\nOur work targets NDAs that are integrated within high-capacity memory modules such that their role as both main memory and as accelerators is balanced. Specifically, our baseline NDA devices are 3D-integrated within DRAM chips on a module (DIMM), similar to 3DS DDR4 \\cite{ddr43ds}, except that a logic die is added. 
DIMMs offer high capacity and predictable memory access. Designs with similar characteristics include on-DIMM PEs~\\cite{ibm_pim_dimm,alian2018nmp} and on-chip PEs within banks~\\cite{upmem}.\nAlternatively, NDAs can utilize high-bandwidth devices, such as the hybrid memory cube (HMC) \\cite{pawlowski2011hybrid} or high bandwidth memory (HBM) \\cite{standard2013high}. These offer high internal bandwidth but have limited capacity and high cost due to numerous point-to-point connections to memory controllers \\cite{asghari2016chameleon}. HMC provides capacity scaling via a network, but this results in high access latency and cost. HBM does not provide such solutions. As a result, HBM devices are better suited for standalone accelerators than for main memory. \n\n\\hpcacut{\n\\fig{fig:baseline_nda} illustrates our baseline NDA architecture. Each DIMM is composed of multiple chips, with one or more DRAM dice stacked on top of a logic die in each chip, using the low-cost commodity 3DS approach. Processing elements (PEs) and a memory controller are located on the logic die. Each PE can access memory internally through the NDA memory controller. However, this internal access cannot conflict with external accesses from the host CPU (host). Therefore, each rank is in either host or NDA access mode and only one can access it at any given time. The host uses chip-address memory-mapped registers to control the NDAs~\\cite{farmahini2015nda}. 
\n}\n\n\\begin{table}\\centering\n \\ra{1.2}\n \\small\n\\begin{tabular}{@{}llll@{}}\\toprule\nOperations & Description & Operations & Description \\\\\n\\midrule\nAXPBY & ${\\vec{z} = \\alpha \\vec{x} + \\beta \\vec{y}}$ & DOT & ${c = \\vec{x} \\cdot \\vec{y}}$ \\\\\nAXPBYPCZ & ${\\vec{w} = \\alpha \\vec{x} + \\beta \\vec{y} + \\gamma \\vec{z}}$ & \tNRM2 & ${c = \\sqrt{\\vec{x} \\cdot \\vec{x}}}$ \\\\\nAXPY & ${\\vec{y} = \\alpha \\vec{y} + \\vec{x}}$ & SCAL & ${\\vec{x} = \\alpha \\vec{x}}$ \\\\\nCOPY & ${\\vec{y} = \\vec{x}}$ & GEMV & ${\\vec{y} = A\\vec{x}}$ \\\\\nXMY & ${\\vec{z} = \\vec{x} \\odot \\vec{y}}$ & & \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Example NDA operations used in our case-study application. Chopim is not limited to these operations.}\n\\label{tab:nda_ops} \n\\vspace*{-4mm}\n\\end{table}\n\n\\smallskip}%{\\medskip\n\\noindent\\textbf{\\textit{Coherence.}}\nCoherence mechanisms between the host and NDAs have been studied in prior NDA work~\\cite{ahn2015pim,boroumand2019conda,boroumand2016lazypim} and can be used as is with Chopim. We therefore do not focus on coherence in this paper. In our experiments, we use the existing coherence approach of explicitly and infrequently copying the small amount of data that is not read-only using cache bypassing and memory fences. \n\n\\smallskip}%{\\medskip\n\\noindent\\textbf{\\textit{Address Translation.}}\n\\meadd{Application use of NDAs requires virtual to physical address translation. Some prior work~\\cite{hsieh2016accelerating,hong2016accelerating,gao2015practical} proposes address translation within NDAs to enable independent NDA execution without host assist. This increases both NDA and system complexity. As an alternative, NDA operations can be constrained to only access data within a physical memory region that is contiguous in the virtual address space. Hence, translation is performed by the host when targeting an NDA command at a certain physical address. 
This has been proposed for both very fine-grain NDA operations within single cache lines~\\cite{ahn2015pim,ahn2016scalable,bssync,kim2017toward,nai2017graphpim} and NDA operations within a virtual memory page~\\cite{oskin1998active}.}\n\\medel{Before the host and\/or NDAs accesses memory, logical-to-physical address translation should be done. One possible approach is to make the host OS do the address translation for all host and NDA accesses. On the other hand, there are prior work \\cite{hsieh2016accelerating,hong2016accelerating} that attempts to do address translation with NDAs to enable independent NDA execution without host's assist.} In this paper, \\meadd{we use host-based translation because of its low complexity and only check bounds within the NDAs for protection.} \\medel{choose the first approach where the host has direct control over NDAs.}\n\n\\smallskip}%{\\medskip\n\\noindent\\textbf{\\textit{NDA Workloads.}}\nWe focus on NDA workloads for which the host inherently cannot outperform an NDA. These exhibit low temporal locality and low arithmetic intensity and are bottlenecked by peak memory bandwidth. By offloading such operations to the NDA, we mitigate the bandwidth bottleneck by leveraging internal memory module bandwidth. Moreover, these workloads typically require simple logic for computation and integrating such logic within DRAM chips\/modules is practical because of the low area and power overhead. \n\n\nFundamental linear algebra matrix and vector operations satisfy these criteria. Dense \\meadd{vector and matrix-vector operations, which are prevalent in machine learning primitives,} are particularly good candidates because of their deterministic and regular memory access patterns and low arithmetic-intensity.\nFor example, prior work off-loads matrix and vector operations of deep learning workloads to utilize high near-memory BW~\\cite{kim2016neurocube,gao2017tetris}.\nAlso, Kwon et al. 
propose to offload the element-wise vector reduction operations needed for a deep-learning-based recommendation system to NDAs~\\cite{kwon2019tensordimm}.\nIn this paper, we focus on accelerating the dense matrix and vector operations summarized in \\tab{tab:nda_ops}. We demonstrate and evaluate their use in the SVRG application in Section \\ref{sec:collaboration}. Note that we use these as a concrete example, but our contributions generalize to other NDA operations.\n\n\n\nNDA execution of graph processing has also been proposed, since graph processing can be bottlenecked by peak memory bandwidth due to low temporal and spatial locality~\\cite{nai2017graphpim,zhang2018graphp,song2018graphr,ahn2016scalable,ahn2015pim}. We do not consider graph processing in this paper because we do not innovate in this context. \n\n\n\n\\hpcacut{\n\\smallskip\n\\noindent\\textbf{\\textit{NDA Instruction Granularity.}}\n\nAddresses used in a user program are mapped to DRAM addresses in two steps: the OS's address translation and the memory controller's address mapping. The granularity of address translation is a \\textit{page}, which is typically 4KB in conventional systems; more coarse-grain (2MB and 1GB) pages are used in systems with huge-page policies. On the other hand, the granularity of address mapping is a cache block (CB), which is typically 64B in CPU systems. Since data within a cache block is contiguous in both logical and DRAM address spaces, once the DRAM address of a CB is determined, the host and NDAs will have the same view of the data within the CB. 
Under direct host control on NDAs, this enables simple programming models for NDA operations and, for this reason, prior work \\cite{ahn2015pim} has adopted NDA instructions that operate on each CB, which we call \\textit{fine-grain NDA instruction}. However, as more NDA devices are connected to the shared bus, more NDA instructions should be sent through the bus and, eventually, NDA performance will be bottlenecked by command bandwidth limitation. This also affects the host performance as contention on the bus increases. \n\nTo solve this problem, our approach is to enable \\textit{coarse-grain NDA instructions}. Each NDA instruction results in longer execution time so that, with less instructions, NDAs can remain active. The main challenge is how to enable this without going through address decoding steps that are required to figure out the DRAM address that NDAs have to access next. \n}\n\n\n\n\\section{Evaluation}\n\\label{sec:evaluation}\n\n\nWe present evaluation results for the various Chopim mechanisms, analyzing:\n(1) the benefit of coarse-grain NDA operations; (2) how bank partitioning improves NDA performance; (3) how stochastic issue and next-rank prediction mitigate read\/write turnarounds; (4) the impact of NDA workload write intensity and load imbalance; (5) how Chopim compares with rank partitioning; (6) the benefits of collaborative and parallel CPU\/NDA processing; and (7) energy efficiency.\n\\meadd{All results rely on the replicated FSM to enable using DDR4.}\n\n\n\\medskip\n\\noindent\\textbf{\\textit{Coarse-grain NDA Operation.}}\n\\fig{fig:cgnda} demonstrates how overhead for launching NDA instructions can degrade performance of the host and NDAs as rank count increases. To prevent other factors, such as bank conflicts, bank-level parallelism, and load imbalance from affecting performance, we use our BP mechanism, the NRM2 operation (because we can precisely control its granularity), and asynchronous launch. 
We run the most memory-intensive application mix (mix1) on the host. When more CBs are processed by each NDA instruction, contention between host transactions and NDA instruction launches decreases and performance of both improves. In addition, as the number of ranks grows, contention becomes severe because more NDA instructions are necessary to keep all NDAs busy. These results show that our data layout that enables coarse-grain NDA operation is beneficial, especially in concurrent access situation.\n\n\\medskip\n\\noindent\\fbox{\\begin{minipage}{0.46\\textwidth}\n Takeaway 1:\n Coarse-grain NDA operations are crucial for mitigating contention on the host memory channel.\n\\end{minipage}}\n\n\\begin{figure}[t!]\n\\centering\n\t\\includegraphics[width=0.48\\textwidth]{eval\/cgnda.pdf}\n\t\\caption{Impact of coarse-grain NDA operations. (X-axis: the number of cache blocks accessed per NDA instruction.)}\n\t\\label{fig:cgnda}\n\t\\vspace*{-4mm}\n\\end{figure}\n\n\\medskip\n\\noindent\\textbf{\\textit{Impact of Bank Partitioning.}}\n\\fig{fig:eval_bpart_vs_bshar} shows performance when banks are shared or partitioned between the host and NDAs\\mereplace{}{ which access different data}. We emphasize the impact of write intensity of NDA operations by running the extreme DOT (read intensive) and COPY (write intensive) operations. \\meadd{While not shown, SVRG falls roughly in the middle of this range.} We compare each memory access mode with an idealized case where we assume the host accesses memory without any contention and NDAs can leverage all the idle rank bandwidth without considering transaction types and other overheads. \n\nOverall, accelerating the read-intensive DOT with concurrent host access does not affect host performance significantly even with our aggressive approach. However, contention with the shared access mode significantly degrades NDA performance. This is because of the extra bank conflicts caused by interleaving host and NDA transactions. 
On the other hand, accelerating the write-intensive COPY degrades host performance. This happens because, in the write phase of NDAs when the NDA write buffer drains, the host reads are blocked while NDAs keep issuing write transactions due to long write-to-read turnaround time. To mitigate this problem, we show the impact of our write throttling mechanisms below. Note that host performance of mix0 is the lowest, despite its doubled core count, because contention for LLC increases and memory performance dominates overall performance.\n\n\\medskip\n\\noindent\\fbox{\\begin{minipage}{0.46\\textwidth}\n Takeaway 2: Bank partitioning increases row-buffer locality and substantially improves NDA performance, especially for read-intensive NDA operations.\n \n\\end{minipage}}\n\n\\begin{figure}[t!]\n\\centering\n\t\\includegraphics[width=0.48\\textwidth]{eval\/bpart_vs_bshar.pdf}\n\t\\caption{Concurrent access to different memory regions.}\n\t\\label{fig:eval_bpart_vs_bshar}\n\\end{figure}\n\n\n\\medskip\n\\noindent\\textbf{\\textit{Mitigating NDA Write Interference.}}\n\\fig{fig:eval_mech_nda_write} shows the impact of mechanisms for write-intensive NDA operations. In this experiment, the most write-intensive operation, COPY, is executed by NDAs and the mechanisms are applied only during the write phase of NDA execution. Stochastic issue is used with two probabilities, 1\/4 and 1\/16, which clearly shows the host-NDA performance tradeoff compared to next-rank prediction. \n\nFor stochastic issue, the tradeoff between host and NDA performance is clear. If NDAs issue with high probability, host performance degrades. The appropriate issue probability can be chosen with heuristics based on host memory intensity though we do not explore this in this paper. On the other hand, the next-rank prediction mechanism shows slightly better behavior than the stochastic approach. Compared to stochastic issue with probability 1\/16, both host and NDA performance are higher. 
Stochastic issue extends the tradeoff range and does not require signaling. \meadd{We use the robust next-rank prediction approach for the rest of the paper.}\n\n\medskip\n\noindent\fbox{\begin{minipage}{0.46\textwidth}\nTakeaway 3: Throttling NDA writes mitigates the large impact of read\/write turnaround interference on host performance; next-rank prediction is robust and effective, while stochastic issue does not require additional signaling. \n\end{minipage}}\n\n\begin{figure}[t!]\n\centering\n\t\includegraphics[width=0.48\textwidth]{eval\/mech_nda_write.pdf}\n\t\caption{Stochastic issue and next-rank prediction impact.}\n\t\label{fig:eval_mech_nda_write}\n\t\vspace*{-4mm}\n\end{figure}\n\n\n\n\medskip\n\noindent\textbf{\textit{Impact of Write-Intensity and Input Size.}}\n\fig{fig:eval_nda_workload} shows host and NDA performance when different types of NDA operations are executed with different input sizes. The host application mix with the highest memory intensity (mix1) and the next-rank prediction mechanism are used. In addition, to identify the impact of input size, three different vector sizes are used: small (8KB\/rank), medium (128KB\/rank), and large (8MB\/rank). We evaluate asynchronous launches with the small vector size. We evaluate GEMV with three matrix sizes, where the number of columns is equal to each of the three vector sizes and the number of rows is fixed at 128.\n\nOverall, performance is inversely related to write intensity, and short execution time per launch results in low NDA performance. The NRM2 operation with the small input has the shortest execution time. Because of its short execution time, NRM2 is highly impacted by the launching overhead and the load imbalance caused by concurrent host access. On the other hand, GEMV executes longer than the other operations and is thus impacted less by load imbalance and launching overhead.
With the asynchronous launch optimization, the impact of load imbalance decreases and NDA bandwidth increases.\n\n\medskip\n\noindent\fbox{\begin{minipage}{0.46\textwidth}\nTakeaway 4: Asynchronous launch mitigates the load imbalance caused by short-duration NDA operations.\n\end{minipage}}\n\n\begin{figure}[t!]\n\centering\n\t\includegraphics[width=0.46\textwidth]{eval\/nda_workload_size.pdf}\n\t\caption{Impact of NDA operations and operand size.}\n\t\label{fig:eval_nda_workload}\n\end{figure}\n\n\n\n\n\n\n\n\n\n\medskip\n\noindent\textbf{\textit{Scalability Comparison.}}\n\fig{fig:eval_scal} compares the performance of Chopim with that of rank partitioning (RP). For RP, we assume that ranks are evenly partitioned between the host and NDAs. Since read- and write-intensive NDA operations show different trends, we separate those two cases. Other application results (SVRG, CG, and SC) are shown to demonstrate that their performance falls between these two extreme cases.\bcut{We do not evaluate SVRG with RP because it disallows sharing.} We use the most memory-intensive mix1 as the host workload. \nThe first cluster shows performance when the baseline DRAM system is used. For both the read- and write-intensive NDA workloads, Chopim performs better than rank partitioning. This shows that opportunistically exploiting idle rank bandwidth can be a better option than dedicating ranks for acceleration. The second cluster shows performance when the number of ranks is doubled. Compared to rank partitioning, Chopim shows better performance scalability. While NDA bandwidth with rank partitioning exactly doubles, Chopim's NDA bandwidth more than doubles due to the increased idle time per rank.
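The scaling trend can be illustrated with a simple back-of-the-envelope model (a sketch only; the per-rank bandwidth and host-demand figures below are assumed values for illustration, not measurements from our system):

```python
def rp_nda_bw(ranks, rank_bw=19.2):
    # Rank partitioning (RP): half of the ranks are dedicated to NDAs,
    # so NDA bandwidth grows exactly linearly with the dedicated count.
    return (ranks // 2) * rank_bw

def chopim_nda_bw(ranks, rank_bw=19.2, host_demand=38.4):
    # Opportunistic sharing: NDAs use whatever rank bandwidth the host
    # leaves idle. The host's channel demand is roughly fixed, so the
    # idle share grows super-linearly as ranks are added.
    total = ranks * rank_bw
    return total - min(host_demand, total)

for ranks in (4, 8):
    print(ranks, round(rp_nda_bw(ranks), 1), round(chopim_nda_bw(ranks), 1))
```

Under RP the NDA bandwidth is tied to the dedicated rank count alone, while under opportunistic sharing it is total bandwidth minus the host's roughly fixed demand, which is why doubling the ranks more than doubles NDA bandwidth in the latter case.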
\n\n\medskip\n\noindent\fbox{\begin{minipage}{0.46\textwidth}\n Takeaway 5: Chopim scales better than rank partitioning because short issue opportunities grow with rank count.\n\end{minipage}}\n\n\begin{figure}[t!]\n\centering\n\t\includegraphics[width=0.43\textwidth]{eval\/scalability.pdf}\n\t\caption{Scalability: Chopim vs.~rank partitioning.}\n\t\label{fig:eval_scal}\n\t\vspace*{-4mm}\n\end{figure}\n\n\medskip\n\noindent\textbf{\textit{SVRG Collaboration Benefits.}}\n\fig{fig:eval_svrg} shows the convergence results with and without NDA (8 NDAs). We use a shared memory region to enable concurrent access to the same data, and the next-rank prediction mechanism is used. Compared to the host-only case, the optimal epoch size decreases from \textit{N} to \textit{N\/4} when NDAs are used. This is because the overhead of summarization decreases relative to the host-only case. Furthermore, SVRG with delayed updates gains additional performance, demonstrating the benefits made possible by concurrent host and NDA access when each processes the portion of the workload it is best suited for. Though the delayed update updates the correction term more frequently, the best-performing learning rate is lower than ACC with epoch \textit{N\/4}, which shows the impact of staleness on the delayed update.\n\nWhen NDA performance grows by adding NDAs (additional ranks), delayed-update SVRG demonstrates better performance scalability. \fig{fig:eval_svrg_speedup} compares the performance of the best-tuned serialized and delayed-update SVRG with that of host-only execution with different numbers of NDAs. We measure performance as the time it takes the training loss to converge (when it reaches within $10^{-13}$ of the optimum).
Because more NDAs can calculate the correction term faster, its staleness decreases; consequently, a higher learning rate and faster convergence are possible.\n\n\medskip\n\noindent\fbox{\begin{minipage}{0.46\textwidth}\nTakeaway 6: Collaborative host-NDA processing on shared data speeds up SVRG logistic regression by 50\%. \n\end{minipage}}\n\n\begin{figure}[t!]\n\centering\n\t\subfloat [Convergence over time with and without NDA.] {\n\t\t\includegraphics[width=0.43\textwidth]{eval\/svrg.pdf}\n\t\t\label{fig:eval_svrg}\n\t} \\\n\t\subfloat [NDA speedup scaling (normalized to host only).] {\n\t\t\includegraphics[width=0.37\textwidth]{eval\/svrg_speedup.pdf}\n\t\t\label{fig:eval_svrg_speedup}\n\t} \\\n\t\caption{Impact of NDA summarization in SVRG with and without delayed update (HO: Host-Only, ACC: Accelerated with NDAs, ACC\_Best: Best among all ACC options).}\n\t\label{fig:eval_svrg_results}\n\t\vspace*{-4mm}\n\end{figure}\n\n\medskip\n\noindent\textbf{\textit{Memory Power.}}\nWe estimate the power dissipation in the memory system under concurrent access. The theoretical maximum power of the memory system is 8W when only the host accesses memory. When the most memory-intensive application mixes are executed, the average power is 3.6W. The maximum power of NDAs is 3.7W and is dissipated when the scratchpad memory is maximally used in the average-gradient computation. In total, up to 7.3W of power is dissipated in the memory system, which is lower than the maximum possible with host-only access. This power efficiency of NDAs comes from low-energy internal memory accesses and from Chopim's minimal overheads.\n\n\medskip\n\noindent\fbox{\begin{minipage}{0.46\textwidth}\nTakeaway 7: Operating multiple ranks for concurrent access does not increase memory power significantly.
\n\end{minipage}}\n\n\n\n\section{Conclusion} \n\label{sec:conclusion}\n\nIn this paper, we introduced solutions to share ranks and enable concurrent access between the host and NDAs. Instead of partitioning memory in a coarse-grain manner, both temporally and spatially, we interleave accesses in a fine-grain manner to leverage the unutilized rank bandwidth. To maximize bandwidth utilization, Chopim coordinates state between the memory controllers of the host and NDAs with low overhead, reduces extra bank conflicts through bank partitioning, throttles NDA write transactions with stochastic issue and next-rank prediction to mitigate the penalty of read\/write turnaround time, and provides a single data layout that allows the host and NDAs to access the same data at high performance. Our case study also shows that collaborative execution between the host and NDAs can provide better performance than using just one of them at a time. Chopim offers insights into practically enabling NDAs while serving main-memory requests in real systems, and it enables more effective acceleration by eliminating data copies and encouraging tighter host-NDA collaboration.\n\n\n\section{Introduction} \n\label{sec:intro}\n\nProcessing data in or near memory using \emph{near data accelerators} (NDAs) is attractive for applications with low temporal locality and low arithmetic intensity.
NDAs help by \nperforming computation close to data, saving power and utilizing proximity to overcome the bandwidth bottleneck of a main memory ``bus'' (e.g.,~\\cite{stone1970pim,kogge1994execube,gokhale1995processing,kogge1997processing,patterson1997case,kang1999flexram,guo20143d,farmahini2015nda,ahn2015pim,ahn2016scalable,asghari2016chameleon,gao2017tetris,alian2018nmp,alian2019netdimm,liu2018processing,boroumand2019conda}).\nDespite decades of research and recent demonstration of true NDA technology~\\cite{upmem,alian2018nmp,ibm_pim_dimm,pawlowski2011hybrid,nair2015active}, many challenges remain for making NDAs practical, especially in the context of \\emph{main-memory NDA}.\n\nIn this paper we address several of these outstanding issues in the context of an NDA-enabled main memory. Our focus is on memory that can be concurrently accessed both as an NDA and as a memory. Such memory offers the powerful capability for the NDA and host processor to collaboratively process data without costly data copies. Prior research in this context is limited to fine-grained NDA operations of, at most, cache-line granularity. However, we develop techniques for coarse-grain NDA operations that amortize host interactions across processing entire DRAM rows.\nAt the same time, our NDA does not block host memory access, even when the memory devices are controlled directly by the host (e.g., a DDRx-like DIMM), which can reduce access latency and ease adoption.\n\n\n\n\n\\fig{fig:baseline_nda} illustrates an exemplary NDA architecture, which presents the challenges we address, and is similar to other recently-researched main-memory NDAs~\\cite{farmahini2015nda,asghari2016chameleon,alian2018nmp}. We choose a DIMM-based memory system because it offers the high capacity required for a high-end server's main memory.\nEach DIMM is composed of multiple chips, with one or more DRAM dice stacked on top of a logic die in each chip, using a low-cost commodity 3DS-like approach. 
Processing elements (PEs) and a memory controller are located on the logic die. Each PE can access memory internally through the NDA memory controller. These local NDA accesses must not conflict with external accesses from the host (e.g., a CPU). A rank that is being accessed by the host cannot at the same time serve NDA requests, though the bandwidth of all other ranks in the channel can be used by the NDAs. \nThere is no communication between PEs\nother than through the host. While not identical, recent commercial NDA-enabled memories exhibit similar overall characteristics~\\cite{upmem,ibm_pim_dimm}. \n\n\n\\meadd{Surprisingly, no prior work on NDA-enabled main memory examines the architectural challenges of simultaneous and concurrent access to memory devices from both the host and NDAs. In this work, we address two key challenges for enabling performance-efficient NDAs in a memory system that supports concurrent access from both a high-performance host and the NDAs.}\n\nThe first challenge is that interleaved accesses may hurt memory performance because they can both decrease row-buffer locality and introduce additional read\/write turnaround penalties. The second challenge is that each NDA can process kernels that consume entire arrays, though all the data that a single operation processes must be local to a PE (e.g., a memory chip). Therefore, enabling cooperative processing requires that host physical addresses are mapped to memory locations (channel, rank, bank, etc.) in a way that both achieves high host-access performance (through effective and complex interleaving) and maintains NDA locality across all elements of all operands of a kernel.\nWe note that these challenges exist when using either a packetized interface, where the memory-side controller interleaves accesses between NDAs and the host, or a traditional host-side memory controller that sends explicit low-level memory commands. 
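The data-layout tension described above can be made concrete with a small sketch. The XOR-folded rank hash below is hypothetical (the shift amounts and rank count are our own illustrative choices, not the mapping of any particular memory controller):

```python
def rank_of(paddr, rank_bits=2):
    # Hypothetical XOR-folded interleaving hash: several address slices
    # are XORed so that consecutive cache blocks spread across ranks.
    idx = (paddr >> 6) ^ (paddr >> 13) ^ (paddr >> 17)
    return idx & ((1 << rank_bits) - 1)

# Walk one contiguous 4 KiB operand cache block by cache block: it is
# scattered across all four ranks, so no single NDA sees it whole.
ranks_touched = {rank_of(0x100000 + off) for off in range(0, 4096, 64)}
print(sorted(ranks_touched))
```

Interleaving like this maximizes parallelism for the host, but it means a naive layout leaves no single NDA with a complete operand, which is exactly the locality problem a cooperative data layout must solve.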
\n\n\begin{figure}[b!ht]\n\centering\n\t\includegraphics[width=0.35\textwidth]{fig\/baseline_nda.pdf}\n\t\caption{Exemplary NDA architecture.}\n\t\label{fig:baseline_nda}\n\t\vspace*{-4mm}\n\end{figure}\n\n\n\n\emph{For the first challenge (managing concurrent access)}, we identify reduced row-buffer locality because of interleaved host requests as interfering with NDA performance. In contrast, it is the increased read\/write turnaround frequency resulting from NDA writes that mainly interferes with the host. We provide two solutions in this context. First, we develop a new bank-partitioning scheme that limits interference to just those memory regions that are shared by the host and NDAs, thus enabling colocating host-only tasks with tasks that use the NDAs. This new scheme is the first that is compatible with huge pages and also with the advanced memory interleaving functions used in recent processors.\nPartitioning mitigates interference from the host to the NDAs and substantially boosts their performance (by $1.5-2\times$).
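One way such a partitioning scheme can coexist with huge pages and hashed interleaving is to source one bank-index bit from an OS-controlled frame color located above the huge-page offset. The sketch below uses hypothetical bit positions and a hypothetical hash, not the exact hardware mapping:

```python
HUGE_PAGE = 1 << 21   # 2 MiB huge-page size
COLOR_BIT = 22        # hypothetical color bit, above the huge-page offset

def bank_of(paddr, bank_bits=3):
    # The top bank bit comes straight from the frame color; the rest
    # come from an illustrative XOR hash. Since the color bit sits
    # above the huge-page offset, coloring never constrains huge pages.
    color = (paddr >> COLOR_BIT) & 1
    hashed = ((paddr >> 6) ^ (paddr >> 14)) & ((1 << (bank_bits - 1)) - 1)
    return (color << (bank_bits - 1)) | hashed

# A color-0 (host-only) huge page and a color-1 (host/NDA-shared) huge
# page land in disjoint halves of the banks.
host_banks = {bank_of(0x000000 + off) for off in range(0, HUGE_PAGE, 64)}
nda_banks = {bank_of(0x400000 + off) for off in range(0, HUGE_PAGE, 64)}
print(sorted(host_banks), sorted(nda_banks))
```

Because the color bit lies above the 2 MiB page offset, the OS can color even huge-page frames freely while the hashed low bank bits still spread each region over its own bank set.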
\n\nSecond, we control interference on shared ranks by opportunistically issuing NDA memory commands to ranks that are even briefly unused by the host, and we curb NDA-to-host interference with mechanisms that throttle NDA requests, either selectively when we predict a conflict (\emph{next-rank prediction}) or stochastically (\emph{stochastic issue}).\n\n\n\emph{For the second challenge (NDA operand locality)}, we enable fine-grain collaboration by architecting a new data layout that preserves locality of operands within the distributed NDAs while simultaneously affording parallel accesses by the high-performance host. This layout requires minor modifications to the memory controller and utilizes coarse-grain allocations and physical-frame coloring in OS memory allocation. This combination allows large arrays to be shuffled across memory devices (and their associated NDAs) in a coordinated manner such that they remain aligned in each NDA. This is crucial for coarse-grain NDA operations that can achieve higher performance and efficiency than cacheline-oriented fine-grain NDAs (e.g.,~\cite{ahn2015pim,kim2017toward,hsieh2016transparent}). \n\n\n\n\emph{An additional and important challenge} exists in systems where the host maximizes its memory performance by directly controlling memory devices \meadd{rather than relying on a packetized interface~\cite{pawlowski2011hybrid,hadidi2018performance}}. Adding NDA capabilities requires providing local memory controllers near memory in addition to the host ones\meadd{, which introduces a coordination challenge}. We coordinate memory controllers and ensure a consistent view of bank and timing state \meadd{with only minimal signaling that does not impact performance by replicating the controller finite state machines (FSMs) at both the NDA and host sides of the memory channels}.\nReplicating the FSM requires all NDA accesses to be determined only by the NDA operation (known to the host controller) and any host memory operations.
Thus, no explicit signaling is required from the NDAs back to the host. We therefore require that for non-packetized NDAs, each NDA operation has a deterministic access pattern for all its operands (which may be arbitrarily fine-grained). \n\nIn this paper, we introduce \\emph{Chopim}, a SW\/HW holistic solution that enables concurrent host and NDA access to main memory by addressing the challenges above with fine temporal access interleaving to physically-shared memory devices. We perform a detailed evaluation both when the host and NDA tasks process different data and when they collaborate on a single application. We demonstrate that Chopim enables high NDA memory throughput (up to 97\\% of unutilized bandwidth) while maintaining host performance. Performance and scalability are better than with prior approaches of partitioning ranks and only allowing coarse-grain temporal interleaving, or with only fine-grain NDA operations. \n\nWe demonstrate the potential of host and NDA collaboration by studying a machine-learning application (logistic regression with stochastic variance-reduced gradient descent~\\cite{johnson2013accelerating}). We map this application to the host and NDAs such that the host stochastically updates weights in a tight inner loop that utilizes the speculation and locality mechanisms of the CPU while NDAs concurrently compute a correction term across the entire input data that helps the algorithm converge faster. Collaborative and parallel NDA and host execution can speed up this application by $2\\times$ compared to host-only execution and $1.6\\times$ compared to non-concurrent host and NDA execution. 
We then evaluate the impact of colocating such an accelerated application with host-only tasks.\n\n\nIn summary, we make the following main contributions:\n\\begin{itemize}\n\\item We identify new challenges in concurrent access to memory from the host and NDAs: bank conflicts from host accesses curb NDA performance and read\/write-turnaround penalties from NDA writes lower host performance.\n\\item We reduce bank conflicts with a new bank partitioning architecture that, for the first time, is compatible with both huge pages and sophisticated memory interleaving.\n\\item To decrease read\/write-turnaround overheads, we throttle NDA writes with two mechanisms: \\textit{next-rank prediction} delays NDA writes to the rank actively read by the CPU; and \\textit{stochastic issue} throttles NDA writes randomly at a configurable rate.\n \n \n \n\\item We develop, also for the first time, a memory data layout that is compatible with both the host and NDAs, enabling them to collaboratively process the same data in parallel while maintaining high host performance with sophisticated memory address interleaving.\n\n\\item To show the potential of collaboratively processing the same data, we conduct a case study of an important ML algorithm that leverages the fast CPU for its main training loop and the high-BW NDAs for summarization steps that touch the entire dataset. 
We develop a variant that executes on the NDAs and CPU in parallel, which increases the speedup to $2\times$.\n\n\end{itemize}\n\n\n\n\n\n\n\n\n\n\section{Motivation}\n\label{sec:motiv}\nWe motivate our work with three key questions for a main-memory NDA, which we later answer with the Chopim architecture.\n\n\medskip\n\noindent\textbf{\textit{Q1: How can NDA-enabled DRAM be simultaneously used for both compute and host high-capacity memory?}} \n\nAs NDA-equipped memory devices are not only accelerators but also main memory, it is important to effectively manage the situation where memory is simultaneously accessed by both the host and NDAs. \textit{The main challenge is how to maximize NDA performance and minimize host performance degradation while avoiding conflicts on the shared data path between the host and NDAs.}\nPrior work takes three directions: \textit{unified memory interface}, \textit{spatial partitioning}, and \textit{time sharing}. In the first approach, all memory requests from the host and NDAs are managed in one place and issued through the same memory interface. However, as all the memory commands are transferred through the same bus, performance scalability with the number of ranks is limited by the command-bus bandwidth. The next two approaches solve this by separating the memory interfaces of the host and NDAs. We also assume that each NDA can independently access memory with its own MC.\n\nThe second approach partitions memory into two independent groups (e.g., groups of ranks) and allows the host and NDAs to exclusively access their own group of memory [?]. This approach requires a large fraction of memory capacity to be reserved for NDAs. Moreover, the potential bandwidth gain of NDAs is limited by the number of ranks dedicated to NDAs. In time sharing, ownership of memory alternates between the host and NDAs, and only one of them accesses memory at a time [?].
Before the ownership switches, an existing mechanism initializes all the bank state, which incurs initialization and warm-up overhead. Therefore, coarse-grain switching is required to amortize this overhead (see Section ? for more details). However, with this mechanism, the time allocated to each processor directly determines the performance of the host and NDAs. \n\nTo mitigate this strict performance tradeoff, our approach leverages the internal memory bandwidth for NDA execution that is unutilized while the host accesses memory. \fig{fig:motiv_rank_idle} shows the bandwidth utilization of rank internal buses when only memory-intensive host programs are executed. Our application mixes and the baseline configuration are summarized in Table \ref{tab:eval_config}. Overall, about 60\% of the internal-bus bandwidth is unused. However, the majority of idle periods are just $10-250$ cycles. Therefore, to utilize these idle periods for NDAs, mechanisms for fine-grain ownership switching are required. \n\n\begin{figure}[t!]\n\centering\n\t\includegraphics[width=0.48\textwidth]{fig\/motiv_rank_idle.pdf}\n\t\caption{Rank idle-time breakdown vs. idleness granularity.}\n\t\label{fig:motiv_rank_idle}\n\end{figure}\n\n\n\n\medskip\n\noindent\textbf{\textit{Q2: How can the unique position of NDAs, which always share memory devices with the host, be exploited?}}\n\nWe focus on the uniqueness of NDAs compared to other discrete accelerators. If many different accelerators exist in a system, what are the reasons to use NDAs? Unlike other accelerators, NDAs share high-capacity memory with the host and can access data with high peak bandwidth. This provides an opportunity for the host and NDAs to access the same data and collaboratively process it in a fine-grain manner or even concurrently. The next question is whether any application can benefit from the collaboration enabled by this uniqueness of NDAs.
Such an application should meet the following requirements: (1) the application should effectively leverage the strengths of each processor, (2) coherence should be handled infrequently, and (3) both processors should work on large shared data. We introduce one use case, \textit{stochastic variance reduced gradient (SVRG)}, that satisfies these requirements, in Section \ref{sec:collaboration}.\n\nSince the host and NDAs share the same data, we cannot customize the data layout for each processor. Prior work \cite{farmahini2015nda,akin2015hamlet} reorganizes data between ownership switches, but this approach cannot enable fine-grain access interleaving between the host and NDAs to the shared data. Therefore, the main challenge is how to find an optimal single data layout. When either just the host or just the NDAs own memory for a fairly long time, we can customize the data layout for each processor, possibly copying and laying out data differently when switching the memory ownership. However, concurrent NDA and host access to the same data requires a single data layout that works well for both the host and NDAs at the same time. Otherwise, two copies of the data with different layouts are necessary, incurring high capacity overhead.\n\nTo meet the data-layout requirements of both processors, the following must be considered. The host can maximally exploit bank-level parallelism when data is well-distributed across banks. To accomplish this, modern MCs use complex address hash functions for physical-to-DRAM address mapping. On the other hand, NDAs can only fully utilize internal peak bandwidth when operands are all in local memory. \n\n\medskip\n\noindent\textbf{\textit{Q3: How can we practically address the above questions under direct host control for non-packetized DRAMs?}}\n \nTo enable near-data acceleration and still benefit from the deterministic and low memory latency of non-packetized DRAMs, the above problems should be resolved under direct host control.
Though the host and NDAs seem to share only the internal DRAM bus, in fact they contend for the command, address, and data (DAC) bus that connects them, since the host controls the NDAs directly via the DAC bus. \n\nTo mitigate this contention and realize high host and NDA performance, the following problems should be resolved without frequent host-NDA communication. First, each memory controller should efficiently track global memory-controller state to support fine-grain access interleaving. Tracking this state without reinitializing it before every ownership switch would naively require the two MCs to communicate frequently; however, such frequent communication must be avoided to mitigate contention on the DAC bus. \n\nLastly, to minimize the number of NDA instructions launched by the host, each NDA instruction should pack a large amount of work into a compact format (which we call a \textit{macro NDA instruction}) so that a few NDA instructions can keep the NDAs busy for a fairly long time. However, because operand addresses are determined by the OS and the host's memory controller, they would otherwise have to be calculated by the host and sent to the NDAs for each cache-block access, as in PEI \cite{ahn2015pim}. Therefore, to enable macro NDA instructions, NDAs should determine the addresses of their next operands without the host's help. \n\n\medskip\n\noindent\textbf{\textit{Summary.}}\nTo support host-NDA concurrent access to the same memory devices, the following problems should be resolved. \n\n\begin{itemize}[topsep=.5ex,itemsep=.5ex,partopsep=1ex,parsep=0ex]\n\t\item Fine-grain ownership switching is required to efficiently share the same memory among independent host and NDA processes.
\n\t\item A single data layout is required for the data shared between the host and NDAs to avoid capacity and performance overhead while collaboratively processing it.\n\t\item To solve the above problems under direct host control, the host and NDAs should communicate minimally while still supporting necessary functions such as state tracking and instruction launching.\n\end{itemize}\n\n\n\n \n \n\n\n\n\section{Host-NDA Collaboration}\n\label{sec:collaboration}\n\nIn this section, we describe a case study to show the potential of concurrent host-NDA execution by collaboratively processing the same data. Our case study shows how to partition \meadd{an ML training task between the host and NDAs such that each processor leverages its strengths. As is common to training and many data-processing tasks, the vast majority of shared data is read-only, simplifying parallelism.}\n\medel{Also, our case study is a good example since infrequent and low-overhead operations are required to maintain coherence while the host and NDAs can independently access large and shared read-only data of which access time dominates the overall execution time.}\n\nWe use the machine-learning technique of logistic regression with stochastic variance reduced gradient (SVRG)~\cite{johnson2013accelerating} as our case study. \medel{SVRG is a machine learning technique that enables faster convergence by reducing variance introduced by sampling.} \fig{fig:svrg} shows a simplified version of SVRG and the opportunity for collaboration.\n\meadd{The algorithm consists of two main tasks within each outer-loop iteration. First, the entire large input matrix \textit{A} is \emph{summarized} into a single vector \textit{g} (see \fig{fig:impl_avg_grad_example_code} for pseudocode). This vector is used as a correction term when updating the model in the second task. This second task consists of multiple inner-loop iterations.
In each inner-loop iteration, the learned model \textit{w} is updated based on a randomly-sampled vector \textit{a} from the large input matrix \textit{A}, the correction term \textit{g}, and a stored model \textit{s}, which is itself updated at the end of the outer-loop iteration. }\n\medel{\nA large input matrix, \textit{A}, is evenly partitioned into multiple tiles and stored in memory. In every inner-loop iteration, the host samples a random element \textit{a} within \textit{A} to update the learned model \textit{w}. Other than the large input, other data (\textit{w, s,} and \textit{g}) takes advantage of the CPU caches. The tight inner loop is therefore ideally suited for high-end CPU execution.\n}\n\n\begin{figure}[t!]\n\centering\n\t\includegraphics[width=0.48\textwidth]{fig\/svrg.pdf}\n\t\caption{Collaboration between host and NDAs in SVRG.}\n\t\label{fig:svrg}\n\t\vspace*{-4mm}\n\end{figure}\n\nThe first task is an excellent match for the NDAs. \n\medel{The SVRG algorithm periodically calculates a correction term, \textit{g}, by \textit{summarizing} the entire input data (example code in \fig{fig:impl_avg_grad_example_code}). Because} The summarization operation is simple, exhibits little reuse, and traverses the entire large input data. \medel{, it is ideally suited for the NDAs The term \textit{g} is used for correcting error in the host workload, \textit{f}. With Chopim,}\nIn contrast, the second task with its tight inner loop is well suited for the host. The host can maximally exploit the locality captured by its caches while NDAs can leverage their high bandwidth for accessing the entire input data \textit{A}. Note that in SVRG, an \textit{epoch} refers to the number of inner-loop iterations.\n\nThe main tradeoff in SVRG is as follows. When summarization is done more frequently, the quality of the correction term increases and, consequently, the per-step convergence rate increases.
On the other hand, the overhead of summarization also increases when it is performed more frequently, which offsets the improved convergence rate. Therefore, the \\textit{epoch} hyper-parameter, which determines the frequency of summarization, should be carefully selected to optimize this tradeoff.\n\n\\medskip\n\\noindent\\textbf{\\textit{Delayed-Update SVRG.}}\nAs Chopim enables concurrent access from the host and NDAs, we explore an algorithm change to leverage collaborative parallel processing. Instead of alternating between the summarization and model update tasks, we run them in parallel on the host and NDAs. Whenever the NDAs finish computing the correction term, the host and NDAs exchange the correction term and the most up-to-date weights before continuing parallel execution. While parallel execution is faster, it results in using stale \\textit{s} and \\textit{g} values from one epoch behind. The main tradeoff in \\textit{delayed-update SVRG} is that per-iteration time is improved by overlapping execution, whereas convergence rate per iteration degrades due to the staleness. Similar tradeoffs have been observed in prior work \\cite{bengio2003neural,langford2009slow,recht2011hogwild,dean2012large}. \\meadd{We later show that delayed-update SVRG can converge in $40\\%$ less time than when serializing the two main SVRG tasks.}\n\nTo avoid races for \\textit{s} and \\textit{g} in this delayed-update SVRG, we maintain private copies of these small variables and use a memory fence that guarantees completion of DRAM writes after the data-exchange step (which the runtime coordinates with polling). Note that we bypass caches when accessing data produced\/consumed by NDAs during the data-exchange step. Since \\textit{s} and \\textit{g} are small and copied infrequently, the overheads are small and amortized over numerous NDA computations. 
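To make the scheme concrete, the following plain-Python sketch contrasts serialized SVRG with the delayed-update variant on a toy one-dimensional least-squares problem. All names, hyper-parameters, and the problem itself are illustrative, not part of Chopim's implementation; the point is only that the delayed variant consumes \textit{s} and \textit{g} that are one epoch stale.

```python
import random

def grad_i(w, x, y):
    # per-sample gradient of 0.5*(w*x - y)^2 with respect to w
    return (w * x - y) * x

def full_gradient(w, data):
    # "summarization": traverse the entire input once (the NDA task)
    return sum(grad_i(w, x, y) for x, y in data) / len(data)

def svrg(data, epochs=10, inner=100, lr=0.1, delayed=False, seed=0):
    rng = random.Random(seed)
    w = 0.0
    s, g = w, full_gradient(w, data)   # stored model and correction term
    for _ in range(epochs):
        if delayed:
            # NDAs summarize the snapshot taken at epoch start, concurrently
            # with the host's inner loop; the result is used one epoch later.
            s_next, g_next = w, full_gradient(w, data)
        for _ in range(inner):          # the host's tight inner loop
            x, y = rng.choice(data)
            # variance-reduced update using (possibly stale) s and g
            w -= lr * (grad_i(w, x, y) - grad_i(s, x, y) + g)
        if delayed:
            s, g = s_next, g_next       # data-exchange step (stale by one epoch)
        else:
            s, g = w, full_gradient(w, data)  # serialized summarization
    return w

data = [(x / 10.0, 3.0 * x / 10.0) for x in range(1, 11)]  # samples of y = 3x
w_serial = svrg(data)
w_delayed = svrg(data, delayed=True)
```

Both variants approach the true solution ($w = 3$); the delayed variant simply pays a small per-epoch staleness penalty in exchange for overlapped execution.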
Whether delayed updates are used or not, the host and NDAs share the large data, \textit{A}, without copies.\n\n\n\n\section{Chopim}\n\label{sec:chonda}\n\nIn this section, we present a set of solutions to enable concurrent access to the same memory and realize high performance.\n\n\n\subsection{Opportunistic NDA Issue}\n\label{subsec:opportunistic_nda_issue}\n\nThe basic policy for Chopim is to aggressively leverage the unutilized rank bandwidth by issuing NDA commands whenever possible. That is, if no incoming host request is detected, NDAs always issue their commands. Since the NDAs can issue whenever there is an opportunity, this maximizes bandwidth utilization.\nOne potential problem is that an NDA command issued in one cycle may delay a host command that could have issued in the next. Fortunately, read transactions of NDAs have a small impact on following host commands and ACT and PRE commands are issued infrequently by NDAs. \mdel{However, if an NDA issues a write transaction in one cycle and the next host command is a read, the penalty of write-to-read turnaround is not negligible. We address this below.}\n\n\n\subsection{Throttling NDA Writes}\n\label{subsec:block_nda_write}\n\nWhen NDAs aggressively issue write commands, host read transactions are blocked by the write-to-read turnaround penalty, while NDA write transactions keep issuing because the rank appears idle. This degrades host performance while improving NDA performance. To avoid this starvation problem, Chopim provides mechanisms to throttle NDA write transactions. \n\nThe first mechanism is to issue NDA writes with a predefined probability, reducing the rate of NDA writes. We call this mechanism \textit{stochastic NDA issue}. When NDAs detect rank idleness, they flip a coin and decide whether to issue a write transaction or not. 
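A toy cycle-level model (all parameters hypothetical, not Chopim's simulator) illustrates how the issue probability throttles NDA writes on idle cycles:

```python
import random

def simulate(issue_prob, cycles=100_000, host_busy_frac=0.5, seed=1):
    """Toy model: each cycle the rank is either used by the host or idle.
    On an idle cycle the NDA flips a biased coin before issuing a write."""
    rng = random.Random(seed)
    nda_writes = 0
    for _ in range(cycles):
        host_uses_rank = rng.random() < host_busy_frac
        if not host_uses_rank and rng.random() < issue_prob:
            nda_writes += 1  # stochastic NDA issue
    return nda_writes

full = simulate(issue_prob=1.0)   # aggressive: use every idle cycle
half = simulate(issue_prob=0.5)   # throttled: roughly half the idle cycles
```

Lowering `issue_prob` directly reduces NDA write traffic, and with it the write-to-read turnaround penalties imposed on subsequent host reads.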
By adjusting the probability, the performance of host and NDAs can be traded off.\n\nThe second approach is to predict the rank that the host is going to access next and prevent NDA writes in that rank from issuing. The prediction is based on the requests waiting in the transaction queue. \nWith our baseline FR-FCFS scheduler, we observe that the oldest request in the queue will be served next with high likelihood even though the scheduler can reorder memory operations. In other words, even a simple heuristic can be used to predict the next memory scheduler decision about which rank the host will use. Because the host MC knows whether the NDA that accesses that rank is in write mode or not, it can decide to throttle the NDA. For a packetized memory interface, such as HMC, no interface change is required to enable this mechanism since one memory controller has all the required information and control over memory requests from both sides. However, for a DDR interface, we assume a dedicated pin is available for signaling the NDA to block its next write.\n\n\n\n\n\subsection{Shared vs.~Partitioned NDA\/Host Regions}\n\label{subsec:tradeoff_cap_perf}\n\nChopim offers two approaches for utilizing main memory by NDAs. The first is to partition memory such that NDAs only access one subset of memory while regular host loads and stores access the complement subset. Accesses to memory from NDAs and the host are still concurrent. In the second mode, the host and NDAs concurrently access the same range of addresses. The partitioned mode provides isolation and reduces interference. However, partitioning removes physical addresses from the host and assigns them to the NDA. Hence, changing partitioning decisions should be done infrequently, reducing flexibility. \madd{On the other hand,} sharing provides flexibility in which data is processed by the NDA and eliminates the need for data copies. 
The two modes can be mixed with a portion of memory dedicated to NDA access and other regions that are shared, though we do not explore mixed isolation and sharing in this paper.\n\nWe address two main challenges to enable the two modes above. \nThe challenge with sharing addresses between host and NDAs is in how data is laid out across memory locations. The typical layout for the host spreads data across DRAM chips and ranks, but chip- and rank-locality is required for the NDAs. We resolve this data alignment issue by rearranging data at the host memory controller for chip-locality and relying on physical frame contiguity or page coloring by the OS.\nWe use \textit{bank partitioning} to provide isolation and minimally impact host performance. We develop a new bank partitioning mechanism that permits the use of sophisticated physical-to-DRAM address mapping functions, \madd{even when huge pages that span many banks are used}. \n\n\n\n\n\n\n\n\n\n\mcut{Our simulation results \nThe second memory option is the large-shared memory where the host and NDA transactions contend for the same banks. As a result, extra bank conflicts occur while interleaving transactions and this significantly degrades NDA performance. On the other hand, NDAs can be used without worrying about the capacity limit in this large-shared memory region. However, to allow concurrent access to the same data in this region, a data alignment issue should be resolved.\nIn the large-shared memory, the baseline address mapping is used and its address hashing causes a data alignment issue for NDAs. Typically, address hashing is used to shuffle data across banks in the system but this hinders localizing operands of NDA operations.\nTo solve this problem, we apply the same page coloring and remapping mechanism in a slightly different way. 
The detailed mechanism is explained in Section \ref{subsec:impl_data_layout}.\n}\n\n\n\n\n\n\n\n\subsubsection{Data Layout for Shared Addresses}\n\label{subsec:impl_data_layout}\n\nData layout in a shared region is challenging because the host and NDAs have different constraints or preferences for data layout: the host prefers spreading addresses across chips, ranks, banks, and channels to maximize bandwidth and reduce bank conflict likelihood while the NDAs require contiguity within chips and ranks. To satisfy both, we focus on laying out data at the chip (device) and rank levels. \n\n\n\medskip\n\noindent\textbf{\textit{Data Layout Across DRAM Chips.}}\nIn the baseline system, each \rev{4-byte} word is striped across multiple chips,\nwhereas in our approach each word is located in a single chip so that NDAs can access words from their local memory.\nBoth the host and NDAs can access memory without copying or reformatting data (as required by prior work~\cite{farmahini2015nda}). Memory blocks still align with cache lines, so this layout change is not visible to software. This layout precludes the critical-word-first optimization from DRAM, but recent work concludes the impact is minimal because the relative latency difference in current memory systems is very small (e.g.,~\cite{yoon2012dgms}). \n\n\n\medskip\n\noindent\textbf{\textit{Data Layout Across Ranks.}}\nA typical hash-based physical-to-DRAM address mapping rearranges contiguous addresses across multiple DRAM ranks and banks, breaking the locality needed for NDAs. \fig{fig:rank_layout} shows an example of a naive data layout using the baseline address mapping (left) and the desired layout for shared regions (right). \madd{In the naive layout}, the first element of vector ${A}$ and that of vector ${B}$ are located in different ranks, breaking NDA locality. 
That same data is in the same rank in the desired layout, while still satisfying host access requirements \madd{for high bandwidth and low bank contention}. We achieve the desired behavior by mapping NDA operands that are used together to groups of frames that all have the same \sout{shuffling} \rev{interleaving} pattern across channels and ranks. In this way, as long as their initial column alignment is the same, all operands remain aligned even though elements are spread across banks and ranks.\n\n\n\madd{Our current implementation achieves this using coarse-grain memory allocation and coloring. \nWe allocate memory such that operands are aligned at the granularity of one DRAM row for each bank in the system, which we call a \textit{system row} (e.g., 2MB for a DDR4 1TB system). This is simple with the common buddy allocator if allocation granularity is also a system row, and can use optimizations that already exist for huge pages~\cite{yun2014palloc,kwon2016coordinated,gorman2004understanding}. The fragmentation overheads of coarse allocation are negligible because there is little point in attempting NDA execution for small operands.}\n\n\madd{In our baseline address mapping (\fig{fig:baseline_addr_map}), the rank and channel addresses are determined partly by the low-order bits that fall into the frame offset field and partly by the high-order bits that fall into the physical frame number (PFN) field. The frame offsets of operands are kept the same because of the above alignment. On the other hand, PFNs are determined by the OS. Therefore, to keep those high-order bits the same among operands, the Chopim runtime indicates a \textit{shared color} when it requests memory from the OS and uses the same color for all operands of a specific NDA operation. The OS uses the color information to ensure that all operands with the same color (same shared region) follow the same channel and rank interleaving pattern. 
To do this, the OS needs to know which physical address bits are used to select ranks and channels by the host memory controller. Coloring limits the region size because the bits that determine rank and channel must have the same value within the region. Multiple regions can be allocated for the same process. \rev{Though we focus on one address mapping here, we believe our approach of coarse-grain and address-mapping-aware allocation can be generalized and applied to other address mappings as well.}}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\begin{figure*}[t!]\n\t\centering\n\t\begin{minipage}[t]{0.95\textwidth}\n\t\t\begin{minipage}[t]{0.6\textwidth}\n\t\t\n\t\t\t\subfloat [Data layout across ranks for concurrent access.] {\n\t\t\t\t\includegraphics[width=\textwidth]{fig\/rank_layout.pdf}\n\t\t\t\t\label{fig:rank_layout}\n\t\t\t}\n\t\t\end{minipage} %\n\t\t\quad %\n\t\t\begin{minipage}[t]{0.4\textwidth}\n\t\t\t\subfloat [Baseline address mapping.] {\n\t\t\t\t\includegraphics[width=\textwidth]{..\/fig\/baseline_addr_map.pdf}\n\t\t\t\t\label{fig:baseline_addr_map}\n\t\t\t} \\\n\t\t\t\subfloat [Host-side address mapping for bank partitioning.] {\n\t\t\t\t\includegraphics[width=\textwidth]{..\/fig\/hashing_addr_map.pdf}\n\t\t\t\t\label{fig:hashing_addr_map}\n\t\t\t}\n\t\t\end{minipage}\n\t\end{minipage}\n\t\caption{Data layout (shared region) and bank partitioning (partitioned region).}\n\t\label{fig:addr_map}\n\n\end{figure*}\n\n\subsubsection{Bank Partitioning}\n\label{subsec:impl_bpart}\n\nPrior work on bank partitioning relies on the OS to understand how physical addresses are mapped to banks~\cite{mi2010bankpark,jeong2012balancing,liu2012software}. The OS colors pages to assign them to different bank partitions and then allocates frames that map to a specific set of banks for a specific set of colors. Memory accesses for different colors are then isolated to different banks. 
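As a reference point, this classic page-coloring scheme can be sketched as follows. The bit positions are hypothetical and assume the bank index comes directly from low-order PFN bits, which is exactly the assumption that modern hashed mappings and huge pages break:

```python
# Toy model of classic page-coloring bank partitioning. Assumption (not from
# the paper's baseline): the bank index is taken directly from the low-order
# physical-frame-number (PFN) bits.
PAGE_SHIFT = 12      # 4KB pages
BANK_BITS = 4        # 16 banks

def bank_of(paddr):
    # bank index = low-order PFN bits under this simplified mapping
    return (paddr >> PAGE_SHIFT) & ((1 << BANK_BITS) - 1)

def frames_with_color(color, num_frames=64):
    # The OS hands out only frames whose bank bits equal `color`.
    return [f for f in range(num_frames) if f & ((1 << BANK_BITS) - 1) == color]

host_frames = frames_with_color(0)    # host partition -> bank 0
nda_frames = frames_with_color(15)    # NDA partition  -> bank 15
```

Under a hashed mapping, `bank_of` would mix many address bits, so no choice of PFN color isolates a bank, which motivates the mechanism below.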
Colors can be assigned to different cores or threads, or in our case, for host and NDA isolated use. Unfortunately, advanced physical-to-DRAM address mapping functions and the use of 2MB pages prevent prior bank partitioning schemes from working because the physical frame number (PFN) bits that the OS can control can no longer specify arbitrary bank partitions. \n\n\fig{fig:baseline_addr_map} shows an example of a modern physical address to DRAM address mapping \cite{pessl2016drama}. One color bit in the baseline mapping belongs to the page offset field so bank partitioning can, at best, be done at two-bank granularity. More importantly, when huge pages are used (e.g., 2MB), this baseline mapping cannot be used to partition banks at all. \n\nTo overcome this limitation, we propose a new interface that partitions banks into two groups---host- and NDA-reserved banks---with flexible DRAM address mapping and any page size. Specifically, our mechanism requires only that the most significant physical address bits are used solely to determine the DRAM row address, as is common in recent hash mapping functions, as shown in \fig{fig:hashing_addr_map}.\n\nWithout loss of generality, assume 2 banks out of 16 banks are reserved for the NDA. First, the OS splits the physical address space for host and NDA with the host occupying the bottom of the address space: $0-\left(14\times\mathit{(bank\_capacity)}-1\right)$. The rest of the space (with the capacity of 2 banks) is reserved for the NDA and the OS does not use it for other purposes. This guarantees that the most significant bits (MSBs) of the host address are never b'111. In contrast, addresses in the NDA space always have b'111 in their MSBs.\n\nThe OS informs the memory controller that it reserved 2 banks (the topmost banks) for NDAs. Host memory addresses are mapped to DRAM locations using any hardware mapping function, which is not exposed to software and the OS. 
The idea is then to remap addresses that initially fall into NDA banks into the reserved address space that the host is not using. Simple additional logic checks whether the bank ID produced by the initial mapping is an NDA-reserved bank. If it is not, the DRAM address is used as is. If the DRAM address is initially mapped to a reserved bank, the MSBs and the bank bits are swapped. Because the MSBs of a host address are never b'1110 or b'1111, the final bank ID will be one of the host bank IDs. Also, because the bank ID produced by the initial mapping is 14 or 15, the final address is in a row the host cannot access through the initial mapping, so there is no aliasing.\n\n\medskip\n\noindent\textbf{\textit{Host Access to NDA Region.}}\nThe OS does not allocate regular host addresses in the NDA region, but some host requests will read and write NDA data to provide initial inputs to NDA operations and read outputs. Requests with addresses that map to the NDA region use a mapping function that does not hash bank addresses and simply uses the address MSBs for banks. This ensures that NDA addresses only access NDA-reserved banks. Furthermore, the way other address bits are mapped to DRAM addresses is kept simple and exposed to software. \n\n\n\n\subsection{Tracking Global Memory Controller State}\n\label{subsec:track_gstate}\n\nUnlike conventional systems, Chopim also enables an architecture that has two memory controllers (MCs) managing the bank and timing state of each rank. \madd{This is the case when the host continues to directly manage memory even when the memory itself is enhanced with NDAs}, which requires coordinating rank state information. Information about host transactions is easily obtained by the NDA MCs as they can monitor incoming transactions and update the state tables accordingly. However, the host MC cannot track NDA transactions due to command bandwidth limits. 
\n\nTo solve this problem, we leverage the deterministic execution flow of the NDA workloads that we focus on. Once the base addresses of operands and the operation type are determined, NDAs access memory deterministically and are controlled by microcode and a finite state machine (FSM). If these FSMs are also replicated on the host side, the host MC can effectively track all NDA transactions. Also, the host MC knows its next transaction and target rank. With this information, the host MC deduces which NDA will be affected by its transactions and for how long. Thus, the host MC can track the global state in real time without any communication. \madd{Stochastic NDA issue is still easily supported by replicating the pseudo-random number generator.}\n\nFigure \ref{fig:repl_fsm} shows example operations of the replicated FSMs where one rank is being accessed by the host (left) and the other by an NDA (right). When the host accesses rank0, both host and NDA memory controller state is updated based on the issued host command. On the other hand, when the host does not access rank1 but NDA1 does (right), the host-side replicated FSM updates rank1's state in the host memory controller. This mechanism is not required when only a single memory controller exists for each rank, as with a packetized memory interface~\cite{pawlowski2011hybrid,genz2017genz,kim2013memory}.\n\n\medskip\n\noindent\textbf{\textit{Discussion.}} \madd{A current limitation of our replicated-FSM approach is that it applies to workloads with generally data-independent behavior \rev{where execution flow is not determined by data values}. This includes a rich set of applications and kernels of practical importance. We leave extensions to more data-dependent workload behavior to future work. We also note that this particular synchronization problem does not exist in a packetized memory interface, while the other problems we address still do. 
We consider this work a starting point for illuminating and solving a set of problems for NDAs that share their physical memory with the host and where applications tightly collaborate across the host and NDAs.}\n\n\begin{figure}[t!]\n\centering\n \includegraphics[width=0.37\textwidth]{fig\/repl_fsm.pdf}\n\t\caption{Example operation of replicated FSMs.}\n\t\label{fig:repl_fsm}\n\n\end{figure}\n\n\n\n\section{Methodology}\n\label{sec:method}\n\nTable \ref{tab:eval_config} summarizes our system configuration, DRAM timing parameters, energy components, benchmarks, and machine learning configurations. For bank partitioning, we reserve one bank per rank for NDAs and the rest for the host. We use Ramulator \cite{kim2016ramulator} as our baseline DRAM simulator and add the NDA memory controllers and PEs to execute the NDA operations. We modify the memory controller to support the Skylake address mapping~\cite{pessl2016drama} and our bank partitioning and data layout schemes. To simulate concurrent host accesses, we use gem5 \cite{binkert2011gem5} with Ramulator. We choose host applications with varying memory intensity from the \textit{SPEC2006} \cite{henning2006spec} and \textit{SPEC2017} \cite{panda2018wait} benchmark suites and form 9 application mixes with different combinations (Table \ref{tab:eval_config}). Mix0 and mix8 represent two extreme cases with the highest and lowest memory intensity, respectively. Only mix0 is run with 8 cores to simulate under-provisioned bandwidth while other mixes use 4 cores to simulate a more realistic scenario. \bcut{Since Chopim is important only when the host processor and PEs concurrently try to access memory, we only show the results of benchmarks with medium and high memory intensity. 
We also ran simulations with low memory intensity benchmarks and the performance impact due to contention is negligible.} For the NDA workloads, we use DOT and COPY operations to show the impact of extremely low and high write intensity. We use the average gradient kernel (\fig{fig:impl_avg_grad_example_code}) to evaluate collaborative execution. The performance impact of other NDA applications falls between DOT and COPY and is well represented by SVRG {\cite{johnson2013accelerating}}, conjugate gradient (CG) {\cite{jacob2013eigen}}, and streamcluster (SC) {\cite{pisharath2005nu}}.\n\nFor the host workloads, we use Simpoint \cite{hamerly2005simpoint} to find representative program phases and run each simulation until the instruction count of the slowest process reaches 200M instructions. If an NDA workload completes while the simulation is still running, it is relaunched so that concurrent access occurs throughout the simulation time. Since the number of instructions simulated differs across workloads, we measure instructions per cycle (IPC) for host performance. To show how well the NDAs utilize bandwidth, we show bandwidth utilization and compare it with the idealized case where NDAs can utilize all the idle rank bandwidth. \n\n\nWe estimate power with the parameters in Table~\ref{tab:eval_config}. We use CACTI 6.5~\cite{muralimanohar2009cacti} for the dynamic and leakage power of the PE buffer. A sensitivity study for PE parameters shows that their impact is negligible. 
We use CACTI-3DD~\\cite{chen2012cacti} to estimate the power and energy of 3D-stacked DRAM and CACTI-IO~\\cite{jouppi2015cacti} to estimate DIMM power and energy.\n\n\\begin{table}[!t]\n\t\\centering\n\t\\noindent\\resizebox{\\linewidth}{!}{\n\t\t\\tabulinesep=0.6mm\n\t\t\\begin{tabu}{c|c|[1.0pt]c|c|c}\n\t\t\t\\hline\n\t\t\t\\rowfont{\\normalsize}\n \\multicolumn{5}{c}{System configuration} \\tabularnewline\n\t\t\t\\hline\n\t\t\n Processor & \\multicolumn{4}{c}{\\makecell{4-core OoO x86 (8 cores for mix0), 4GHz, Fetch\/Issue width (8), \\\\ LSQ (64), ROB (224)}} \\tabularnewline\n\t\t\t\\hline \n NDA & \\multicolumn{4}{c}{\\makecell{one PE per chip, 1.2GHz, fully pipelined, write buffer (128) (Section {\\ref{sec:implementation}})}} \\tabularnewline\n\t\t\t\\hline \n\t\t\tTLB & \\multicolumn{4}{c}{I-TLB:64, D-TLB:64, Associativity (4)} \\tabularnewline\n\t\t\t\\hline \n\t\t\tL1 & \\multicolumn{4}{c}{\\makecell{32KB, Associativity (L1I: 8, L1D: 8), LRU, 12 MSHRs}} \\tabularnewline\n\t\t\t\\hline \n\t\t\tL2 & \\multicolumn{4}{c}{\\makecell{256KB, Associativity (4), LRU, 12 MSHRs}} \\tabularnewline\n\t\t\t\\hline\n\t\t\tLLC & \\multicolumn{4}{c}{\\makecell{8MB, Associativity (16), LRU, 48 MSHRs, Stride prefetcher}} \\tabularnewline\n\t\t\t\\hline \n\t\t\tDRAM & \\multicolumn{4}{c}{\\makecell{DDR4, 1.2GHz, 8Gb, x8, 2channels $\\times$ 2ranks, \\\\\n\t\t\tFR-FCFS, 32-entry RD\/WR queue, Open policy, \\\\\n\t\t\tIntel Skylake address mapping \\cite{pessl2016drama}}} \\tabularnewline\n\t\t\t\\hline\n\t\t\t\\arrayrulecolor{white}\\hline\n\t\t\t\\arrayrulecolor{white}\\hline\n\t\t\t\\arrayrulecolor{white}\\hline\n\t\t\t\\arrayrulecolor{black}\\hline\n\t\t\t\\rowfont{\\normalsize}\n \\multicolumn{5}{c}{DRAM timing parameters} \\tabularnewline\n\t\t\t\\hline\n\n\t\t\t\\multicolumn{5}{c}{\\makecell{tBL=4, tCCDS=4, tCCDL=6, tRTRS=2, tCL=16, tRCD=16,\\\\\n\t\t\ttRP=16, tCWL=12, tRAS=39, tRC=55, tRTP=9, tWTRS=3,\\\\\n tWTRL=9, tWR=18, tRRDS=4, tRRDL=6, tFAW=26}} 
\\tabularnewline\n\t\t\t\\hline \n\n\t\t\t\\hline\n\t\t\t\\arrayrulecolor{white}\\hline\n\t\t\t\\arrayrulecolor{white}\\hline\n\t\t\t\\arrayrulecolor{white}\\hline\n\t\t\t\\arrayrulecolor{black}\\hline\n\t\t\t\\rowfont{\\normalsize}\n \\multicolumn{5}{c}{Energy Components} \\tabularnewline\n\t\t\t\\hline\n\n\t\t\t\\multicolumn{5}{c}{\\makecell{Activate energy: 1.0nJ, PE read\/write energy: 11.3pJ\/b, \\\\\n\t\t\thost read\/write energy: 25.7pJ\/b, PE FMA: 20pJ\/operation, \\\\\n\t\t\tPE buffer dynamic: 20pJ\/access, PE buffer leakage power: 11mW \\\\\n\t\t\t(Energy\/power of scratchpad memory is same as PE buffer)}} \\tabularnewline\n\t\t\t\\hline \n\t\t\t\n\t\t\t\\hline\n\t\t\t\\arrayrulecolor{white}\\hline\n\t\t\t\\arrayrulecolor{white}\\hline\n\t\t\t\\arrayrulecolor{white}\\hline\n\t\t\t\\arrayrulecolor{black}\\hline\n\t\t\t\\rowfont{\\normalsize}\n \\multicolumn{4}{c|}{Benchmarks} & MPKI\\tabularnewline\n\t\t\t\\hline\n mix0 & \\multicolumn{3}{c|}{\\makecell{mcf\\_r:lbm\\_r:omnetpp\\_r:gemsFDTD\\\\\n bwaves:milc:soplex:leslie3d}} & \\makecell{H:H:H:H\\\\\n H:M:M:M}\\tabularnewline\n\t\t\t\\hline \n\t\t\tmix1 & \\multicolumn{3}{c|}{mcf\\_r:lbm\\_r:omnetpp\\_r:gemsFDTD} & H:H:H:H\\tabularnewline\n\t\t\t\\hline \n\t\t\tmix2 & \\multicolumn{3}{c|}{mcf\\_r:lbm\\_r:gemsFDTD:soplex} & H:H:H:H\\tabularnewline\n\t\t\t\\hline \n\t\t\tmix3 & \\multicolumn{3}{c|}{lbm\\_r:omnetpp\\_r:gemsFDTD:soplex} & H:H:H:H\\tabularnewline\n\t\t\t\\hline \n\t\t\tmix4 & \\multicolumn{3}{c|}{omnetpp\\_r:gemsFDTD:soplex:milc} & H:H:H:M\\tabularnewline\n\t\t\t\\hline \n\t\t\tmix5 & \\multicolumn{3}{c|}{gemsFDTD:soplex:milc:bwaves\\_r} & H:H:M:M\\tabularnewline\n\t\t\t\\hline \n\t\t\tmix6 & \\multicolumn{3}{c|}{soplex:milc:bwaves\\_r:leslie3d} & H:M:M:M\\tabularnewline\n\t\t\t\\hline \n\t\t\tmix7 & \\multicolumn{3}{c|}{milc:bwaves\\_r:astar:cactusBSSN\\_r} & M:M:M:M\\tabularnewline\n\t\t\t\\hline \n mix8 & \\multicolumn{3}{c|}{leslie3d:leela\\_r:deepsjeng\\_r:xchange2\\_r} & 
M:L:L:L\\tabularnewline\n\t\t\t\\hline\n\n \\hline\n\t\t\t\\arrayrulecolor{white}\\hline\n\t\t\t\\arrayrulecolor{white}\\hline\n\t\t\t\\arrayrulecolor{white}\\hline\n\t\t\t\\arrayrulecolor{black}\\hline\n\t\t\t\\rowfont{\\normalsize}\n \\multicolumn{5}{c}{NDA Kernels} \\tabularnewline\n\t\t\t\\hline\n\n \\multicolumn{5}{c}{\\makecell{NDA basic operations (Table {\\ref{tab:nda_ops}}), SVRG (details below), \\\\\n CG (16K ${\\times}$ 16K), and SC (2M ${\\times}$ 128)}} \\tabularnewline\n\t\t\t\\hline\n\n\t\t\t\\hline\n\t\t\t\\arrayrulecolor{white}\\hline\n\t\t\t\\arrayrulecolor{white}\\hline\n\t\t\t\\arrayrulecolor{white}\\hline\n\t\t\t\\arrayrulecolor{black}\\hline\n\t\t\t\\rowfont{\\normalsize}\n \\multicolumn{5}{c}{Machine Learning Configurations} \\tabularnewline\n\t\t\t\\hline\n\n\t\t\t\\multicolumn{5}{c}{\\makecell{Logistic regression with ${\\ell2}$-regularization (10-class classification), ${\\lambda}$=1e-3,\\\\\n\t\t\tlearning rate=best-tuned, momentum=0.9, dataset=cifar10 (50000 ${\\times}$ 3072)}} \\tabularnewline\n\t\t\t\\hline \n\t\t\\end{tabu}\n\t}\n\t\\caption{Evaluation parameters.}\n\t\\label{tab:eval_config} \n\t\\vspace*{-5mm}\n\\end{table}\n\n\n\n\n\\section{Runtime and API}\n\\label{sec:implementation}\n\nChopim is general and helps whenever host\/NDA concurrent access is needed. To make the explanations and evaluation concrete, we use an exemplary \\meadd{interface} design as discussed below and summarized in \\fig{fig:impl_overview}. Command and address signals pass through the NDA memory controllers so that they can track host rank state. 
Processing elements (PEs) in the logic die access data by using their local NDA memory controller (\\fig{fig:baseline_nda}).\n\\bcut{We propose a similar API as other C++ math libraries~\\cite{sanderson2010armadillo,jacob2013eigen,iglberger2012high} for the example use case of accelerating linear algebra operations.} \\fig{fig:impl_avg_grad_example_code} shows example usage of our API for computing the average gradient used in the summarization task of SVRG.\n\n\\begin{figure}[t!]\n\\centering\n\t\\includegraphics[width=0.46\\textwidth]{fig\/impl_overview.pdf}\n\t\\caption{Overview of NDA architecture.}\n\t\\label{fig:impl_overview}\n\t\\vspace*{-4mm}\n\\end{figure}\n\n\nThe Chopim runtime system manages memory allocations and launches NDA operations. NDA operations are blocking by default, but can also execute asynchronously. If the programmer calls an NDA operation with operands from different shared regions (colors), the runtime system inserts appropriate data copies. We envision a just-in-time compiler that can identify such cases and more intelligently allocate memory and regions to minimize copies. For this paper, we do not implement such a compiler. Instead, programs are written to directly interact with a runtime system that is implemented within the simulator.\n\nNDAs operate directly on DRAM addresses and do not perform address translation. To launch an operation, the runtime (with help from the OS) translates the origin of each operand into a physical address, which is then communicated \\meadd{along with a bound} to the NDAs by the NDA controller. The runtime is responsible for splitting a single API call into multiple primitive NDA operations. The NDA operations themselves proceed through each operand with a regular access pattern implemented as microcode in the hardware\\meadd{, which also checks the bound for protection}. 
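The following sketch illustrates how a runtime of this kind might split one API-level operation into per-rank primitive operations, each carrying a base address and a bound for the microcode to check. The function name, descriptor format, and granularities are hypothetical, not Chopim's actual interface:

```python
# Hypothetical sketch: split one API-level vector operation into per-rank
# primitive NDA operations, round-robin across ranks. Each descriptor is
# (rank, base_address, bound); the bound is what the microcode checks.
def split_into_primitives(base_addr, total_bytes, num_ranks, chunk_bytes):
    ops = []
    offset = 0
    rank = 0
    while offset < total_bytes:
        size = min(chunk_bytes, total_bytes - offset)
        ops.append((rank, base_addr + offset, base_addr + offset + size))
        offset += size
        rank = (rank + 1) % num_ranks   # round-robin issue across ranks
    return ops

ops = split_into_primitives(base_addr=0x1000, total_bytes=10 * 2048,
                            num_ranks=4, chunk_bytes=2048)
```

Each descriptor covers a contiguous slice of the operand, so the primitives jointly tile the whole operand without gaps or overlap.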
\bcut{DRAM addresses are computed by following the same physical-to-DRAM mapping function used by the host memory controller (\sect{subsec:impl_data_layout}).} \n\n\n\medskip\n\noindent\textbf{\textit{Optimization for Load-Imbalance.}}\nLoad imbalance occurs when the host does not access ranks uniformly over short periods of time. The AXPY operation (launched repeatedly within the loop shown in \fig{fig:impl_avg_grad_example_code}) is short, and non-uniform access by the host leads to load imbalance among NDAs. A blocking operation waits for \emph{all} NDAs to complete before launching the next AXPY, which reduces performance. \nOur API provides asynchronous launches similar to CUDA streams \ykadd{or OpenMP parallel \texttt{for} with a \texttt{nowait} clause \cite{dagum1998openmp}}. Asynchronous launches can overlap AXPY operations from multiple loop iterations. Any load imbalance is then only apparent when the loop ends. Over such a long time period, load imbalance is much less likely. We implement asynchronous launches using \emph{macro NDA operations}. An example of a macro operation is shown in the loop of \fig{fig:impl_avg_grad_example_code} and is indicated by the \texttt{parallel\_for} annotation. \n\n\bcut{\n\medskip\n\noindent\textbf{\textit{Exploiting Inter-Iteration Locality.}}\nEach NDA PE includes a small 1 KB scratchpad memory (sized equal to a row buffer within the DRAM chip). The runtime leverages this to reorder operations within macro NDA operations. In the AXPY macro operation example, inter-iteration locality exists for vector ${\vec{a}}$. If ${\vec{a}}$ does not fit in the scratchpad memory, matrix ${X}$ is decomposed in the column direction and operations are launched for one column group after another. The locality captured by the scratchpad eliminates writing intermediate results back into DRAM. 
This also reduces write interference (write-to-read and read-to-write turnaround times).\n}\n\n\\begin{figure}[t!]\n\\centering\n\t\\includegraphics[width=0.38\\textwidth]{fig\/avg_grad_example_code.pdf}\n\t\\caption{Average gradient example code. This code corresponds to \\textit{summarization} in SVRG (see Section \\ref{sec:collaboration}).}\n\t\\label{fig:impl_avg_grad_example_code}\n\n\\end{figure}\n\n\n\\medskip\n\\noindent\\textbf{\\textit{Launching NDA Operations.}}\n\\label{subsec:impl_launch}\nNDA operations are launched similarly to Farmahini et al.~\\cite{farmahini2015nda}. A memory region is reserved for accessing control registers of NDAs. NDA packets access the control registers and launch operations. Each packet is composed of the type of operation, the base addresses of operands, the size of data blocks, and scalar values required for scalar-vector operations. On the host side, the \\textit{NDA controller} plays two main roles. First, it accepts acceleration requests, issues commands to the NDAs in the different ranks (in a round-robin manner), and notifies software when a request completes. Second, it extends the host memory controller to coordinate actions between the NDAs and host memory controllers and enables concurrent access. It maintains the replicated FSMs using its knowledge of issued NDA operations and the status of the host memory controller. 
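A minimal sketch of such a launch packet, with the fields listed above; the field names, widths, and MMIO address are hypothetical:

```python
# Illustrative NDA launch packet (field names and MMIO address are
# hypothetical, not Chopim's actual encoding).
from dataclasses import dataclass, field
from typing import List

@dataclass
class NDAPacket:
    opcode: str                        # e.g., "AXPY", "DOT", "COPY"
    operand_bases: List[int]           # base DRAM addresses of operands
    block_size: int                    # size of each data block in bytes
    scalars: List[float] = field(default_factory=list)  # for scalar-vector ops

def launch(control_reg_base, packet, write_mmio):
    # Writing the packet into the memory-mapped control region launches the op.
    write_mmio(control_reg_base, packet)

log = []
launch(0xFFFF0000, NDAPacket("AXPY", [0x1000, 0x2000], 2048, [2.0]),
       lambda addr, pkt: log.append((addr, pkt.opcode)))
```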
\\bcut{The NDA controller is also responsible for throttling specific NDAs if necessary to maintain host performance.}\n\n\\medskip\n\\noindent\\textbf{\\textit{Execution Flow of a Processing Element.}}\n\\label{subsec:impl_pe}\n\n\\begin{figure}[t!]\n\\centering\n\t\\includegraphics[width=0.42\\textwidth]{fig\/ndp_axpy.pdf}\n\t\\caption{PE architecture and execution flow of AXPY.}\n\t\\label{fig:eflow_axpy}\n\t\\vspace*{-4mm}\n\\end{figure}\n\nOur example PE is composed of two floating-point fused multiply-add (FPFMA) units, 5 scalar registers (up to 3 operand inputs and 2 for temporary values), a 1KB buffer for accessing memory, and the 1KB scratchpad memory. The memory access granularity is 8B per chip and the performance of the two FPFMAs per chip matches this data access rate. PEs may be further optimized to support lower-precision operations or specialized for specific use cases, but we do not explore these in this paper as we focus on the new capabilities of Chopim rather than NDA in general.\n\n\\fig{fig:eflow_axpy} shows the execution flow of a PE when executing the AXPY operation. Each vector is partitioned into 1KB batches, matching the DRAM page size per chip. To maximize bandwidth utilization, the vector ${X}$ is streamed into the buffer. Then, the PE opens another row, reads two elements (8 bytes) of vector ${Y}$, and stores them to FP registers. While the next two elements of ${Y}$ are read, a fused multiply-add (FMA) operation is executed. The result is stored back into the buffer and execution continues such that the read-execute-write operations are pipelined. After the result buffer is filled, the PE either writes results back to memory or to the scratchpad. This flow for one 1KB batch is repeated over the rest of the batches. This entire process is stored in PE microcode as the AXPY operation. 
\\meadd{Other operations (coarse or fine grained) are similarly stored and processed from microcode.}\n\n\\bcut{\nNote that, if we only have one NDA bank, changing the access order of two input vectors degrades the performance of AXPY. This is because if vector ${Y}$ is read first and vector ${X}$ next, the row for vector ${Y}$ is closed before it is updated, whereas the reverse order guarantees the row remains open. Also, one optimization is to close a row right after accessing the last column of the row when the row is no longer being used. In AXPY, we always apply this optimization for ${X}$, whereas we apply it for ${Y}$ only after writing is done.\n\nOther NDA operations (\\tab{tab:nda_ops}) follow a similar execution flow. NRM2 is a dot-product of one vector and itself. Therefore, the input to the PE should be written to the buffer and to registers at the same time. NRM2 and DOT require reductions at the end since two FPFMAs operate separately on their own accumulators; the reductions are performed by the runtime system on the host. The input of the SCAL operation is stored directly into the register and the results are written to the buffer.\n} \n\n\\medskip\n\\noindent\\textbf{\\textit{Inter-PE Communication.}}\n\\meadd{NDAs are only effective when they use memory-side bandwidth to amplify that of the host. In the DIMM- and chip-based NDAs, which we target in this paper, general inter-PE communication is therefore equivalent to communicating with the host. Communication in applications that match this NDA architecture is primarily needed for replicating data to localize operands or for global reduction operations, which follow local per-PE reductions.} \n\\medel{There are two types of communication in our case study: data replication and reduction.} In both communication cases, a global view of data layout is needed and, therefore, we enable communication only through the host. 
For instance, after the macro operation in \\fig{fig:impl_avg_grad_example_code}, \\meadd{a global reduction of the PE private copies (\\texttt{a\\_pvt}) accumulates the data for the final result (\\texttt{a}). The reduced result is used by the following NDA operation, requiring replication communication because its data layout has to meet NDA locality requirements with the other NDA operand (\\texttt{w}). Though communicating through the host is expensive, our coarse-grained NDA operations amortize the overhead of this infrequent communication. Importantly}, since this communication can be done as normal DRAM accesses by the host, no change to the memory interface is required.\n\n\n\\section{Related Work}\n\\label{sec:related_work}\n\nTo the best of our knowledge, this is the first work that proposes solutions for near data acceleration while enabling concurrent host and NDA access without data reorganization and in a non-packetized DRAM context. \\ykadd{Packetized DRAM, while scalable, may suffer from 2--4x longer latency than DDR-based protocols even under very low or no load \\cite{hadidi2018performance}. } Many previous studies have influenced our solutions to this unique problem. \n\n\nNear-data acceleration has been studied in a wide range of contexts as the relative cost of data access becomes more and more expensive compared to the computation itself. The nearest place for computation is in DRAM cells \\cite{seshadri2017ambit,li2017drisa,seshadri2015fast} or the crossbar cells with emerging technologies \\cite{li2016pinatubo,chi2016prime,shafiee2016isaac,song2017pipelayer,song2018graphr,sun2017energy,chen2018regan,long2018reram}. Since the benefit of near-data acceleration comes from high bandwidth and low data transfer energy, the benefit becomes larger as computation moves closer to memory. However, area and power constraints are significant, restricting the addition of complex logic. As a result, workloads with simple ALU operations are the main target of these studies. 
\n\n3D stacked memory devices enable more complex logic on the logic die and still exploit high internal memory bandwidth. Many recent studies build on such devices to accelerate diverse applications \\cite{gao2017tetris,kim2016neurocube,drumond2017mondrian,ahn2016scalable,ahn2015pim,guo20143d,hsieh2016transparent,hsieh2016accelerating,liu2017concurrent,pattnaik2016scheduling,zhang2014top,gao2015practical,nair2015active,hong2016accelerating,boroumand2016lazypim,liu2018processing,boroumand2019conda}. However, in these proposals, the main-memory role of the memory devices has received less attention than the acceleration itself. Some prior work \\cite{akin2015hamlet,sura2015data,akin2016data,boroumand2018google} attempts to support both host and NDA access to the same data, but only with data reorganization and in a packetized DRAM context. Pattnaik et al. \\cite{pattnaik2016scheduling} show the potential of concurrently running both the host and NDAs on the same memory. However, they assume an idealized memory system in which there is no contention between NDA and host memory requests. We do not assume this ideal case. The main contributions of Chopim are precisely to provide mechanisms for mitigating interference.\n\nOn the other hand, \\textit{NDA} \\cite{farmahini2015nda}, Chameleon \\cite{asghari2016chameleon}, and MCN DIMM \\cite{alian2018nmp} are based on conventional DIMM devices and change the DRAM design to practically add PEs.\\bcut{\\textit{NDA} finds the places to add TSVs from commodity DDR3 devices and solves the data layout problem by shuffling. It also proposes solutions to switch mode between host and NDA (precharge-all method) and to avoid concurrent host access (rank partitioning). Chameleon finds an opportunity for near-data acceleration in Load-Reduced DIMM and places PEs in data buffer chips. To overcome the command bandwidth bottleneck, they split DQs and use a part of them for transferring memory commands for PEs. 
MCN DIMM realized DIMM-type NDAs by enabling the host and MCN processors to communicate via a network protocol, but over the DDR interface. Each MCN DIMM runs a light-weight OS and acts as a small independent computing node. Based on this prior work, we focus more on host-NDA concurrent access.} Unlike the rank partitioning and coarse-grain mode switching used in the prior work, we let the host and PEs share ranks to maximize parallelism and partition banks to decrease contention. \n\n\n\n\n\n\n\n\n\\section{Chopim}\n\\label{sec:chonda}\n\nWe develop Chopim with four main connected goals that push the state of the art: (1) enable fine-grain interleaving of host and NDA memory requests to the same physical memory devices while mitigating the impact of their contention; (2) permit the use of coarse-grain NDA operations that process long vector instructions\/kernels; (3) simultaneously support the locality needed for NDAs and the sophisticated memory address interleaving required for high host performance; and (4) integrate with both a packetized interface and a traditional host-controlled DDRx interface.\nWe detail our solutions in this section after summarizing the need for a new approach.\n\n\\hpcacut{\nOur approach to managing concurrent access to different data is for NDAs to opportunistically utilize even brief moments when the host does not access memory. To leverage these short idle periods, the overhead of memory ownership switching should be minimized. In this way, the internal memory bandwidth can be fully utilized while having negligible impact on host performance. In this section, we present our mechanisms that enable fine-grain interleaving between host and NDA accesses. Note that we can allocate more time and allow exclusive access for NDA executions based on a certain policy, but we do not explore this in this paper. 
\n\nIn addition, our approach on concurrent access to the single copy of shared data is to use memory controller's address mapping as is while localizing NDA operands at memory allocation and execution time. As no data reorganization is required between host and NDA access phases, this data layout enables concurrent and collaborative processing between the host and NDAs on the shared data. \n}\n\n\\medskip\n\\noindent\\textbf{\\textit{The need for fine-grain access interleaving with opportunistic NDA issue.}}\n\\label{subsec:nda_issue}\nAn ideal NDA opportunistically issues NDA memory requests whenever a rank is idle from the perspective of the host. This is simple to do in a packetized interface where a memory-side controller schedules all accesses, but is a challenge in a traditional memory interface because the host- and NDA-side controllers must be synchronized. Prior work proposed dedicating some ranks to NDAs and some to the host or coarse-grain temporal interleaving~\\cite{farmahini2015nda,asghari2016chameleon}. The former approach contradicts one of our goals as devices are not shared. The latter results in large performance overhead because it cannot effectively utilize periods where a rank is naturally idle due to the host access pattern. \\fig{fig:motiv_rank_idle} shows that for a range of multi-core application mixes (methodology in \\sect{sec:method}), the majority of idle periods are shorter than 100 cycles with the vast majority under 250 cycles. \\emph{Fine-grain access interleaving is therefore necessary. }\n\\vspace*{-2mm}\n\n\\begin{figure}[t!bh]\n\\centering\n\t\\includegraphics[width=0.48\\textwidth]{fig\/motiv_rank_idle.pdf}\n\t\\vspace*{-2mm}\n\t\\caption{Rank idle-time breakdown vs. 
idleness granularity.}\n\t\\label{fig:motiv_rank_idle}\n\t\\vspace*{-2mm}\n\\end{figure}\n\n\n\\medskip\n\\noindent\\textbf{\\textit{The need for coarse-grain NDA vector\/kernel operations.}}\n\\label{subsec:launch_ovhd}\nFine-grain access interleaving is simple if each NDA command only addresses a single cache block region of memory. Such fine-grain NDA operations have indeed been discussed in prior work~\\cite{ahn2015pim,ahn2016scalable,bssync,nai2017graphpim}. One overhead of this fine-grain approach is that of issuing numerous NDA commands, with each requiring a full memory transaction that occupies both the command and data channels to memory. Issuing NDA commands too frequently degrades host performance, while infrequent issue underutilizes the NDAs.\nCoarse-grain NDA vector operations that operate on multiple cache blocks mitigate contention on the channel and improve overall performance. The vector width, ${N}$, is specified for each NDA instruction. As long as the operands are contiguous in the DRAM address space, one NDA instruction can process numerous data elements without occupying the channel. Coarse-grain NDA operations are therefore desirable, but \\emph{introduce the data layout, memory contention, and host--NDA synchronization challenges which Chopim solves}.\n\n\n\\subsection{Localizing NDA Operands while Distributing Host Accesses}\n\\label{subsec:data_layout}\nTo execute the N-way NDA vector instructions, all the operands of each NDA instruction must be fully contained in a single rank \\meadd{(single PE)}. If necessary, data is first copied from other ranks prior to launching an NDA instruction. If the reuse rate of the copied data is low, this copying overhead will dominate the NDA execution time and contention on the memory channel will increase due to the copy commands. \n\n\\emph{We solve this problem} in Chopim by laying out data such that all the operands are localized to each NDA at memory allocation time. 
Thus, copies are not necessary. This is challenging, however, because the host memory controller uses complex address interleaving functions to maximally exploit channel, rank, and bank parallelism for arbitrary host access patterns. Hence, arrays that are contiguous in the host physical address space are not contiguous in physical memory and are shuffled across ranks.\\medel{, possibly in a physical-address dependent manner.} \nThis challenge is illustrated in the left side of \\fig{fig:rank_layout}, where the two operands of an NDA instruction are shuffled differently across ranks and banks. The layout resulting from our approach is shown at the right of the figure, where arrays (operands) are still shuffled, but both operands follow the same pattern and remain correctly aligned to NDAs without copy operations. Note that alignment is at rank granularity because a rank corresponds to an NDA partition. \n\n\\medskip\n\\noindent\\textbf{\\textit{Data layout across ranks.}} \n\nWe rely on the NDA runtime and OS to use a combination of coarse-grain memory allocation and coloring to ensure all operands of an NDA instruction are interleaved across ranks the same way \\meadd{and are thus local to a PE}. First, the runtime allocates memory for NDA operands such that they are aligned at the granularity of one DRAM row for each bank in the system, which we call a \\textit{system row} (e.g., 2MiB for a DDR4 1TiB system). For all the address interleaving mechanisms we are aware of~\\cite{pessl2016drama,liu2018get}, this ensures that NDA operands are locally aligned, as long as ranks are also kept aligned. To maintain rank alignment, we rely on OS page coloring. 
We explain this feature below using the Intel Skylake address mapping~\\cite{pessl2016drama} as a concrete and representative interleaving mapping (\\fig{fig:baseline_addr_map}).\n\nIn this mapping, rank and channel addresses are determined partly by the low-order bits that fall into the frame offset field and partly by the high-order bits that fall into the physical frame number (PFN) field. Frame offsets are kept the same because of the coarse-grain alignment. The OS colors allocations such that the PFN bits that determine rank and channel are aligned for a particular color; which physical address bits select ranks and channels can be reverse engineered if necessary~\\cite{pessl2016drama}. The Chopim runtime indicates a \\textit{shared color} when it requests memory from the OS and specifies the same color for all operands of an instruction. The runtime can use the same color for many operands to minimize copies needed for alignment. In our baseline system, there are 8 colors and each color corresponds to a shared region of memory of $4$GiB. Multiple regions can be allocated for the same process. Though we focus on one address mapping here, our approach works with any linear address mapping described in prior work~\\cite{pessl2016drama,liu2018get} as well.\n\nNote that coarse-grain allocation is simple with the common buddy allocator if allocation granularity is also a system row, and can use optimizations that already exist for huge pages~\\cite{yun2014palloc,kwon2016coordinated,gorman2004understanding}. The fragmentation overheads of coarse allocation are similar to those with huge pages and we find that they are negligible because coarse-grain NDA execution works best when processing long vectors. 
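To make the rank-alignment argument concrete, the toy model below reduces the interleaving function to a single XOR of one frame-offset bit and one PFN bit; the bit positions and the two-rank mapping are illustrative assumptions (real functions XOR several bits of each kind), but the conclusion carries over: system-row-aligned bases whose color (PFN) bits match land in the same rank for equal offsets.

```python
SYSTEM_ROW = 1 << 21  # 2 MiB allocation granularity (one "system row")

def rank_of(paddr: int) -> int:
    """Toy rank function: XOR of one frame-offset bit (bit 8) and one
    PFN bit (bit 21).  Real mappings XOR several bits of each kind."""
    return ((paddr >> 8) & 1) ^ ((paddr >> 21) & 1)

def operands_rank_aligned(base_a: int, base_b: int, n_bytes: int,
                          step: int = 64) -> bool:
    """True if equal offsets into the two operands always hit the same rank."""
    return all(rank_of(base_a + off) == rank_of(base_b + off)
               for off in range(0, n_bytes, step))

# System-row-aligned bases whose PFN (color) bits match stay rank-aligned:
assert operands_rank_aligned(0, 2 * SYSTEM_ROW, 1 << 20)
# A base whose color bit differs (bit 21 flipped) breaks the alignment:
assert not operands_rank_aligned(0, SYSTEM_ROW, 1 << 20)
```

Coarse-grain allocation fixes the frame-offset contribution, and coloring fixes the PFN contribution, so the XOR is identical for equal offsets into both operands.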
\n\n\n\\medskip\n\\noindent\\textbf{\\textit{Data layout across DRAM chips.}}\nIn the baseline system, each 4-byte word is striped across multiple chips, whereas in our approach each word is located in a single chip so that NDAs can access words from their local memory. Both the host and NDAs can access memory without copying or reformatting data (as required by prior work~\\cite{farmahini2015nda}). Memory blocks still align with cache lines, so this layout change is not visible to software. \\bcut{This layout precludes the critical word first optimization from DRAM, but recent work concludes the impact is minimal because the relative latency difference in current memory systems is very small (e.g.,~\\cite{yoon2012dgms}).} Note that this data layout does not impact the host memory controller's ECC computation (e.g. Chip-kill~\\cite{dell1997white}) because ECC protects only bits and not how they are interpreted. For NDA accesses, we rely on in-DRAM ECC with its limited coverage. We do not innovate in this respect and leave this problem for future work.\n\n\\begin{figure}[t!]\n\\centering\n \\includegraphics[width=0.52\\textwidth]{fig\/rank_layout.pdf}\n\t\\caption{Example data layout across ranks for concurrent access of the COPY operation (B[i] = A[i]). With naive data layout (left), elements with the same index are located in different ranks. With our proposed mechanism (right), elements with the same index are co-located. 
NDAs access contiguous columns starting from the base of each vector.}\n\t\\label{fig:rank_layout}\n\\end{figure}\n\n\\begin{figure}[t!]\n\\centering\n \\begin{minipage}[t]{0.4\\textwidth}\n\t\t\t\\subfloat [Baseline (Skylake~\\cite{pessl2016drama})] {\n\t\t\t\t\\includegraphics[width=\\textwidth]{fig\/baseline_addr_map.pdf}\n\t\t\t\t\\label{fig:baseline_addr_map}\n\t\t\t} \\\\\n\t\t\t\\subfloat [Proposed (for bank partitioning)] {\n\t\t\t\t\\includegraphics[width=\\textwidth]{fig\/hashing_addr_map.pdf}\n\t\t\t\t\\label{fig:hashing_addr_map}\n\t\t\t}\n\t\t\\end{minipage}\n\t\\caption{Baseline and proposed host-side address mapping.}\n\t\\label{fig:addr_map}\n\t\\vspace*{-4mm}\n\\end{figure}\n\n\n\\subsection{Mitigating Frequent Read\/Write Penalties}\n\\label{subsec:block_nda_write}\n\nThe basic memory access scheduling policy we use for Chopim is to always prioritize host memory requests, yet aggressively leverage unutilized rank bandwidth by issuing NDA requests whenever possible. That is, NDAs wait when incoming host requests are detected, but otherwise always issue their memory requests to maximize their bandwidth utilization and performance.\nOne potential problem is that an NDA request issued in one cycle may delay a host request that could have issued in one of the following cycles otherwise.\n\nWe find that NDAs infrequently issue row commands (ACT and PRE). We therefore prioritize host memory commands over any NDA row command to the same bank. This has negligible impact on NDA performance in our experiments.\n\nWe also find that read transactions of NDAs have only a small impact on following host commands. NDA write transactions, however, can have a large impact on host performance because of the read\/write-turnaround penalties that they frequently require. 
While the host mitigates turnaround overhead by buffering operations with caches and write buffers~\\cite{stuecheli2010virtual,ahn2006design}, the host and NDAs may interleave different types of transactions when accessing memory in parallel. We find that NDA writes interleaved with host reads degrade performance the most. \\emph{As a solution,} we introduce two mechanisms to selectively throttle NDA writes. \n\n\nOur first mechanism throttles the rate of NDA writes by issuing them with a predefined probability. We call this mechanism \\textit{stochastic NDA issue}. Before issuing a write transaction, the NDAs both detect if a rank is idle and flip a coin to determine whether to issue the write. By adjusting the coin weight, the performance of the host and NDAs can be traded off: higher write-issue probability leads to more frequent turnarounds while a lower probability throttles NDA progress. Deciding how much to throttle NDAs requires analysis or profiling, and we therefore propose a second approach as well. \n\nOur second approach does not require tuning, and we empirically find that it works well. In this \\emph{next rank prediction} approach, the memory controller inhibits NDA write requests when more host read requests are expected; the controller stalls the NDA in lieu of providing an NDA write queue. In a packetized interface, the memory controller schedules both host and NDA requests and is thus aware of potential required turnarounds. The traditional memory interface, however, is more challenging as the host controller must explicitly signal the NDA controller to inhibit its write request. This signal must be sent ahead of the regular host transaction because of bus delays.\n\nWe use a very simple predictor that inhibits NDA write requests in a particular rank when the oldest outstanding host memory request to that channel is a read to that same rank. 
Specifically, the NDA controller examines the target rank of the oldest request in the host memory controller transaction queue. Then, it signals to the NDAs in that rank to stall their writes. For now, we assume that this information is communicated over a dedicated pin and plan to develop other signaling mechanisms that can piggyback on existing host DRAM commands at a later time. Our experiments with an FRFCFS~\\cite{frfcfs} memory scheduler at the host show that this simple predictor works well and achieves performance that is comparable to a tuned stochastic issue approach.\n\n\n\n\n\\subsection{Partitioning into Host and Shared Banks}\n\\label{subsec:impl_bpart}\n\nIn addition to read\/write-turnaround overheads, concurrent access also degrades performance by decreasing DRAM row access locality. When the host and NDAs interleave accesses to different rows of the same bank, frequent bank conflicts occur. To avoid this bank contention, we propose using bank partitioning to limit bank interference to only those memory regions that must concurrently share data between the NDAs and the host. This is particularly useful in colocation scenarios when only a small subset of host tasks utilize the NDAs. However, existing bank partitioning mechanisms~\\cite{mi2010bankpark,jeong2012balancing,liu2012software} are incompatible both with huge pages and with sophisticated DRAM address interleaving schemes.\n\nBank partitioning relies on the OS to color pages, where colors can be assigned to different cores or threads or, in our case, used to distinguish banks reserved for the host from banks that may be shared. The OS then maps pages of different colors to frames that map to different banks. \n\\fig{fig:baseline_addr_map} shows an example of a modern physical address to DRAM address mapping \\cite{pessl2016drama}. One color bit in the baseline mapping belongs to the page offset field, so prior bank partitioning schemes can, at best, be done at two-bank granularity. 
More importantly, when huge pages are used (e.g., 2MiB), this baseline mapping cannot be used to partition banks at all. \n\nTo overcome this limitation, we propose a new interface that partitions banks into two groups---host-reserved and shared banks---with flexible DRAM address mapping and any page size. Specifically, our mechanism requires only that the most significant physical address bits are used solely to determine the DRAM row address, as is common in recent hash mapping functions, as shown in \\fig{fig:hashing_addr_map} \\cite{pessl2016drama}.\n\nWithout loss of generality, assume 2 banks out of 16 banks are reserved for the shared data. First, the OS splits the physical address space into a host-only and a shared memory region, with the host-only region occupying the bottom of the address space: $0-\\left(14\\times\\mathit{(bank\\_capacity)}-1\\right)$. The rest of the space (with the capacity of 2 banks) is reserved for the shared data and the OS does not use it for other purposes. This guarantees that the most significant bits (MSBs) of a host-only address are never b'111. In contrast, addresses in the shared space always have b'111 in their MSBs. \n\nThe OS informs the memory controller that it reserved 2 banks (the top-most banks) for the shared memory region. Host-only memory addresses are mapped to DRAM locations using any hardware mapping function, which is not exposed to software and the OS. The idea is then to remap addresses that initially fall into shared banks into the reserved address space that the host is not using. Additional simple logic checks whether the bank ID of the initial DRAM address mapping is a bank reserved for the shared region. If it is not, the DRAM address is used as is. If the DRAM address is initially mapped to one of the reserved banks, the MSBs and the bank bits are swapped. Because the MSBs of a host address are never b'1110 or b'1111, the final bank ID will be one of the host-only bank IDs. 
Also, because the bank ID of the initial mapping result is 14 or 15, the final address is in a row the host cannot access with the initial mapping and there is no aliasing. Note that the partitioning decision can be adjusted, but only if all affected memory is first cleared. \n\n\\subsection{Tracking Global Memory Controller State}\n\\label{subsec:track_gstate}\n\nUnlike conventional systems, Chopim also enables an architecture that has two memory controllers (MCs) managing the bank and timing state of each rank. This is the case when the host continues to directly manage memory even when the memory itself is enhanced with NDAs. This requires coordinating rank state information between controllers. \\fig{fig:repl_fsm} shows how MCs on both sides of a memory channel track global memory controller state. Information about host transactions is easily obtained by the NDA MCs as they can monitor incoming transactions and update the state tables accordingly (left). However, the host MC cannot track all NDA transactions due to command bandwidth limits.\n\nTo solve this problem, we replicate the finite-state machines (FSMs) of NDAs and place them in the host-side NDA controller. When an NDA instruction is launched, the FSMs on both sides are synchronized. We rely on the already-synchronized DDR interface clock for FSM synchronization. Whenever an NDA memory transaction is issued, the host-side FSM also updates the state table in the host MC without communicating with the NDAs (right). If a host transaction blocks NDA transactions in one of the ranks, that transaction will be visible to both FSMs. Replicated FSMs track the NDA write buffer occupancy and detect when the write-buffer draining starts and ends to trigger write throttling. 
The area and power overheads of replicating FSMs are negligible (40-byte microcode store and 20-byte state registers per rank (i.e., per NDA)).\n\\meadd{\\emph{Our evaluation uses this approach to enable a DDR4-based NDA-enabled main memory and all our experiments rely on this.}}\n\n\n\\begin{figure}[t!]\n\\centering\n \\includegraphics[width=0.42\\textwidth]{fig\/repl_fsm.pdf}\n\t\\caption{Global MC state tracking when the host (left) and NDAs (right) issue memory commands. The replicated FSMs are synchronized by using the DDR interface clock.}\n\t\\label{fig:repl_fsm}\n\t\\vspace*{-2mm}\n\\end{figure}\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\nDespite over 100 years of intense experimental and theoretical efforts, the origin of Galactic cosmic rays (GCRs) has still not been unambiguously identified. At energies above a few tens of GeV, much progress has been made in the last couple of years, thanks to direct observations by high-precision, high-statistics experiments like AMS-02 or PAMELA and the study of gamma-rays by \\textit{Fermi}-LAT and Cherenkov telescopes~\\cite{gabici2019}. At lower energies, however, the situation is still very much unclear. Until recently, solar modulation, that is, the suppression of intensities due to interactions with the magnetised solar wind, hampered the study of GCRs at energies around a GeV and below~\\cite{potgieter2013}. Modelling of the transport of these particles therefore essentially relied on extrapolations from higher energies.\n\nIn 2013, however, the first direct observations of interstellar spectra by Voyager~1 were published and it became clear that simple extrapolations from higher energies fail~\\cite{stone2013}. Specifically, in order to fit both Voyager~1 and \\mbox{AMS-02} data, simple diffusive transport models overpredict the intensities at Voyager energies (e.g.~\\cite{Vittino:2019yme}). 
While phenomenological models can add a break in the source spectra around a GeV in an \\emph{ad hoc} fashion, the physical interpretation of such a break is rather questionable~\\cite{cummings2016,orlando2018,boschini2018a,boschini2018b,johannesson2018,bisschoff2019}. In fact, we would maintain that no convincing explanation of such a break has been put forward to date.\n\nThis issue is far from academic since the energy range affected is important for a number of questions. \nIn fact, most of the energy density of GCRs is contributed in the energy range around a GeV and, depending on the spectrum, possibly below.\nCorrespondingly, different spectra imply different power requirements for the sources, which provide helpful clues on the nature of GCR acceleration \\cite{ginzburg1964,recchia2019}. Moreover, GCRs are the prime agent of ionisation in dense molecular clouds (MCs), and recently the ionisation rates inferred from nearby MCs have been shown to be in strong tension with the local interstellar spectra as measured by Voyager~1 \\cite{phan2018,silsbee2019,padovani2020}. Furthermore, diffuse emission in radio waves and MeV gamma-rays is sensitive to this energy range (e.g.~\\cite{orlando2018}). The diffuse radio background constitutes the dominant foreground for upcoming cosmological studies of the epoch of reionisation (e.g.~\\cite{Rao:2016xre}) and diffuse gamma-rays for proposed MeV missions (eAstrogam~\\cite{DeAngelis:2016slk}, AMEGO~\\cite{2019BAAS...51g.245M}). Lastly, the current picture of GCRs is simply incomplete if one cannot explain cosmic rays at MeV energies.\n\nAn important effect for MeV GCRs that has been ignored in the literature is due to the discrete nature of sources. Instead, the distribution of sources in position and time is oftentimes modelled as smooth. 
That is, the predicted cosmic ray density $\\psi$ is the solution of the transport equation with a source term $q$ that is a smooth function of position ($r$ and $z$), energy $E$ and time $t$,\n\\begin{equation}\n\\frac{\\partial \\psi}{\\partial t}+\\frac{\\partial}{\\partial z}\\left(u \\psi \\right) -D\\nabla^2 \\psi + \\frac{\\partial }{\\partial E}\\left(\\dot{E}\\psi\\right)=q(r, z, E, t) \\, . \\label{eq:transport}\n\\end{equation}\nHere, $u=u(z)$ is the advection velocity profile with only a component perpendicular to the Galactic disk, $D=D(E)$ is the isotropic and homogeneous diffusion coefficient, and $\\dot{E}$ describes the energy loss rate for GCRs both inside the Galactic disk and in the magnetised halo. Note that it might be more customary to formulate Eq.~\\eqref{eq:transport} in terms of momentum (see \\footnote{See Supplemental Material at \\url{http:\/\/link.aps.org\/supplemental\/} for some discussions with supporting figures and tables, which includes Refs. \\cite{strong1998,schlickeiser1999,mertsch2020}.} for the transformation to kinetic energy).\n\nEven though the sources are likely separate, discrete objects like supernova remnants (SNRs), the approximation of a smooth source density is admissible at GeV energies, since the transport distances and times exceed the typical source separations and ages. However, if energy losses reduce the propagation times and distances, this approximation breaks down and instead the discrete nature of the sources needs to be taken into account. This can be done by replacing the smooth source density from before by a sum of individual delta-functions in distance and age,\n\\begin{equation}\nq(r, z, E, t) = \\sum_{i=1}^{N_\\text{s}} Q(E)\\frac{\\delta(r - r_i)}{2\\pi r_i}\\delta(z-z_i)\\delta(t - t_i) \\, .\n\\end{equation}\n$Q(E)$ denotes the spectrum that an individual source injects into the ISM. 
The total intensity from $N_{\\text{s}}$ sources is then just the sum over the Green's function $\\mathcal{G}(r, z, E; r_i, z_i, t-t_i)$ of Eq.~\\eqref{eq:transport} at the position of the solar system,\n\\begin{equation}\n\\psi = \\sum_i \\mathcal{G}(r=0, z=z_\\odot, E; r_i, z_i, t-t_i) \\, , \\label{eq:stochasticity}\n\\end{equation}\nwhere $z=z_{\\odot}\\simeq 14$ pc is the vertical offset of the solar system from the Galactic mid-plane \\cite{skowron2019}. An example where this approach has been followed is that of high-energy electrons and positrons at hundreds of GeV and above, which lose energy due to the synchrotron and inverse Compton processes~\\cite{mertsch2011}, but ionisation losses also severely limit the propagation of MeV GCRs. Predicting their local intensities therefore requires rather precise knowledge of the ages and distances of the sources. While some young and nearby sources might be known, catalogues of such sources remain necessarily incomplete, in particular with respect to far away and old sources.\n\nInstead, the distribution of sources can be considered a statistical ensemble, thus opening the path towards a statistical modelling of GCR intensities. Operationally, one draws a set of source distances and ages from the statistical probability density function (PDF). Adding up their intensities results in a prediction for this given realisation of the sources. Repeating this procedure for a large number of realisations, one can estimate the distribution of intensities. The first moment and second central moment of this distribution are the expectation value and the variance. The expectation value $\\langle \\psi \\rangle$, obtained by averaging over many realizations, approaches the solution of the GCR transport equation~\\eqref{eq:transport} when the smooth source PDF, from which the individual source distances and ages are drawn, is used as the source term $q$. 
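The procedure just described can be sketched in a few lines of code. The Green's function below is a deliberately simplified stand-in (free 3D diffusion from an instantaneous point source, with no advection, boundaries or energy losses), and all numbers are illustrative placeholders rather than the parameters of our model:

```python
import numpy as np

rng = np.random.default_rng(0)

def greens_toy(d, t, D=1.0):
    # Toy stand-in for the Green's function of the transport equation:
    # free 3D diffusion from an instantaneous point source at distance d
    # and age t (no advection, boundaries or energy losses).
    return np.exp(-d**2 / (4.0 * D * t)) / (4.0 * np.pi * D * t) ** 1.5

def one_realisation(n_sources=500, d_max=10.0, t_max=100.0):
    # Draw source distances (uniform in volume) and ages (uniform in time)
    # from the smooth PDF, then superpose the Green's functions.
    d = d_max * rng.random(n_sources) ** (1.0 / 3.0)
    t = t_max * rng.random(n_sources)
    return greens_toy(d, t).sum()

# Repeating for many realisations yields an estimate of the intensity PDF.
psi = np.array([one_realisation() for _ in range(2000)])
print(np.mean(psi), np.median(psi))
```

Occasional young and nearby sources produce pronounced outliers, so the resulting ensemble is typically right-skewed.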
However, as it turns out, the statistics of the intensities is markedly non-Gaussian, with a divergent second moment. This is due to the long power-law tails of the intensity PDF. Its asymmetric shape renders the expectation value different from the median and from the maximum of the distribution \\cite{nolan2020}.\n\nIn this \\emph{letter}, we model the intensities of GCR protons and electrons between $1 \\, \\text{MeV}$ and $10 \\, \\text{GeV}$ taking into account the stochasticity induced by the discreteness of sources. Consequently, our predictions will be probabilistic. We will illustrate that the expectation value is a bad estimator for the intensities in individual realisations. For instance, for low enough energies the expectation value is outside the $68\\%$ uncertainty band. Furthermore, its spectral shape is markedly different from that of the intensity in any individual realisation. Finally, we stress that the expectation value does not reproduce the data either unless an artificial break is added to the source spectrum. Instead, we suggest considering the median of the intensity PDF as a better measure of what a ``typical'' intensity will look like, and as the reference intensity around which the intensities from all realisations are distributed. Interestingly, the data for protons and electrons fall squarely within the uncertainty bands. We thus conclude that a model without artificial breaks is to be preferred in explaining the Voyager~1 and AMS-02 data as long as the stochasticity effect is taken into account.\n\n\n\\section{Modelling}\n\\label{sec:stochastic}\n\nEquation~\\eqref{eq:transport} is solved numerically assuming GCRs propagate within a finite cylindrical region with height $2L\\simeq 8$ kpc and radius $r_{max}\\simeq$ 10 kpc centred on the source. The other parameters of our model are chosen such that the most probable values of the intensity are compatible with the observational data. 
Specifically, the advection velocity is assumed to have the following profile $u(z)=u_0\\sgn(z)$ with $u_0=16$ km\/s, where $\\sgn(z)$ is the sign function. We also assume a diffusion coefficient of the form $D(E)\\sim \\beta\\gamma^{\\delta}$ as suggested in \\citep{schlickeiser2010}, where $\\beta=v\/c$ is the ratio between the particle's speed and the speed of light and $\\gamma$ is the particle's Lorentz factor (note that assuming the diffusion coefficient to scale with rigidity might not qualitatively alter the results \\cite{Note1}). \nRecent analyses of GCRs seem to suggest slightly different values for $\\delta$ depending on whether or not the unstable isotope $^{10}$Be is taken into account \\cite{evoli2019,evoli2020,weinrich2020}. However, the overall results for the local spectra would remain qualitatively unchanged for different values of $\\delta$ if we slightly modified the injection spectra. In the following, we shall adopt $\\delta=0.63$ and normalize the diffusion coefficient such that $D(E=10\\textrm{ GeV})\\simeq 5\\times 10^{28}$ cm$^2$\/s for both species \\citep{evoli2019}. We caution that the diffusion coefficient in the disk and in the halo could in principle be different and so our parametrisation is to be regarded as a suitably defined average. \n\nLow-energy GCRs lose energy mostly due to ionisation interactions with the neutral gas in the disk as discussed above. There are also proton-proton interactions and radiative energy losses at high energies. All the energy loss mechanisms, apart from synchrotron and inverse Compton, are effective only within the disk of size $2h\\simeq 300$ pc. Moreover, the rate of energy loss depends also on the average number density of the hydrogen atoms in the disk. We adopt $n_\\text{H}=0.9$ cm$^{-3}$ corresponding to a surface density of 2 mg\/cm$^{2}$, which is roughly the observed value \\citep{ferriere2001}. 
The specific forms of the energy loss rates are collected from \\citep{schlickeiser2002,mertsch2011,krakau2015,evoli2017} (see also \\cite{Note1}). \n\nWe also take into account the adiabatic energy loss due to advection with the approximation $|\\dot{E}_{ad}|=2pv u_0\\delta(z) \\simeq pv u_0\/(3h)$ \\cite{jaupart2018}. As for the injection spectrum, we shall adopt the following power-law form in momentum down to the kinetic energy of 1 MeV:\n\\begin{eqnarray}\nQ(E)=\\frac{\\xi_{CR}E_{SNR}}{(mc^2)^2\\Lambda\\beta}\\left(\\frac{p}{mc}\\right)^{2-\\alpha},\\label{eq:source_function}\n\\end{eqnarray} \nwhere $\\xi_{CR}=8.7\\%$ and $\\xi_{CR}=0.55\\%$ are the acceleration efficiencies of the source for GCR protons and electrons, respectively, $E_{SNR}\\simeq 10^{51}$ erg is the total kinetic energy of the supernova explosion, $m$ is the mass of the GCR species of interest, and\n\\begin{eqnarray}\n\\Lambda=\\int^{p_{max}}_{p_{min}}\\left(\\frac{p}{mc}\\right)^{2-\\alpha}\\left[\\sqrt{\\left(\\frac{p}{mc}\\right)^2+1}-1\\right]\\frac{\\df p}{mc}.\\label{eq:Lambda_Q}\n\\end{eqnarray}\nWe shall take $\\alpha=4.23$ as suggested by the fit at high energies \\cite{evoli2019}. Such a power-law in momentum is preferred by the commonly accepted theory of diffusive acceleration at SNR shocks \\citep{malkov2001,blasi2013}. Even though the extension of the spectrum down to 1 MeV seems questionable, there is observational evidence of enhanced ionisation rates in the vicinity of SNRs, indicating the presence of low-energy GCRs accelerated by these objects \\citep{vaupre2014,gabici2015,phan2020}. Note that we neglect stochastic re-acceleration for simplicity; this process might be examined in future work. 
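For a numerical cross-check, the normalisation integral $\\Lambda$ and the shape of $Q(E)$ are easy to evaluate. In the sketch below (for protons) the upper momentum cutoff, taken at a kinetic energy of 1 PeV, is our own assumption since $p_{max}$ is not specified here, and the mixed units (erg for $E_{SNR}$, GeV for $mc^2$) mean that only the spectral shape should be read off:

```python
import numpy as np

MC2 = 0.938272   # proton rest energy [GeV]
ALPHA = 4.23     # injection index in momentum

def x_of_T(T):
    # Dimensionless momentum x = p/(mc) at kinetic energy T [GeV]
    return np.sqrt((1.0 + T / MC2) ** 2 - 1.0)

# Normalisation integral Lambda: p_min corresponds to T = 1 MeV; the
# p_max cutoff (T = 1 PeV) is an assumption made for this sketch.
x = np.logspace(np.log10(x_of_T(1e-3)), np.log10(x_of_T(1e6)), 20000)
f = x ** (2.0 - ALPHA) * (np.sqrt(x * x + 1.0) - 1.0)
LAM = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))  # trapezoid rule

def Q(T, xi=0.087, e_snr=1e51):
    # Source spectrum Q(E); e_snr is in erg while MC2 is in GeV, so the
    # absolute scale is arbitrary here -- only the shape matters.
    xx = x_of_T(T)
    beta = xx / np.sqrt(1.0 + xx * xx)   # v/c expressed through x
    return xi * e_snr / (MC2 ** 2 * LAM * beta) * xx ** (2.0 - ALPHA)

T = np.logspace(-3, 3, 7)   # kinetic energies from 1 MeV to 1 TeV
print(LAM, Q(T))
```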
\n\n\\begin{figure*}[htpb]\n\\centerline{\n\\includegraphics[width=3.5in, height=2.8in]{fg_jE_p_show_SNR_rh_1000_vA_16_h_151_nH_90_zad_10_Dgam_alpha_423_ep_87_zu_39_in.png}\n\\includegraphics[width=3.5in, height=2.8in]{fg_jE_e_show_SNR_rh_1000_vA_16_h_151_nH_90_zad_10_Dgam_alpha_423_ep_5_zu_39_in.png}\n}\n\\caption{Stochastic fluctuations of GCR protons (left panel) and electrons (right panel) in comparison with data from Voyager~1~\\cite{cummings2016} (blue) and AMS-02~\\cite{AMS2014,AMS2015} (green). The dotted and solid black curves are respectively the expectation values and the median of the intensities. The shaded regions are the 95\\% and 68\\% uncertainty ranges.}\n\\label{fg:stochastic}\n\\end{figure*}\n\nWe have built up a statistical ensemble by generating a large number $N_\\text{r}=2000$ of realisations, in each drawing a large number of sources $N_\\text{s}$ from the spatial distribution following a spiral pattern \\citep{vallee2005} with a radial modulation \\citep{case1998}, as employed in~\\cite{mertsch2011}, and with homogeneous distributions for the time since injection and for the vertical positions of the sources. We limit ourselves to $r_i^{(n)}< r_{max}=10$ kpc and the time since injection $\\tau_{i}^{(n)}<\\tau_{max}=10^8$ yr since older and more distant sources would not contribute significantly. The total number of discrete sources in each realisation could be estimated roughly as $N_{s}=\\mathcal{R}_{s}\\tau_{max}r_{max}^2\/R_d^2\\simeq 1.33\\times 10^6$, where $\\mathcal{R}_{s}\\simeq 0.03$ yr$^{-1}$ is the source rate and $R_d\\simeq 15$ kpc is the radius of the Galactic disk. We adopt $2h_s\\simeq 80 \\, \\text{pc}$ for the vertical extension of sources as expected for core-collapse supernovae (CCSNe) \\citep{prantzos2011}.\n\nWe thus obtain an ensemble of intensities \\mbox{$j^{(n)} = v\/(4 \\pi) \\psi^{(n)}$} for the individual source realisations $n$ that we can characterise statistically. 
For instance, a histogram of these intensities at a specific energy could serve as an estimate of the intensity PDF $p(j)$. Note that the expectation value of the intensity $\\langle j \\rangle = \\int \\mathrm{d} j \\, j \\, p(j)$ is equal to the intensity predicted for the smooth source density of Ref.~\\cite{mertsch2011}. We have found $p(j)$ to be extremely non-Gaussian with power-law tails, e.g. $p(j) \\propto j^{-2}$ for $j \\gg \\langle j \\rangle$ at $E=1$ MeV. In fact, these distribution functions are not only asymmetric but they also do not have a well-defined second moment, as shown for similar analyses at high energies \\citep{mertsch2011,blasi2012,bernard2012,genolini2017}. We shall, therefore, specify the uncertainty intervals of the intensity using the percentiles as in \\citep{mertsch2011}, e.g. $j_{a\\%}$ is defined via $a\\%=\\int_0^{j_{a\\%}} \\df j \\, p(j)$. The $68\\%$ and $95\\%$ uncertainty ranges of the intensity $j(E)$ are then $\\mathcal{I}_{68\\%}=\\left[j_{16\\%},j_{84\\%}\\right]$ and $\\mathcal{I}_{95\\%}=\\left[j_{2.5\\%},j_{97.5\\%}\\right]$.\n\n\\section{Results and Discussion}\n\\label{sec:results}\n\nWe present in Fig. \\ref{fg:stochastic} the $95\\%$ and $68\\%$ uncertainty bands of the intensities for both GCR protons (left panel) and electrons (right panel) in the energy range from 1 MeV to about 10 GeV together with the expectation values of the intensities and data from Voyager~1~\\citep{cummings2016} and AMS-02~\\citep{AMS2014,AMS2015}. The uncertainty ranges above 100 MeV are quite narrow since the energy loss time and the diffusive escape time are sufficiently large such that the distribution of GCRs inside the Galactic disk becomes more or less uniform. 
We note that this will not remain true for GCR electrons of energy above 10 GeV since the energy loss rate for these particles becomes increasingly large in this energy range, which results in significant stochastic fluctuations \\citep{atoyan1995,ptuskin2006,mertsch2011,mertsch2018,recchia2019b,manconi2020,evoli2021}.\n\nThe uncertainty ranges broaden for $E\\lesssim100$ MeV down to a characteristic energy $E^*$ below which the ratio between the upper and lower limits of the intensities becomes constant. \nSuch a feature emerges from the fact that the Green's function behaves as \\mbox{$\\mathcal{G}(r=0,z=z_\\odot,E;r_i,z_i,\\tau_i)\\sim 1\/|\\dot{E}|$} if the propagation time $\\tau_i$ is much larger than the energy loss time ($\\tau_i\\gg \\tau_l(E)= E\/|\\dot{E}|$), which is easily fulfilled for particles of energy below a few tens of MeV. Since $\\tau^{(n)}_i\\gtrsim \\tau_l(E\\lesssim 10 \\textrm{ MeV})$ for all $i=1,\\ldots,N_{s}$ in each realization $n$, we expect from Eq.~\\eqref{eq:stochasticity} that $j^{(n)}(E)\\sim v\/|\\dot{E}|$ for all realizations at sufficiently low energies and, thus, the limits of the uncertainty ranges should become parallel below a characteristic energy. The intensities of GCR protons for several realizations which are within the 68\\% uncertainty range are depicted in Fig.~\\ref{fg:sample} to better illustrate the spectral behaviour at low energies. \n\n\\begin{figure}[ht]\n\\includegraphics[width=3.5in, height=2.8in]{fg_jE_p_show_sample_SNR_rh_1000_vA_16_h_151_nH_90_zad_10_Dgam_alpha_423_ep_87_zu_39_in.png}\n\\caption{Intensities of GCR protons for several realizations (dashed grey curves) around the 68\\% uncertainty range (shaded region). Data points are as in Fig. 
\\ref{fg:stochastic} and the solid black curve is the median of the intensities.}\n\\label{fg:sample}\n\\end{figure}\n\nNote that a uniform distribution of GCRs will be attained if the number of sources within the diffusion loss length $l_d(E)=\\sqrt{4D(E)\\tau_l(E)}$ in the disk is much larger than one,\n\\begin{eqnarray}\n\\mathcal{R}_s\\tau_l(E) \\frac{2 l_d^3(E)}{3 R_d^2 h_s}\\gg 1 \\, .\n\\end{eqnarray}\nThe characteristic energy $E^*$ could be estimated by setting the LHS of the above inequality to one, which gives $E^*\\simeq 10$ MeV for both species. \n\nInterestingly, apart from the deviation in the energy range below a few GeV due to solar modulation, the median, corresponding to $j_{50\\%}$, the 50th percentile of the PDF of the intensities, seems to provide a good fit to the data of Voyager~1 and AMS-02 for both GCR protons and electrons (see Fig.~\\ref{fg:stochastic}). We note that neither the expectation value nor the median strictly corresponds to the intensity of any particular realization of sources. At low energies, however, the expectation value is dominated by a few rather unlikely realisations with extreme intensities, $j^{(n)}(E)> j_{84\\%}(E)$, which lie outside of the 68\\% uncertainty range. Furthermore, the resulting $\\langle j(E)\\rangle$, which is also the intensity predicted for the smooth source density as stressed above, has a different energy dependence from the \\textit{universal} scaling $j^{(n)}(E)\\sim v\/|\\dot{E}|$ expected at low energies. The median, on the other hand, behaves as $j_{50\\%}(E) \\sim v\/|\\dot{E}|$ and, in fact, the intensities in many realizations seem to closely resemble the spectral behaviour of the median both at low and high energies (see Fig. \\ref{fg:sample}). It is for this reason that the median is to be preferred over the expectation value for the comparison with observational data. 
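The preference for the median over the expectation value can be illustrated with a toy intensity PDF that reproduces the tail quoted above, $p(j)\\propto j^{-2}$ at large $j$ (a Pareto distribution with tail index 1, in arbitrary units): its expectation value diverges, while the median and the percentile-based uncertainty bands are perfectly well behaved:

```python
import numpy as np

rng = np.random.default_rng(42)

# Classical Pareto with minimum 1 and tail index 1, i.e. p(j) = j**-2 for
# j >= 1; numpy's pareto() draws Lomax samples, hence the shift by one.
j = 1.0 + rng.pareto(1.0, size=200_000)

# Median and percentile band, defined exactly as in the text.
j16, j50, j84 = np.percentile(j, [16, 50, 84])
print(j50, (j16, j84), np.mean(j))   # the sample mean is unstable
```

Analytically $j_{50\\%}=2$ and $[j_{16\\%},j_{84\\%}]=[1\/0.84,1\/0.16]\\approx[1.19,6.25]$ here, whereas the sample mean keeps growing (roughly logarithmically) with the sample size.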
\n\nWe note also that the observed proton spectrum seems to have a broader peak than the median of the stochastic model and the observed electron spectrum seems to exceed the median. It is clear, however, that the local ISM should be quite inhomogeneous, and that the observed spectra in an inhomogeneous ISM could be modelled as the weighted average of spectra for different gas densities to provide better agreement with data. We relegate the details of this to future work.\n\nIt is worth mentioning also that the model with the smooth source density could fit both the Voyager~1 and AMS-02 data under the assumption that the vertical extension of sources is $2h_{s}\\simeq 600$ pc \\citep{schlickeiser2014}, as expected for type \\rom{1}a SNe, but these events have a relatively low rate \\citep{prantzos2011}. The stochastic model, however, predicts the observational data to be within the most probable range of the intensities for both GCR protons and electrons with $2h_s\\simeq 80$ pc, comparable to the vertical extension of CCSNe, which have a higher rate \\cite{prantzos2011}. More importantly, there is no need to introduce \\emph{ad hoc} breaks in either the injection spectra or the diffusion coefficient. The stochastic model, therefore, seems to be a more appropriate framework for low-energy GCRs.\n\n\\section{Summary and outlook}\n\nIn this \\textit{letter} we have presented results of a modelling of proton and electron spectra between 1 MeV and 10 GeV. Before the advent of the Voyager~1 measurements outside the heliopause, this energy range had received relatively little attention because solar modulation makes the inference of interstellar spectra difficult. All models to date assume a smooth source distribution; however, they do not reproduce the Voyager~1 data unless a spectral break is introduced in the source spectrum. From a microphysical point of view, such a break seems rather unmotivated. 
\n\nThe smooth approximation is, in fact, not justified since at low energies the energy loss distance becomes shorter than the average source separation. Unlike previous models we therefore considered the discrete nature of sources, modelling the distribution of intensities in a statistical ensemble. We note that the intensity prediction from a smooth density is the ensemble average of this distribution. However, we showed that the ensemble average is not representative of the distribution due to its long power-law tails. For instance, the spectral shapes of the predicted intensities in different realisations are the same below a critical energy. While the expectation value has a very different spectrum at the lowest energies, the median of the distribution does exhibit the same spectral shape. Furthermore, the expectation value is outside the $68\\%$ uncertainty range of the distribution at the lowest energies while the median is by definition always inside. We have shown that the Voyager~1 data fall squarely around the median of the distribution without the need for any unphysical breaks in the source spectra (see \\cite{Note1} for all model parameters).\n\nThe statistical model we have presented here might have interesting implications for other anomalies observed in low-energy GCRs. For instance, it has been shown recently~\\cite{phan2018} that the ionisation rate implied by the Voyager~1 data is much smaller than the ionisation rate directly inferred for a large number of molecular clouds. It would be interesting to see whether the inhomogeneities implied by our statistical model of discrete sources can alleviate this tension. In such a scenario, the Voyager~1 data would need to lie towards the lower edge of the uncertainty band while the molecular cloud measurements would be in regions of systematically higher GCR densities, possibly due to their spatial correlation with source regions. 
Thanks to our careful statistical model, we will be able to statistically quantify such a scenario in the future.\\\\\n\nThis project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 665850. VHMP is grateful to Marco Kuhlen, Nhan Chau, Ngoc Khanh Vu, and Quang Nam Dam for fruitful discussions and technical support. \n\n\\bibliographystyle{apsrev4-2}\n\n\\section*{The cosmic-ray transport equation}\nWe have adopted the cosmic-ray (CR) transport equation in terms of kinetic energy $E$ for the study of stochasticity. However, it might be more customary to formulate the CR transport equation in terms of momentum $p$. For definiteness, we now lay out the procedure for the transformation. The equation in terms of momentum is \\cite{schlickeiser2002}: \n\\begin{equation}\n\\frac{\\partial f}{\\partial t}+\\frac{\\partial}{\\partial z}\\left(u f\\right) -D\\nabla^2 f + \\frac{1}{p^2} \\frac{\\partial }{\\partial p}\\left(\\dot{p}p^2f\\right)=\\Tilde{q}(r, z, p, t) \\, , \\label{eq:transport-p}\n\\end{equation}\nwhere $f(r,z,p,t)$ is the phase space density of CRs, that is, the number of particles per unit volume in configuration and momentum space, $u=u(z)$ is the advection velocity with only the component perpendicular to the Galactic disk, $D=D(p)$ is the isotropic and homogeneous diffusion coefficient, and $\\dot{p}$ is the momentum loss rate. The phase space density $f(r,z,p,t)$ is related to the cosmic-ray density $\\psi(r,z,E,t)$, the number of particles per unit volume and energy, as $\\psi(r,z,E,t)=4\\pi p^2 f(r,z,p,t)\/v$ and, similarly, we have $q(r,z,E,t)=4\\pi p^2 \\tilde{q}(r,z,p,t)\/v$ where $v$ is the particle's speed. It is then clear that we could now transform Eq.~\\eqref{eq:transport-p} into an equation for $\\psi(r,z,E,t)$ by performing the change of variable from $p$ to $E$. 
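\n\nExplicitly, for the energy-loss term this change of variable works out as follows: multiplying by $4\\pi p^2\/v$ and using $\\partial\/\\partial p=(\\mathrm{d}E\/\\mathrm{d}p)\\,\\partial\/\\partial E=v\\,\\partial\/\\partial E$ together with $\\dot{E}=\\dot{p}v$, one finds\n\\begin{equation*}\n\\frac{4\\pi p^2}{v}\\,\\frac{1}{p^2}\\frac{\\partial}{\\partial p}\\left(\\dot{p}p^2f\\right)=4\\pi\\frac{\\partial}{\\partial E}\\left(\\dot{p}p^2f\\right)=\\frac{\\partial}{\\partial E}\\left(\\dot{p}v\\,\\frac{4\\pi p^2 f}{v}\\right)=\\frac{\\partial}{\\partial E}\\left(\\dot{E}\\psi\\right),\n\\end{equation*}\nand the remaining terms transform in the same manner.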
We note also that $\\dot{E}=\\dot{p}v$ and, in fact, the standard literature mostly quotes the formulae for the energy loss rate (even when Eq.~\\eqref{eq:transport-p} is adopted for the study of CRs \\cite{strong1998,schlickeiser2002}).\n\n\\section*{Energy loss rate}\n\nCosmic-ray protons lose energy mostly due to ionization and proton-proton interactions with the gas in the Galactic disk. The combined energy loss rate for these two processes could be written as \\cite{schlickeiser2002,krakau2015}: \n\\begin{eqnarray}\n&&\\dot{E}\\simeq \\textrm{H}(h-|z|) 1.82\\times 10^{-7}\\left(\\frac{n_{\\textrm{H}}}{1\\textrm{ cm}^{-3}}\\right)\\nonumber\\\\\n&&\\qquad\\times\\left[(1+0.185\\ln\\beta)\\frac{2\\beta^2}{10^{-6}+2\\beta^3}+2.115\\left(\\frac{E}{1\\textrm{ GeV}}\\right)^{1.28}\\left(\\frac{E}{1\\textrm{ GeV}}+200\\right)^{-0.2}\\right]\\textrm{ eV\/s},\n\\end{eqnarray}\nwhere $\\textrm{H}(h-|z|)$ is the Heaviside step function, which indicates that these energy loss mechanisms are only effective within the height $h$ of the disk, $n_{\\mathrm{H}}$ is the density of hydrogen atoms in the disk, $\\beta$ is the ratio between the particle's speed and the speed of light, and $E$ is the kinetic energy of the particle. \n\nBelow a few GeV, the main mechanisms for energy loss of CR electrons are ionization interactions and bremsstrahlung radiation in the Galactic disk. At higher energies, these particles lose energy more effectively not only in the disk but also in the CR halo due to synchrotron radiation and inverse Compton scattering. 
The energy loss rate could then be parametrized as \\cite{schlickeiser2002,mertsch2011,evoli2017}: \n\\begin{eqnarray}\n&&\\dot{E}\\simeq 10^{-7}\\left(\\frac{E}{1\\textrm{ GeV}}\\right)^2+\\textrm{H}(h-|z|) 1.02\\times 10^{-8}\\left(\\frac{n_\\mathrm{H}}{1\\textrm{ cm}^{-3}}\\right)\\nonumber\\\\\n&&\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\times\\left\\{18.495+2.7\\ln\\gamma+0.051\\gamma\\left[1+0.137\\left(\\ln\\gamma+0.36\\right)\\right]\\right\\}\\textrm{ eV\/s},\n\\end{eqnarray}\nwhere $\\gamma$ is the Lorentz factor.\n\n\\section*{Parameters for the stochastic model}\nIn Tab. \\ref{tab:parameters}, we briefly summarise all the parameters adopted in order for the stochastic uncertainty bands to encompass the data from both Voyager 1 and AMS-02. Most of the parameters, including the diffusion coefficient and the injection spectra, are constrained by the fits at high energies \\cite{evoli2019}. \n\nThe two parameters to which only the low-energy spectra are sensitive are the number density of hydrogen atoms and the advection speed perpendicular to the disk. In fact, the value of the advection speed has also been given by the fits at high energies, but it might vary slightly around 10 km\/s depending on the model and the species of CRs considered \\cite{evoli2019,mertsch2020}. We note also that $n_{\\mathrm{H}}$ is not completely free as the surface density of the disk for our Galactic neighborhood is externally constrained to be around 2 mg\/cm$^{2}$, which is quite consistent with the value adopted for our fits \\cite{ferriere2001}. 
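For reference, a direct transcription of the two loss-rate parametrisations above (kinetic energies in GeV, rates in eV\/s); note that we implement the Heaviside factor as an indicator of $|z|<h$, in line with its stated meaning of restricting the disk losses to the gas layer:

```python
import numpy as np

MP, ME = 0.938272, 0.000510999   # proton/electron rest energies [GeV]
H_DISK = 0.150                   # half-height h of the gas disk [kpc]

def in_disk(z):
    # Indicator of |z| < h: the ionisation, bremsstrahlung and p-p losses
    # act only inside the gas disk.
    return (np.abs(z) < H_DISK).astype(float)

def edot_proton(E, z, n_h=0.9):
    # Ionisation + proton-proton loss rate [eV/s]; E = kinetic energy [GeV]
    gamma = 1.0 + E / MP
    beta = np.sqrt(1.0 - gamma ** -2.0)
    ion = (1.0 + 0.185 * np.log(beta)) * 2.0 * beta**2 / (1e-6 + 2.0 * beta**3)
    pp = 2.115 * E**1.28 * (E + 200.0) ** -0.2
    return in_disk(z) * 1.82e-7 * n_h * (ion + pp)

def edot_electron(E, z, n_h=0.9):
    # Synchrotron/inverse-Compton term (everywhere) plus ionisation and
    # bremsstrahlung (disk only) [eV/s]
    gamma = 1.0 + E / ME
    disk = 18.495 + 2.7 * np.log(gamma) + 0.051 * gamma * (
        1.0 + 0.137 * (np.log(gamma) + 0.36))
    return 1e-7 * E**2 + in_disk(z) * 1.02e-8 * n_h * disk

print(edot_proton(np.array([1e-3, 1.0]), 0.0))
print(edot_electron(np.array([1e-3, 1.0]), 0.0))
```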
\n\n\\begin{table}[h!]\n\\centering\n\\caption{Externally constrained and fitted parameters for the stochastic model for both CR protons and electrons in the case of the diffusion coefficient scaling with the Lorentz factor, as presented in the main text.}\n\n\n\t\\label{tab:parameters}\n\t\\begin{tabular}{|c|c|c|r|}\n\t\t\\hline\n\t\t\\hline\n\t\t\\multirow{2}{*}{\\shortstack{Fitted parameters\\\\ for low-energy CRs}} & $n_{\\mathrm{H}}$ & Gas density in the disk & 0.9 cm$^{-3}$\\\\\n\t\t\\cline{2-4}\n\t\t& $u_0$ & Advection speed & 16 km\/s\\\\\n\t\t\\hline\n\t\t\\multirow{10}{*}{\\shortstack{Constrained parameters\\\\ from high-energy CRs}} & $R_d$ & $\\qquad$ Radius of the Galactic disk $\\qquad$ & 15 kpc \\\\\n\t\t\\cline{2-4}\n\t\t& $H$ & Height of the CR halo & 4 kpc\\\\\n\t\t\\cline{2-4}\n\t\t& $2h$ & Height of the gas disk for energy loss & 300 pc\\\\\n\t\t\\cline{2-4}\n\t\t& $2h_s$ & Height of the disk of sources & 80 pc\\\\\n\t\t\\cline{2-4}\n\t\t& $D(E=10\\textrm{ GeV})$ & Diffusion coefficient at 10 GeV & $5\\times 10^{28}$ cm$^2$\/s\\\\\n\t\t\\cline{2-4}\n\t\t& $\\delta$ & Index of the diffusion coefficient & 0.63\\\\\n\t\t\\cline{2-4}\n\t\t& $\\mathcal{R}_s$ & Source rate & 0.03 yr$^{-1}$\\\\\n\t\t\\cline{2-4}\n\t\t& $\\xi_{CR}^{(p)}$ & Proton acceleration efficiency & 8.7\\%\\\\\n\t\t\\cline{2-4}\n\t\t& $\\xi_{CR}^{(e)}$ & Electron acceleration efficiency & 0.55\\%\\\\\n\t\t\\cline{2-4}\n\t\t& $\\alpha$ & Index of the injection spectra & 4.23\\\\ \n\t\t\\hline\n\t\t\\hline\n\t\\end{tabular}\n\\end{table}\n\nWe note that the parameters in Tab. \\ref{tab:parameters} have been obtained for the diffusion coefficient of the form presented in the main text, $D(E)\\sim\\beta \\gamma^{\\delta}$, which is expected when the magneto-static approximation is relaxed, meaning that the Alfv\\'en speed is no longer negligible in comparison to the particle's speed in the resonance condition of wave-particle interaction (see e.g. 
\\cite{schlickeiser1999,schlickeiser2010} for more technical details). In a broader sense, it is probably fair to admit that there remain significant uncertainties since there are currently no direct observations of the mean free path in the interstellar medium. In order to bracket this uncertainty, we have also repeated our computation with a diffusion coefficient that has a power-law dependence on rigidity, $D(E)\\sim \\beta R^\\delta$ where $R$ is the particle's rigidity. We have found that this would not qualitatively change our results since the break in $D(E)$ below roughly 1 GeV does not significantly affect the spectrum of CR protons at low energies, as the transport in this regime is dominated by energy loss. For CR electrons, the rigidity- and Lorentz-factor-dependent diffusion coefficients are roughly the same down to 1 MeV. We also present in Fig.~\\ref{fg:Drig} the fits for the case of a rigidity-dependent diffusion coefficient with slightly different values for the advection speed $u_0$ and the number density of hydrogen atoms $n_\\mathrm{H}$ (see Tab.~\\ref{tab:parameters2} for the complete list of parameter values in this case).\n\n\\begin{figure}[h]\n\\centerline{\n\\includegraphics[width=3.2in, height=2.7in]{fg_jE_p_show_SNR_rh=1000_vA=18_h=151_nH=70_zad=10_Drig_alpha=423_ep=87_zu=39_in.png}\n\\includegraphics[width=3.2in, height=2.7in]{fg_jE_e_show_SNR_rh=1000_vA=18_h=151_nH=70_zad=10_Drig_alpha=423_ep=5_zu=39_in.png}}\n\\caption{Stochastic fluctuations of GCR protons (left panel) and electrons (right panel) in comparison with data from Voyager~1~\\cite{cummings2016} (blue) and AMS-02~\\cite{AMS2014,AMS2015} (green) for the case of a rigidity-dependent diffusion coefficient. The dotted and solid black curves are respectively the expectation values and the median of the intensities. 
The shaded regions are the 95\\% and 68\\% uncertainty ranges.}\n\\label{fg:Drig}\n\\end{figure}\n\n\\begin{table}[h!]\n\\centering\n\\caption{Externally constrained and fitted parameters for the stochastic model for both CR protons and electrons in the case of the diffusion coefficient scaling with rigidity.}\n\n\n\t\\label{tab:parameters2}\n\t\\begin{tabular}{|c|c|r|}\n\t\t\\hline\n\t\t\\hline\n\t\t\\multirow{2}{*}{\\shortstack{Fitted parameters\\\\ for low-energy CRs}} & $n_{\\mathrm{H}}$ & 0.7 cm$^{-3}$\\\\\n\t\t\\cline{2-3}\n\t\t& $u_0$ & 18 km\/s\\\\\n\t\t\\hline\n\t\t\\multirow{10}{*}{\\shortstack{Constrained parameters\\\\ from high-energy CRs}} & $R_d$ & $\\qquad$ 15 kpc \\\\\n\t\t\\cline{2-3}\n\t\t& $H$ & 4 kpc\\\\\n\t\t\\cline{2-3}\n\t\t& $2h$ & 300 pc\\\\\n\t\t\\cline{2-3}\n\t\t& $2h_s$ & 80 pc\\\\\n\t\t\\cline{2-3}\n\t\t& $D(E=10\\textrm{ GeV})$ & $5\\times 10^{28}$ cm$^2$\/s\\\\\n\t\t\\cline{2-3}\n\t\t& $\\delta$ & 0.63\\\\\n\t\t\\cline{2-3}\n\t\t& $\\mathcal{R}_s$ & 0.03 yr$^{-1}$\\\\\n\t\t\\cline{2-3}\n\t\t& $\\xi_{CR}^{(p)}$ & 8.7\\%\\\\\n\t\t\\cline{2-3}\n\t\t& $\\xi_{CR}^{(e)}$ & 0.55\\%\\\\\n\t\t\\cline{2-3}\n\t\t& $\\alpha$ & 4.23\\\\ \n\t\t\\hline\n\t\t\\hline\n\t\\end{tabular}\n\\end{table}\n\n\\bibliographystyle{apsrev4-2}\n\n\\section{Introduction}\n\n\n\nThe folding of paper or other thin elastic sheets is one of the many examples of energy focusing in the physical world.\nStarting in the late 90's, there has been a lot of interest in this problem in the 
physics\ncommunity\n\\cite{PhysRevE.71.016612,PhysRevLett.80.2358,Cerda08032005,RevModPhys.79.643,CCMM,PhysRevLett.87.206105,PhysRevLett.78.1303,Lobkovsky01121995,PhysRevLett.90.074302}. In\nparticular, the crumpling of paper (i.e.~the crushing of a thin elastic sheet\ninto a container whose diameter is smaller than the size of the sheet), which\nresults in complex folding patterns, has drawn a lot of attention. It has been\nconjectured that the energy density per thickness $h$ of such a folding pattern\nscales with $h^{5\/3}$. One major contribution in the rigorous analysis of this\nproblem is \\cite{MR2358334}, building on ideas from \\cite{MR2023444}.\\\\\nHere we focus on approximately conical deformations of thin elastic\nsheets, which can be viewed as (one kind of) building blocks of crumpled\ndeformations. One example for this is a sheet that is pushed into a hollow cylinder, such that the indentation of the sheet is small.\nThe resulting structure is called a \\emph{d-cone} (developable cone). In the\nphysics literature, it has been discussed e.g.~in\n\\cite{PhysRevLett.80.2358,Cerda08032005,PhysRevE.71.016612,RevModPhys.79.643}. There\nare several remarkable features of the d-cone, one of which is that the tip of the d-cone consists of a crescent-shaped ridge where\ncurvature and elastic stress focus. In numerical simulations it was found that\nthe radius of the crescent $R_{\\rm cres.}$ scales with the thickness of the\nsheet $h$ and the radius of the container $R_{\\rm cont.}$ as $R_{\\rm\n cres.}\\sim h^{1\/3}R_{\\rm cont.}^{2\/3}$. This dependence of the shape of the region near the tip on the container\nradius is not fully understood\n\\cite{RevModPhys.79.643}. 
As argued in this latter reference, it cannot be\nexplained by an analysis of the dominant contributions to the elastic energy,\nwhich are: the bending energy from the region far away from the center, which\nis well captured by modeling the d-cone as a developable surface there; and\nthe bending and stretching energy from a core region of size $O(h)$\nwhere elastic strain is not negligible. The result of this (non-rigorous)\nargument is an energy scaling $E\\sim h^2 (C_1|\\log h|+C_2)$. This is a natural\nguess -- the situation here bears some resemblance to vortices in the\nGinzburg-Landau model, where this is the right\nenergy scaling \\cite{MR1269538}. \\\\ \nIn \\cite{2012arXiv1208.4298M,BK1} the scaling of the elastic energy of a d-cone with respect to its thickness $h$ has been analyzed in a rigorous setting. The result from \\cite{2012arXiv1208.4298M} is\n\\begin{equation}\nh^2 (C_1|\\log h|-C_2\\log|\\log h|)\\leq {\\mathcal E}_h \\leq h^2 (C_1|\\log h|+C_3)\\,.\n\\label{eq:lowb}\n\\end{equation}\nThe lower bound does not achieve the conjectured scaling behaviour, and it seems that this claim cannot be proved\nwith the methods used in \\cite{2012arXiv1208.4298M}. \\\\\n\\\\\nHere we consider another situation which involves the regularization of\nan isometric cone through the higher-order bending energy. The main difference\nwith the (general) d-cone is that here the underlying cone is a surface of revolution. Hence it is meaningful to study the problem of the competition\nbetween bending and stretching energies in a radially symmetric setting. This\nmakes it possible to use ODE methods in addition to energy methods.\nWe will show that a scaling result analogous to the\none above without the $\\log\\log h$ terms on the left-hand side holds in this\nsimpler setting. 
\\\\\n\\\\\nThe setting is the following: to create an approximately conical deformation of an\nelastic sheet, we cut out a sector of angle $\\beta$ and glue the edges of this\nsector back together. This situation has been investigated numerically in \\cite{2006PhRvE..73d6604L},\nwhere it is called ``regular cone''. \nIn this situation, radially symmetric deformations are admissible -- in\ncontrast to the\ncase of the d-cone, where the boundary conditions are not radially symmetric.\nTo the best of our knowledge, it is not known whether the global minimizers of\nthe ``regular cone'' are\nradially symmetric. We nonetheless believe that a careful study of minimizers\nwithin the class of radially symmetric deformations will help to understand the\nstructure of local and global minimizers as well as the local and global\nstability of possible radially symmetric minimizers.\nSince we are interested in the asymptotic behaviour, we consider a sheet of\ninfinite radius with free boundaries (after suitable renormalization of the\nenergy, see below).\\\\\n\\\\\nApart from the restriction to radially symmetric configurations, we make one\nmore simplification in comparison to the situation in \\cite{2012arXiv1208.4298M}:\nWe use the so-called von-K\\'arm\\'an approximation of non-linear elasticity\n\\cite{ciarlet1997theory,MR2210909}. This means that the out-of-plane component of the\ndeformation is supposed to be of the order $\\varepsilon\\ll 1$, and the size of the removed\nsector as well as the in-plane deformation are of order $\\varepsilon^2$. 
All terms in the elastic energy of order $\varepsilon^k$, $k>4$, and of order $h^2\varepsilon^k$, $k>2$, are discarded.\\\n\\\nAs we will explain in Section \ref{model}, these considerations lead to the\ndefinition of the free elastic energy density\n\begin{equation*}\n\rho^{\rm el.}_\he= (\hat w^2-1+\hat\n u')^2+\left(\frac{\hat u}{r}\right)^2+\lambda^2\left(\hat\n w'^2+\frac{\hat w^2}{r^2}\right)\n\end{equation*}\nwhere $\lambda=h\/\varepsilon$ and the deformation of the sheet is given as a map from polar to\ncylindrical coordinates by\n\begin{equation}\n(r,\varphi)\mapsto \left(r+\frac{\varepsilon^2}{2}(\hat u-r),\sqrt{1+\varepsilon^2}\varphi,\varepsilon W\right)\label{eq:12}\n\end{equation}\nwith $W'=\hat w$. The renormalized energy functional is\n\begin{equation}\n\begin{array}{rrl}\hat E_\lambda:& {\mathcal W}& \to {\mathbb R}\cup\{+\infty\}\\\n&(\hat u,\hat w)&\mapsto\n\lim_{R\to\infty}\int_0^R r \d r\left( \rho^{\rm el.}_\he(r)-\lambda^2\frac{\psi(r\/\lambda)^2}{r^2}\right)\end{array}\n\label{introen}\n\end{equation}\nwhere \n\begin{align*}\n {\mathcal W}=\Big\{&(\hat u,\hat w)\in W^{1,2}_{\rm loc}((0,\infty),{\mathbb R}^2):\,\int_0^1 r \d r \rho^{\rm el.}_\he(r)<\infty\Big\}\,,\n\end{align*}\nand $\psi$ is some cutoff\nfunction with $\psi(r)=0$ for $r$ close to $0$ and $\psi(r)=1$ for $r\geq\n1$. \nWe will show in Lemma \ref{pro:boundary} that the condition $\int_0^1 r \d r\n\rho^{\rm el.}_\he(r)<\infty$ implies $\hat u(0)=\hat w(0)=0$ and thus the deformation\n\eqref{eq:12} is continuous at the origin for all $(\hat u,\hat w)\in {\mathcal W}$.\\\n\nThe aim of the present contribution is to prove\n\begin{theorem}\n\label{mainthm}\nThe functional $\hat E_\lambda$ from eq.~\eqref{introen} is well defined and bounded\nfrom below. 
It possesses minimizers $(\hat u,\hat w)$ in ${\mathcal W}$ with $\hat w\geq\n0$ and $\hat E_\lambda(\hat u,\hat w)<\infty$.\nFurthermore, each minimizer $(\hat u,\hat w)$ with $\hat w\geq 0$\n satisfies\n\begin{align*}\n\hat u(r)=&\frac{\lambda}{2r}+o(\exp(-\sigma\sqrt{r\/\lambda}))\\\n\hat w(r)=&1+o(\exp(-\sigma\sqrt{r\/\lambda}))\,\n\end{align*}\nas $r\to \infty$ for any $\sigma<2$.\n\end{theorem}\n\n\nAs a by-product of the proof of Theorem \ref{mainthm}, we will get a lower\nbound for the elastic energy when the radius of the elastic sheet in the\nreference configuration is assumed to be finite. This lower bound is better than\nthe analogous one from eq.~\eqref{eq:lowb} in that the $\log\log$-terms are not\npresent. \nTo give an idea of how this ``improved'' lower bound comes about, let \n\begin{equation}\nI_\lambda=\int_0^1 r \d r\rho^{\rm el.}_\he\,.\label{eq:11}\n\end{equation}\nThe first step to establish the lower\nbound in the present setting is the right renormalization of the elastic energy density.\nWe expect a logarithmic divergence in $\lambda$ of $\lambda^{-2}I_\lambda$ as $\lambda\to 0$. Thus we\nmake the replacement\n\[\n\rho^{\rm el.}_\he(r)\to\rho^{\rm el.}_\he(r)-\lambda^{2}\frac{\psi(r\/\lambda)^2}{r^2}\,.\n\]\nThe key step is now to find a change of variables that makes it obvious that\n\begin{equation}\n\label{eq:2}\n\int_0^1 r\d r\left(\rho^{\rm el.}_\he(r)-\lambda^{2}\frac{\psi(r\/\lambda)^2}{r^2}\right)\n\end{equation}\nis bounded from below by $-C\lambda^2$ for some constant $C$.\nAs we will see in Section \ref{existence}, such a change of variables does\nexist,\nand will leave us only with\nmanifestly positive terms in the renormalized energy eq.~\eqref{eq:2} plus some\ndivergence-like term that will be estimated in a suitable manner in Lemma\n\ref{cor:welldef}. 
Thus we get the sought-for lower bound\n\begin{equation}\n\lambda^{-2}I_{\lambda}\geq |\log \lambda|-C \,.\n\label{eq:lowb2}\n\end{equation}\nThis paper is organised as follows: In Section \ref{model}, we motivate and\ndefine our model. In Section \ref{existence} we establish a lower bound for the\nrenormalized energy and prove the existence of\nminimizers of the elastic free energy functional. In a\nremark at the end of that section,\nwe will discuss a pathology of the model presented here. In Section\n\ref{decay1}, we use stable manifold theory to show that minimizers converge to the conical configuration\nat infinity.\\\n\\\n{\bf Notation.} In this paper, the letter $C$ stands for numerical constants\nthat are independent of all the other variables. Its value may change within the\nsame equation. In Section \ref{model}, we will choose a cutoff function\n$\psi\in C^\infty([0,\infty))$, which we have already mentioned above. The\ncutoff function $\psi$ will then be fixed for the rest of the\npaper. We will not indicate the dependence of constants on this choice of\n$\psi$. Whenever we speak of functions $f\in W^{1,2}_{\rm loc}(I)$ for some $I\subset{\mathbb R}$, it will be tacitly\nunderstood that we mean the continuous representative.\n\n\n\section{The model}\n\label{model}\n{\bf Cutting out a sector, gluing the edges back together.} \nFor small $\varepsilon>0$ let $\beta^{(\varepsilon)}$ be defined by $2\pi\/(2\pi-\beta^{(\varepsilon)})=\sqrt{1+\varepsilon^2}$,\nand let\n\begin{equation*}\nB^{(\varepsilon)} ={\mathbb R}^2\setminus\left\{(x_1,x_2):x_2<0\right\}\,.\n\end{equation*}\nLet $\alpha > 0$ and\n $f = 2 e^{-\alpha t} \sin e^{t\/2}$, $g= - e^{-\alpha t } e^{-t\/2} \cos e^{t\/2}$. Then $f$, $e^t g+f'$ and $g'$ are in $L^2((0, \infty))$\n but $e^{\beta t} g \notin L^2((0, \infty))$ if $\beta \ge \frac12 + \alpha$. 
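The stated properties of this pair $(f,g)$ follow by direct computation; we record the derivatives, since the cancellation of the large oscillatory terms in $e^t g+f'$ is the point of the example:

```latex
% direct verification: the large oscillatory terms cancel in e^t g + f'
\begin{aligned}
f'(t) &= -2\alpha e^{-\alpha t}\sin e^{t/2} + e^{(\frac12-\alpha)t}\cos e^{t/2},
&\quad e^t g(t) &= -e^{(\frac12-\alpha)t}\cos e^{t/2},\\
e^t g+f' &= -2\alpha e^{-\alpha t}\sin e^{t/2}\in L^2,
&\quad g'(t) &= \bigl(\alpha+\tfrac12\bigr)e^{-(\alpha+\frac12)t}\cos e^{t/2}
+\tfrac12\,e^{-\alpha t}\sin e^{t/2}\in L^2.
\end{aligned}
```

On the other hand $|e^{\beta t}g(t)|=e^{(\beta-\alpha-\frac12)t}|\cos e^{t/2}|$, which fails to be square integrable on $(0,\infty)$ for $\beta\ge \alpha+\frac12$, since $\cos^2 e^{t/2}$ has mean value $\tfrac12$ on large intervals.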
\n \n \n \\begin{proof} Let \n \\begin{equation}\nG(t) := - \\int_t^T \\d s \\, e^{-s\/2} (e^{-s\/2} f'(s) + e^{s\/2} g) + \\int_t^T \\d s \\, e^{-s} f(s) - e^{-t} f(t). \n\\end{equation}\nNote that both integrals exist (even for $T= \\infty$) since $ e^{-s\/2} f'(s) + e^{s\/2} g \\in L^2$, $f \\in L^2$ and $e^{-s\/2} \\in L^2$. \nMoreover $G$ is absolutely continuous and for a.e.\\ $t$ we have\n\\begin{equation}\nG'(t) = e^{-t} f'(t) + g(t) - e^{-t} f(t) - (e^{-t} f(t))' =g(t).\n\\end{equation}\nThus $G \\in W^{2,2}_{loc}(I)$ and $G'' = g'$.\nMoreover by the Cauchy-Schwarz inequality\n\\begin{equation}\n|G(t)| \\le e^{-t\/2} \\|e^{-s\/2} f' + e^{s\/2} g\\|_{L^2} + e^{-t} \\|f\\|_{L^2} + e^{-t} f(t)\n\\end{equation}\nand this implies that\n\\begin{equation} \\label{eq:GL2}\n\\|G \\|_{L^2} \\le \\|e^{-s\/2} f'(s) + e^{s\/2}\\|_{L^2} + 2 \\|f\\|_{L^2}.\n\\end{equation}\n\nFor $a \\in (0, T-1)$ we use the interpolation inequality\n\\begin{equation}\n\\|G' \\|_{L^2((a, a+1)}^2 \\le C \\left( \\|G\\|_{L^2((a, a+1))}^2 + \\|G''\\|_{L^2((a,a+1))}^2 \\right)\n\\end{equation}\nFor a proof see, e.g., ~\\cite{gilbarg2001elliptic} or derive a contradiction from the \nassumptions $\\|G'_j\\|_{L^2((0,1))} = 1$ and $\\|G_j\\|_{L^2((0,1))}^2 + \\|G''_j\\|_{L^2((0,1))}^2 \\to 0$. \nPassage to the limit $a \\downarrow 0$ and $a \\uparrow T-1$ (if $T < \\infty$) shows that the inequality\nalso holds for $a= 0$ and $a=T-1$ (if $T < \\infty$). If $T = \\infty$ we sum the inequalities for $a \\in {\\mathbb N}$. \nIf $T < \\infty$ we denote by $[T]$ the integer part of $T$ and sum the inequalities for $a = 0, \\ldots, [T] - 1$\nand $a= T-1$. 
Since at most two of the intervals $(a, a+1)$ overlap we get\n\begin{equation}\n\|G' \|_{L^2}^2 \le 2 C \left( \|G\|_{L^2}^2 + \|G''\|_{L^2}^2 \right)\,. \n\end{equation}\nSince $G' =g$ and $G'' = g'$ the estimate for $\|g \|_{L^2}$ follows from \eqref{eq:GL2}.\nThe estimate for $e^{-t} f'$ follows from the triangle inequality since\n$e^{-t} f' = e^{-t\/2} (e^{-t\/2} f' + e^{t\/2} g) - g$. \n \end{proof}\n \n\nWe would like to apply the interpolation result with $f = \tilde u$ and $g = 2 \tilde w + \tilde w^2$. \nWe have $g' = 2 (1 + \tilde w) \tilde w'$ and $E^{+,R}$ controls only the $L^2$ norm of $\tilde w'$ \nand not directly the $L^2$ norm of $g'$. We thus simultaneously prove an $L^\infty$ bound for $\tilde w$\nand an $L^2$ bound for $g$. \n\n\begin{lemma} \label{lem:boundwg}\nThere exists a constant $C$ with the following property. If $R > 1$ and $(u,w) \in W^{1,2}_{\rm loc}([1, R))$ with\n$E(R):= E^+(u,w;(1,R)) < \infty$ then\n\begin{align}\n\sup_{[1, R]} |w| & \le C (1 + E^{1\/2}(R)), \label{eq:winfty} \\\n\int_1^R \frac{\d r}{r} \left[ (2 w + w^2)^2 + u'^2 \right] & \le C (1 + E(R)^2),\n\label{eq:gL2} \\\nR^{-1\/2} |u(R)| & \le C (1 + E(R)). \label{eq:decayu}\n\end{align}\n\end{lemma}\n\n\begin{proof} Let $R = e^T$, $\tilde u(t) = u(e^t)$, $\tilde w(t) = w(e^t)$ and $g = 2 \tilde w + \tilde w^2$. \nTo prove \eqref{eq:winfty} we will assume in addition that $w \in L^\infty((1,R))$. This is no loss \nof generality since by the Sobolev embedding theorem $w \in L^\infty((1,R - \varepsilon))$ for all positive $\varepsilon$. If we have \eqref{eq:winfty} with $R - \varepsilon$ instead of $R$ for all $\varepsilon > 0$ we can then consider the limit\n$\varepsilon \downarrow 0$ to obtain the estimate for $R$. \n\nLet\n\begin{equation}\nM := \sup_{[0,T]} |\tilde w| = \sup_{[1,R]}|w|.\n\end{equation}\nIf $M < 4$ there is nothing to show. We may thus assume $M \ge 4$. 
Then\n\\begin{equation} \\label{eq:boundgprime}\n\\frac12 M^2 \\le \\sup g, \\quad |g'| \\le | 2 (1 + \\tilde w) \\tilde w'| \\le 4 M |\\tilde w'|.\n\\end{equation}\nBy \\eqref{eq:trafonew1} we have\n\\begin{equation}\nE(R) = \\int_0^T \\d t \\left[ \\left( e^t g + \\tilde u' \\right)^2 + \\tilde u^2 +\\tilde w'^2 \\right]. \n \\end{equation}\nThus arguing as in Lemma \\ref{lem:estsupH1a} and using the interpolation estimate with $f = \\tilde u$ we get \n\\begin{align}\n\\frac14 M^4 & \\le \\inf_{[0,T]} g^2 + ( \\sup_{[0,T]} g^2 - \\inf_{[0,T]} g^2) \n\\le \\frac{1}{T} \\int_0^T \\d t \\, g^2 + \\int_0^T \\d t \\, g^2 + \\int_0^T \\d t \\, g'^2 \\\\\n& \\le 2 C \\int_0^T \\d t \\, \\left[(e^{-t\/2} \\tilde u' + e^{t\/2} g)^2 + \\tilde u^2\\right] + (2 C + 1) \\int_0^T \\d t \\, g'^2 \\\\\n& \\le C E(R) + 16 M^2 (2 C+1) E(R) \\le \\frac18 M^4 + C ( 1 + E(R)^2),\n\\end{align}\nwhere we used Young's inequality $a b \\le \\frac18 a^2 + 2 b^2$. \nThis implies \\eqref{eq:winfty}.\n\nNow \\eqref{eq:gL2} follows directly from the interpolation estimate, \\eqref{eq:winfty} and \n \\eqref{eq:boundgprime}. 
Indeed we have\n \begin{align}\n& \int_1^R \frac{\d r}{r} (2 w + w^2)^2 = \int_0^T \d t \, g^2 {\nonumber} \\\n \le & C \int_0^T \d t \, \left[ (e^{-t\/2} \tilde u' + e^{t\/2} g)^2 + \tilde u^2 + g'^2 \right] {\nonumber} \\\n\le & C (1 + E(R)) \, E(R)\,,\n\end{align}\nand the bound for $u'$ follows by the triangle inequality since $\int_1^R r \d\nr (2 w + w^2 + u')^2 \le E(R)$ and $r^{-1}\leq r$ on $[1,\infty)$.\n\nUsing again the interpolation inequality and the $L^\infty$ bound for $w$ we get\n\begin{align}\n& R^{-1} u^2(R) \le \sup_{[0,T]} e^{-t} \tilde u^2 \n= \inf_{[0,T]} e^{-t} \tilde u^2 + ( \sup_{[0,T]} e^{-t} \tilde u^2 - \inf_{[0,T]} e^{-t} \tilde u^2) {\nonumber} \\\n \le & \frac{1}{T} \int_0^T \d t \, e^{-t} \tilde u^2 + \int_0^T \d t \, \left| \left( e^{-t} \tilde u^2 \right)' \right| {\nonumber} \\\n \le & 3 \int_0^T \d t \, e^{-t} \tilde u^2 + \int_0^T \d t \, \tilde u^2 + \int_0^T \d t \, e^{-2t} \tilde u'^2 \n \le C ( 1 + E(R)^2).\n\end{align}\nTaking the square root we get \eqref{eq:decayu}.\n\end{proof}\n\n\begin{lemma}\n\label{cor:welldef}\nThere exist a constant $C$ and $R_0 \ge 1$ such that for all $R \in [R_0, \infty)$ and all $(u,w) \in {\mathcal W}$ we have\n\begin{equation} \label{eq:lowerER}\nE^R(u,w) \ge \frac12 E^{+,R}(u,w) - C.\n\end{equation}\nMoreover for all $(u,w) \in {\mathcal W}$ the limit\n\begin{equation}\nE(u,w) := \lim_{R \to \infty} E^R(u,w)\n\end{equation}\nexists in ${\mathbb R} \cup \{ \infty\}$ and\n\begin{equation}\nE(u,w) < \infty \quad \Longleftrightarrow \quad E^+(u,w) < \infty.\n\end{equation}\nIn addition, if $E(u,w) < \infty$ then\n\begin{equation} \label{eq:identE+}\nE(u,w) = E^+(u,w) + u(1) + \frac14 - \int_0^1 r \d r \frac{\psi^2}{r^2}.\n\end{equation}\n\end{lemma}\n\n\begin{proof}\nThe starting point is the relation \eqref{eq:ERrenorm}\n\begin{equation} \label{eq:ERrenorm2}\nE^R(u,w) = E^{+,R}(u,w) + u(1) - 
\frac{u(R)}{R} + \frac14 (1 - R^{-2}) - \int_0^1 r \d r \, \frac{\psi^2}{r^2}\,.\n\end{equation}\nNote that with the notation of Lemma \ref{lem:boundwg} we\nhave\n\begin{equation}\nE^{+,R}(u,w) = E(R) + X_1, \quad \mbox{where } X_1 := E^+(u,w;(0,1)) \ge 0.\n\end{equation}\nBy \eqref{eq:decayu}\n\begin{equation} \label{eq:estuR}\n\frac{|u(R)|}{R} \le R^{-1\/2} C (1 + E(R)) \le \frac14 E(R) + C\n\end{equation}\nif $R \ge R_0 := 4C$. \n\nLet $I=(0,1)$. By Lemma \ref{lem:estsupH1a}\n\begin{align}\n|\hat u(1)|^2 & \le 2 \| \hat u \|_{L^2(I;\d r\/r)} \, \| \hat u' \|_{L^2(I; r \d r)} \\\n&\le 2 \| \hat u \|_{L^2(I;\d r\/r)} \left( \n \| \hat u' + \hat w^2 - 1\|_{L^2(I; r \d r)} + \n\| \hat w^2 - 1 \|_{L^2(I; r \d r)} \right)\\\n& \le 2 X_1 + 2 X_1^{1\/2} \| \hat w^2 - 1 \|_{L^2(I; r \d r)}.\n\end{align}\nAgain by Lemma \ref{lem:estsupH1a} we have\n$\sup_{[0,1]} \hat w^2 \le X_1$. Thus\n\begin{equation}\n \| \hat w^2 - 1 \|_{L^2(I; r \d r)} \leq \sup |\hat w^2 - 1| \le (1 + X_1)\n\end{equation}\nand therefore $|\hat u(1)|^2 \le 2 X_1^{1\/2} + 2 X_1 + 2 X_1^{3\/2}$.\nUsing Young's inequality $ab \le \frac13 a^3 + \frac23 b^{3\/2}$ first with\n$(a,b)=(X_1^{1\/2},1)$ and then with $(a,b)=(1,X_1)$\nwe get $|\hat u(1)|^2 \le 4 (1 + X_1^{3\/2})$.\n Finally we get\n\begin{equation} \label{eq:estu1}\n|u(1)| = |\hat u(1) -\frac12| \le 3 + 2 X_1^{3\/4} \le 3 + \frac14 8^4 + \frac{1}{4} X_1,\n\end{equation}\nwhere we used $a b \le \frac34 a^{4\/3} + \frac14 b^4$ with $a = \frac14 X_1^{3\/4}$ and $b = 8$.\n\nCombining this with \eqref{eq:estuR} and \eqref{eq:ERrenorm2}\nand using that $X_1 \le E^{+,R}(u,w)$ and $E(R) \le E^{+,R}(u,w)$ we obtain\n\eqref{eq:lowerER} (for $R \ge R_0$). \n\n\nNow if $E^+(u,w) = \infty$ then it follows from \eqref{eq:lowerER} that \n$\lim_{R \to \infty} E^{R}(u,w) = \infty$. Assume now\n$E^+(u,w) < \infty$. 
Since $E^{+,R}(u,w) \le E^+(u,w)$ it follows from Lemma \ref{lem:boundwg}\nthat $\lim_{R \to \infty} u(R)\/R = 0$. In view of \n \eqref{eq:ERrenorm2} we deduce that $\lim_{R \to \infty} E^{R}(u,w)$ exists and\n \begin{equation}\nE(u,w) = E^+(u,w) + u(1) + \frac14 - \int_0^1 r \d r \frac{\psi^2}{r^2} < \infty.\n\end{equation}\n\end{proof}\n\n\begin{cor}\nFor the unrenormalized energy $I_\lambda$ (cf.~eq.~\eqref{eq:11}),\n\[\n|\log \lambda|-C \leq \lambda^{-2}\inf I_\lambda \leq |\log \lambda|+C\quad\text{ for all }\lambda\in(0,R_0^{-1}).\n\]\n\end{cor}\n\begin{proof}\nBy eq.~\eqref{eq:covhe}, \n\begin{equation}\n \label{eq:3}\n\hat E_\lambda^1(\hat u,\hat w)=\lambda^2 \hat E^{\lambda^{-1}}_1(\hat u_\lambda, \hat w_\lambda)\n\quad \mbox{where} \quad \hat u_\lambda = \lambda\hat u(\cdot\/\lambda), \quad \hat w_\lambda = \hat\nw(\cdot\/\lambda)\,. \n\end{equation}\nUsing the definition of $\hat E^R$, eq.~\eqref{eq:13}, we get\n\[\n\inf \hat E_\lambda^1\leq \lambda^2 \hat E^{\lambda^{-1}}(0,\psi)\leq C\lambda^2\,,\n\]\nwhich proves the upper bound since\n\[\nI_\lambda=\hat E^1_\lambda+\lambda^2\int_0^1\psi(r\/\lambda)^2\d r\/r=\hat\nE^1_\lambda+\lambda^2(C+|\log \lambda|)\,.\n\] \nThe lower bound follows from eq.~\eqref{eq:3} since by \n \eqref{eq:lowerER} we have \n \begin{equation}\n \hat E^{\lambda^{-1}}_1(\hat u_\lambda, \hat w_\lambda)\n = E^{\lambda^{-1}}(u_\lambda, w_\lambda) \ge -C\n \end{equation}\n for $\lambda \le 1\/ R_0$.\n\end{proof}\n\nNow we are in a position to prove the existence of minimizers for the\nrenormalized energy.\n\n\begin{theorem}\n\label{exist2}\nWe have $\inf_{\mathcal W} E \in {\mathbb R}$ and the functional $E$ attains its minimum in ${\mathcal W}$.\nMoreover there exists a minimizer $(u,w)$ of $E$ which 
satisfies\n\begin{equation} \label{eq:convergencew2}\nw + \psi \ge 0 \quad \mbox{a.e.}\n\end{equation}\n\end{theorem}\n\n\n\n\begin{proof} \nBy Lemma \ref{cor:welldef}, $E$ is bounded from below.\nMoreover $E(0,0) < \infty$. Thus $\inf E \in {\mathbb R}$.\nLet $(u_j, w_j)$ be a minimizing sequence, i.e.,\n\begin{equation}\nE(u_j, w_j) \to \inf_{\mathcal W} E.\n\end{equation}\nThe energy $E$ does not change if we replace $\hat w = w + \psi$ by $|\hat w|$. We may thus assume in addition that\n\begin{equation}\nw_j + \psi \ge 0.\n\end{equation}\nSince the sequence $E(u_j, w_j)$ is bounded we deduce from \n \eqref{eq:lowerER} that \n\begin{equation}\nE^+(u_j, w_j) \le C \quad \forall j \in {\mathbb N}.\n\end{equation}\nLet $0 < a < b < \infty$. Then it follows directly from the formula for $E^+$\nthat $u_j$ and $w'_j$ are bounded in $L^2((a,b))$. By Lemma\n\ref{lem:boundwg} the sequence $w_j$ is bounded in $L^\infty$. Thus $w_j$ and $w_j^2$ are bounded in $L^2((a,b))$. \nSince $u'_j + 2w_j + w_j^2$\nis bounded in $L^2((a,b))$ it follows that $u'_j$ is bounded in $L^2((a,b))$. \nThus there exists a subsequence of $(u_j, w_j)$ which converges weakly in \n$W^{1,2}((a,b))$. We can apply this argument with $a = 1\/k$, $b = k$ for $k \in {\mathbb N}$, $k \ge 2$\nand successively select subsequences. By a diagonalization argument there exists a single subsequence\n(still denoted by $(u_j, w_j)$) that converges weakly in $W^{1,2}_{loc}((0, \infty))$:\n\begin{equation}\n(u_j, w_j) \rightharpoonup (u,w) \quad \mbox{in $W^{1,2}_{loc}((0, \infty))$. 
}\n\\end{equation}\nBy the compact Sobolev embedding this implies\n\\begin{equation} \\label{eq:locunif}\n(u_j, w_j) \\to (u,w) \\quad \\mbox{locally uniformly in $(0, \\infty)$}\n\\end{equation}\nIn particular we have the weak convergences\n\\begin{equation}\n2 w_j + w_j^2 + u'_j \\rightharpoonup 2 w + w^2 + u \\quad \\mbox{in $L^2_{loc}((0, \\infty))$}\n\\end{equation}\nand\n\\begin{equation}\n\\hat w_j^2 - 1 + \\hat u'_j \\rightharpoonup \\hat w^2 - 1 + \\hat u \\quad \\mbox{in $L^2_{loc}((0, \\infty ))$,}\n\\end{equation}\nwhere $\\hat w_j = w_j + \\psi$ , $\\hat u_j = u_j + \\psi\/ 2r$, $\\hat w = w + \\psi$ , $\\hat u = u + \\psi\/ 2r$.\n\nWeak lower semicontinuity of the $L^2$ norm implies that for $0< a < 1 < b < \\infty$.\n\\begin{align} \n& \\int_a^1 r \\d r \\left[ (\\hat w^2 - 1 + \\hat u'^2)^2 + \\frac{\\hat u^2}{r^2} + \\frac{\\hat w^2}{r^2} + \\hat w'^2 \\right] {\\nonumber} \\\\\n\\le &\n\\liminf_{j \\to \\infty}\n\\int_a^1 r \\d r \\left[ (\\hat w_j^2 - 1 + \\hat u_j'^2)^2 + \\frac{\\hat u_j^2}{r^2} + \\frac{\\hat w_j^2}{r^2} + \\hat w_j'^2 \\right] \n \\label{eq:lsc1}\n\\end{align}\nand\n\\begin{align} \n& \\int_1^b r \\d r \\left[ (2 w + w^2 + u'^2)^2 + \\frac{ u^2}{r^2} + w'^2 \\right] {\\nonumber} \\\\\n\\le &\n\\liminf_{j \\to \\infty}\n\\int_a^1 r \\d r \\left[ (2 w_j + w_j^2 + u_j'^2)^2 + \\frac{u_j^2}{r^2} + w_j'^2 \\right].\n\\label{eq:lsc2}\n\\end{align}\nAdding these two inequalities we get\n\\begin{equation} \\label{eq:lsc3}\nE^+(u,w; [a,b]) \\le \\liminf_{j \\to \\infty} E^+(u_j, w_j),\n\\end{equation}\nwhere $E^+(u,w;[a,b])$ is defined as the sum of the terms on the left hand side of\n\\eqref{eq:lsc1} and \\eqref{eq:lsc2}. Finally the monotone convergence theorem \nimplies that we can take the limit $a \\to 0$ and $b \\to \\infty$ in \\eqref{eq:lsc3}\nand deduce\n\\begin{equation}\nE^+(u,w) \\le \\liminf_{j \\to \\infty} E^+(u_j, w_j). 
\n\\end{equation}\nThis in particular implies that $E^+(u,w) < \\infty$ and thus $(u,w) \\in {\\mathcal W}$. \n\nWe now use the relation \\eqref{eq:identE+} between $E^+$ and $E$\nand the fact that $u_j(1) \\to u(1)$ (see \\eqref{eq:locunif})\nto deduce that\n\\begin{equation}\nE(u,w) \\le \\liminf_{j \\to \\infty} E(u_j, w_j) = \\inf_W E. \n\\end{equation}\nThus $(u,w)$ minimizes $E$ in ${\\mathcal W}$.\n\nFinally the condition $w_j + \\psi \\ge 0$ implies that $w + \\psi \\ge 0$.\n\\end{proof}\n\n\n\n\n\\begin{rem}[Self-penetration of solutions]\nThe von K\\'arm\\'an model displays a pathology at the origin for the situation\nwe want to model. Namely, the solutions we have found show interpenetration of\nmatter. Consider again the Euler-Lagrange equation obtained by variation of\n$\\hat u$,\n\\[\n\\left(r\\left(\\hat w^2-1+\\hat u'\\right)\\right)'=\\frac{\\hat u}{r}\\,.\n\\]\nSince $\\hat w\\to 0$ for $r\\to 0$, the qualitative behaviour of solutions $u$ near the\norigin is the same as the one of solutions of the linear equation\n\\[\n\\left(r\\left(\\hat u'-1\\right)\\right)'=\\frac{\\hat u}{r}\\,.\n\\]\nThe solutions of this latter equation are given by\n\\[\n\\frac{1}{2}r\\log r+C_1r+C_2r^{-1}.\n\\]\nThe integration constant $C_2$ has to be set to zero to fulfill the boundary\ncondition $\\hat u(0)=0$. Going back to eq.~\\eqref{uhatvardef}, we see that the value of $U$ will be\nnegative in some punctured neighbourhood of the origin and we have\nself-penetration of the solution (somewhere in the region $r\\sim\n\\exp\\left(-\\varepsilon^{-2}\\right)$). We expect that this pathology could be cured by\nincluding nonlinear or higher order terms in $u$ in our model. We\nrefrain from doing so, since the main aspect of this work is the analysis of\nthe solutions away from the origin.\n\\end{rem}\n\n\n\n\n\n\\section{Decay properties}\n\\label{decay1}\n\n\n\n\n\n\n\nWe now turn our interest to the decay properties of minimizers. 
We first show\nthat $\lim_{r \to \infty} w(r) = 0$ (if $w + \psi \ge 0$).\nThis sets the stage for an application of stable manifold theory, by which we prove that $u$ and $w$ decay \nlike a stretched exponential $\exp(- c \sqrt{r})$.\n\n\begin{lemma} \n\label{cor:ww+2estim} Assume that $(u,w)\in{\mathcal W}$\n with $E^+(u,w)<\infty$ and $w + \psi \ge 0$.\nThen\n\begin{equation} \label{eq:convergencew}\n\lim_{R \to \infty} w(R) = 0\,.\n\end{equation}\n\end{lemma}\n\n\begin{proof}\nIt follows from Lemma \ref{lem:boundwg} and Lemma \ref{lem:estsupH1a}\nthat\n\begin{equation}\n\lim_{r \to \infty} \left( 2 w(r) + w^2(r) \right) = 0.\n\end{equation}\n\nNow $ w(r) \ge -1$ and the function $F(s) = 2s + s^2 =(s+1)^2 - 1$ \nhas a continuous inverse on $[-1, \infty)$. Thus $\lim_{r \to \infty} w(r) =0$.\n\end{proof}\n\n\bigskip \n\nNow we show that minimizers actually have decay as $\exp(- c \sqrt r)$ at infinity.\nWe will use the following standard tool from stable manifold theory:\n\begin{theorem}[\cite{MR0069338}]\nLet $s_0\in {\mathbb R}$, let $A\in {\mathbb R}^{n\times n}$ be a matrix with $k\leq n$ \neigenvalues with negative real part and $n-k$ eigenvalues with positive real\npart, and let $F:{\mathbb R}^n\times [ s_0,\infty)\to{\mathbb R}^n$\nhave the property that for every $\varepsilon>0$, there exist $S\in [ s_0,\infty)$ and\n$\delta_0>0$ such that\n\[\n\left|F(x,s)-F(\bar x,s)\right|\leq \varepsilon |x-\bar x|\n\]\nwhenever $|x-\bar x|\leq \delta_0$ and $s\geq S$.\nThen there exists $\delta>0$ such that for every $\bar s>S$ there exists a $k$-dimensional submanifold $\bar M(\bar s)$ of ${\mathbb R}^n$ containing\nthe origin such that for every $p\in \bar M(\bar s)$ with $|p|<\delta$ the initial value problem\n\begin{equation}\n\frac{\d}{\d s}x(s)=Ax(s)+F(x,s)\,,\quad x(\bar s)=p\label{1stsys}\n\end{equation}\nhas a solution $x:[\bar s,\infty)\to{\mathbb R}^n$ with the property 
\n\\[\n|x(s)|=o(\\exp(-\\sigma s)) \\text{ as }s\\to\\infty\n\\] \nfor any $\\sigma>0$ such that the absolute values of the real parts of the\neigenvalues of $A$ are all bigger than $\\sigma$. Furthermore there exists\n$\\eta>0$ independent of $\\bar s$ such that, if $p\\not\\in \\bar M(\\bar s)$, then \n\\begin{equation*}\n\\|x\\|_{L^\\infty(s_0,\\infty)}>\\eta\\,.\n\\end{equation*}\n\\label{stabmanif}\n\\end{theorem}\n\n\n\n\\begin{prop}\nFor any minimizer $(u,w)$ of $E$ with $w\\geq-\\psi$,\n\\begin{align*}\n\\left|\\left(u(r),u'(r),w(r),w'(r)\\right)\\right|=o\\left(\\exp(-\\sigma \\sqrt{r})\\right)\n\\end{align*}\nas $r\\to\\infty$ for any $\\sigma<2$.\n\\label{decayprop}\n\\end{prop}\n\\begin{proof}\n\n\n\n\nSince $E$ and $E^+$ only differ by a boundary term they lead to the same Euler-Lagrange equations. \nThus for $r > 1$ the Euler Lagrange equations for $(u,w)$ are the same as the Euler-Lagrange equations for the functional\n\\begin{equation}\n\\int_1^\\infty r \\d r \\left [(2 w(r) + w^2(r) + u'(r))^2 + \\frac{u^2(r)}{r^2} + w'^2(r) \\right].\n\\end{equation}\n\nIt turns out that these EL equations are not of the form required in Theorem \\ref{stabmanif} since the linear\npart is not autonomous (up to a contribution which decays as $r \\to \\infty$). We will make a change of variables\nto bring the EL equations in a suitable form. To motivate that change of variables it suffices to focus\non the linearization, i.e., we may neglect the terms $w^2$ in the energy functional (as we already know $w \\to 0$\nat $\\infty$). The linearized equations are\n\\begin{align}\n(r (2 w + u'))' & = \\frac{u}{r} \\\\\n2 r (2 w + u') & = (r w')' = r w'' + w'\n\\end{align}\nDifferentiation of the second equation and use of the first yields $r w''' + 2\nw'' = 2 u\/ r$. 
Thus $2 u' = r^2 w^{(4)} + 4 r w^{(3)} + 2 w''$ and inserting this into the second equation we\nget the linearized fourth order equation for $w$\n\begin{equation}\n\frac{1}{2}(r^2 w^{(4)} + 4 r w^{(3)} + 2 w'') + 2w = \frac{1}{2} w'' + \frac{1}{2r} w'\,.\n\end{equation}\nNow we make the change of variables $w(r) = \underline{w}(r^\alpha)$. Then\n\begin{equation}\nw' = \alpha r^{\alpha - 1} \underline{w}', \quad w^{(k)} = \alpha^k r^{k (\alpha - 1)} \underline{w}^{(k)} + \mbox{lower order derivatives}.\n\end{equation}\nThis suggests choosing $\alpha = \frac12$ so that to leading order the linear equation becomes \n$\frac{1}{32} \underline{w}^{(4)} + 2 \underline{w} = 0$. We will now derive the EL equations in the new variables in \ndetail. It is most convenient to first transform the functional.\n\nWe make the change of variables\n\begin{equation}\nw(r) = \underline{w}(\sqrt{r}), \quad u(r) = \underline{u}(\sqrt{r}), \quad s = \sqrt{r}, \quad r = s^2.\n\end{equation}\nThen \n\begin{equation}\nw'(r) = \frac{1}{2\sqrt{r}} \underline{w}'(\sqrt{r}) = \frac{1}{2s} \underline{w}'(s), \quad u'(r) = \frac{1}{2s} \underline{u}'(s).\n\end{equation}\nThus for $(u,w) \in {\mathcal W}$\n\begin{align} \label{eq:trafo_energy}\nE^+(u,w) \ge &\int_1^\infty r \d r \left [(2 w(r) + w^2(r) + u'(r))^2 + \frac{u^2(r)}{r^2} + w'^2(r) \right] {\nonumber} \\\n= & \int_1^\infty 2 s^3 \d s \left[ (2\underline{w}(s) + \underline{w}^2(s) + \frac{1}{2s} \underline{u}'(s) )^2 + \frac{\underline{u}^2(s)}{s^4} + (\frac{1}{2s} \underline{w}'(s))^2 \right] {\nonumber} \\\n = &2 \int_1^\infty \d s \n\left[ s \left( s (2 \underline{w} + \underline{w}^2) + \frac12 \underline{u}' \right)^2\n + \frac{\underline{u}^2}{s} + \frac14 s \underline{w}'^2 \right].\n\end{align}\n\n\nFrom \eqref{eq:trafo_energy} we easily obtain the Euler-Lagrange equations (for $s > 1$)\n\begin{align}\n2 s^2 (1 + \underline{w}) \left(s (2 
\\underline{w} + \\underline{w}^2) + \\frac12 \\underline{u}'\\right) &= \\frac14 (s \\underline{w}')' \\label{eq:ELs1}\\\\\n\\frac12 \\left[s \\left(s (2 \\underline{w} + \\underline{w}^2) + \\frac12 \\underline{u}' \\right)\\right]' = \\frac{ \\underline{u}}{s}. \\label{eq:ELs2}\n\\end{align}\nThese equations first hold in the weak sense, but by standard elliptic regularity we get\n$\\underline{w} \\in W^{2,2}_{loc}$ and\n $\\underline{u} \\in W^{2,2}_{loc}$ and the equations hold a.e. By induction one easily\nsees that $(\\underline{u}, \\underline{w}) \\in W^{k,2}_{loc}$ \nfor all $k$ and hence $(\\underline{u}, \\underline{w}) \\in C^\\infty$. \n\n\n\nWe choose $s_0>1$ large enough so that \n\\begin{equation}\n\\frac{1}{2}<1+\\underline{w}(s)<\\frac{3}{2} \\text{ for }s\\geq s_0\\,,\\label{lem1appls}\n\\end{equation} \nwhich is\npossible by \\eqref{eq:convergencew}.\n In this region we may divide eq.~\\eqref{eq:ELs1} by $2 s(1+w)$\nand get\n\\begin{equation} \\label{eq:ELw2nd}\ns \\left(s (2 \\underline{w} + \\underline{w}^2) + \\frac12 \\underline{u}' \\right) = \\frac{1}{8s} \\frac{1}{(1+\\underline{w})} (s \\underline{w}')'.\n\\end{equation}\nThen \\eqref{eq:ELs2} becomes\n\\begin{equation} \\label{eq:ELus}\n\\underline{u}(s) = \\frac{1}{2} s \\left[ \\frac{1}{8s (1+\\underline{w})} (s \\underline{w}')'\\right]'.\n\\end{equation}\nInserting this into \\eqref{eq:ELs1} we get a fourth order equation for $\\underline{w}$\n\\begin{equation}\n(1+\\underline{w}) \\left[ s(2\\underline{w} + \\underline{w}^2) + \\frac{1}{4} \\left(s \\left(\\frac{(s \\underline{w}')'}{8s (1+\\underline{w})} \\right)'\\right)' \\right]\n= \\frac{1}{8s^2} (sw')'.\n\\end{equation}\n\nThis can be rewritten as\n\\begin{equation}\n\\underline{w}^{(4)}(s)=-64 \\underline{w}(s)+g(x(s),s)+h(x(s),s)\\label{4thmod}\n\\end{equation}\nwhere $x(s)=(\\underline{w}^{(3)}(s),\\underline{w}''(s),\\underline{w}'(s),\\underline{w}(s))$, and $g:{\\mathbb R}^4\\times {\\mathbb R}^+\\to {\\mathbb R}$ 
contains the nonlinear terms in $x$, $h:{\\mathbb R}^4\\times {\\mathbb R}^+\\to {\\mathbb R}$ the linear ones with coefficients $O(s^{-1})$. More precisely,\n\\begin{align}\ng(x,s)=&\\frac{1}{1+\\underline{w}}\\left(2\\underline{w}'\\underline{w}^{(3)}+\\frac{4}{s}\\underline{w}''\\underline{w}'+\\underline{w}''^2-\\frac{1}{s^2}\\underline{w}'^2-\\frac{1}{1+\\underline{w}}\\left(2\\underline{w}'^2\\underline{w}''+\\frac{2}{s}\\underline{w}'^3\\right)\\right){\\nonumber}\\\\\n&-96\\underline{w}^2-32\\underline{w}^3\\label{gdef}\\\\\nh(x,s)=&-\\frac{2}{s}\\underline{w}^{(3)}+\\frac{5}{s^2}\\underline{w}^{(2)}+\\frac{3}{s^3}\\underline{w}'\\label{hdef}\n\\end{align}\nIn particular, for $f:=g+h$, and given $\\varepsilon>0$, there exist $S\\geq s_0$, $\\delta>0$ such that\n\\begin{equation*}\n\\left|f(x,s)-f(\\bar x,s)\\right|\\leq \\varepsilon |x-\\bar x|\n\\end{equation*}\nwhenever $|x-\\bar x|<\\delta$ and $s>S$. Additionally, we have $f(0,s)=0$. Now we may rewrite eq.~\\eqref{4thmod} as a system of first order equations, \n\\begin{equation*}\n\\frac{\\d}{\\d s}x(s)^T=Ax(s)^T+F(x,s)\\,\n\\label{stabmanifappl}\n\\end{equation*}\nwhere $F:{\\mathbb R}^4\\times [s_0,\\infty) \\to {\\mathbb R}^4$ is given by $F(x,s)=(f(x,s),0,0,0)^T$ and\n\\begin{equation*}\nA=\\left(\\begin{array}{cc} 0 & -64\\\\{\\rm Id}_{3\\times 3} & 0\\end{array}\\right)\n\\end{equation*}\nThe eigenvalues of $A$ are $2(\\pm 1\\pm i)$, i.e., $A$ has two\neigenvalues with positive real part and two with negative real part. \n\n\n\nWe already know that $\\lim_{s \\to \\infty}\\underline{w}(s) = \\lim_{s \\to \\infty} w(s^2) = 0$. We will now show that\n\\begin{equation} \\label{eq:fulldecay}\n\\lim_{s\\to\\infty}\\underline{w}^{(3)}(s)=\\lim_{s\\to\\infty}\\underline{w}''(s)=\\lim_{s\\to\\infty}\\underline{w}'(s)=0\\,.\n\\end{equation}\n\nIt follows that $\\lim_{s \\to \\infty}x(s) = 0$. 
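The eigenvalue claim can be read off from the characteristic polynomial: $A$ is the companion matrix of the linearization $\underline{w}^{(4)}=-64\,\underline{w}$, so

```latex
% spectrum of the companion matrix A of w^{(4)} = -64 w
\det(\mu\,{\rm Id}_{4\times 4}-A)=\mu^4+64=0
\quad\Longleftrightarrow\quad
\mu=2\sqrt{2}\,e^{i\pi(2k+1)/4},\qquad k=0,1,2,3,
```

that is, $\mu\in\{2(1+i),\,2(-1+i),\,2(-1-i),\,2(1-i)\}$. All four eigenvalues have $|{\rm Re}\,\mu|=2$, which is the origin of the restriction $\sigma<2$ in Proposition \ref{decayprop}.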
From Theorem \\ref{stabmanif}, it follows\nthat there exists $\\bar s$ such that $x(\\bar s)\\in \\bar M(\\bar s)$, and hence $|x(s)|=o(\\exp(-\\sigma s))$ for $\\sigma<2$. \nIt remains to prove \\eqref{eq:fulldecay}.\n\n\nWe first show $\\underline{w}'' \\in L^2((s_0, \\infty); \\d s\/s)$. From \\eqref{eq:ELs1} we get\n\\begin{equation} \n| \\underline{w}'' + s^{-1} \\underline{w}'| \\le C s \\left| s (2 \\underline{w} + \\underline{w}^2) + \\frac12 \\underline{u}' \\right|.\n\\end{equation}\nTherefore\n\\begin{align*} \\label{eq:L2second}\n \\int_{s_0}^\\infty \\frac{\\d s}{s} \\underline{w}''^2\\ \\leq \n2 \\int_{s_0}^\\infty \\frac{\\d s}{s} \\left[ s^2 \\left( s (2 \\underline{w} + \\underline{w}^2) + \\frac12 \\underline{u}' \\right)^2\n+ s^{-2} \\underline{w}'^2 \\right] \n < \\infty\n\\end{align*}\nby \\eqref{eq:trafo_energy}.\nAgain by \\eqref{eq:trafo_energy} we have $\\underline{w}' \\in L^2((s_0, \\infty); s \\d s)$. Thus \nLemma \\ref{lem:estsupH1a} yields\n\\begin{equation} \\label{eq:convergencew1}\n\\lim_{s \\to \\infty} \\underline{w}'(s) = 0.\n\\end{equation}\n\n\n\nNext we derive a weighted $L^2$ estimate for the third derivative $\\underline{w}^{(3)}$.\nIt follows from \\eqref{eq:ELus} that\n\\begin{equation*}\n\\frac{\\underline{u}}{s}=\\frac{1}{16(1+\\underline{w})}\\left(\\underline{w}^{(3)}+\\frac{\\underline{w}''}{s}-\\frac{\\underline{w}'}{s^2}-\\frac{\\underline{w}'\\underline{w}''}{1+\\underline{w}}-\\frac{\\underline{w}'^2}{s(1+\\underline{w})}\\right)\\,,\n\\end{equation*}\n\n\nwhich implies (using\nthe convergence of $w$ and $w'$)\n\\begin{equation}\n\\left|\\underline{w}^{(3)}(s)\\right|^2\\leq C\\left(\\left|\\frac{\\underline{u}(s)}{s}\\right|^2+\\left|\\frac{\\underline{w}''}{s}\\right|^2+\\left|\\frac{\\underline{w}'}{s^2}\\right|^2+ \\underline{w}''^2 \\underline{w}'^2+\\left|\\frac{\\underline{w}'^2}{s}\\right|^2\\right)\\,.\\label{w3rough}\n\\end{equation}\nWith the possible exception of $\\underline{w}''^2 \\underline{w}'^2$ 
all terms on the right hand side are integrable against $s \\d s$. Thus we get for $s_1 > s_0$\n\\begin{equation}\n\\int_{s_0}^{s_1} s \\d s \\left| \\underline{w}^{(3)}\\right|^2 \\le C (1 + \\sup_{[s_0, s_1]} |\\underline{w}''|^2),\n\\label{eq:1}\n\\end{equation}\nwhere $C$ is controlled by $E^+(u,w)$ and in particular independent of $s_1$. \nNow we get as usual\n\\begin{align}\n&\\sup_{[s_0,s_1]} |\\underline{w}''|^2 - \\inf_{[s_0,s_1]} |\\underline{w}''|^2 \\le 2 \\int_{s_0}^{s_1} \\d s | \\underline{w}'' \\underline{w}^{(3)}| {\\nonumber} \\\\\n\\le & 2 \\| \\underline{w}'' \\|_{L^2((s_0, \\infty); \\d s\/ s)} C^{1\/2} ( 1 + \\sup_{[s_0, s_1]} |\\underline{w}''|^2)^{1\/2} {\\nonumber} \\\\\n\\le & 4 C \\| \\underline{w}'' \\|_{L^2((s_0, \\infty); \\d s\/ s)}^2 + \\frac14 (1 + \\sup_{[s_0, s_1]} |\\underline{w}''|^2).\n\\label{eq:boundsupw2}\n\\end{align}\nMoreover \n\\begin{equation}\n\\inf_{[s_0, s_1]} \\underline{w}''^2 \\le \\frac{1}{\\ln s_1\/ s_0} \\int_{s_0}^{s_1} \\frac{\\d s}{s} \\underline{w}''^2 \\le \\frac{C}{\\ln s_1\/ s_0}.\n\\end{equation}\nThus absorbing the term $ \\frac14 (1 + \\sup_{[s_0, s_1]} |\\underline{w}''|^2)$ into the left hand side of \n\\eqref{eq:boundsupw2} and taking $s_1 \\to \\infty$ we get\n\\begin{equation}\n\\frac34 \\sup_{[s_0,\\infty)} \\underline{w}''^2 \\le 4 C \\| \\underline{w}'' \\|_{L^2((s_0, \\infty); \\d s\/ s)}^2 < \\infty\n\\end{equation}\nand by \\eqref{eq:1}\n\\begin{equation}\n\\int_{s_0}^{\\infty} s \\d s \\left| \\underline{w}^{(3)} \\right|^2 < \\infty.\n\\end{equation}\nSince $\\underline{w}'' \\in L^2((s_0, \\infty); \\d s\/s)$ it follows that \n\\begin{equation}\n\\lim_{s \\to \\infty} \\underline{w}''(s) = 0.\n\\end{equation}\n\n\n\n\nFinally we claim that $\\underline{w}^{(4)} \\in L^2 ((s_0, \\infty); \\d s\/s )$. Indeed from the previous \n$L^2$ bounds we see immediately that $h \\in L^2 ((s_0, \\infty); \\d s\/s) $.
Moreover\nthe convergence of $\\underline{w}'$ and $\\underline{w}''$ implies that\n$g(x(s),s) \\le C (|\\underline{w}^{(3)}| + |\\underline{w}'| + |\\underline{w}''| + |\\underline{w}|)$. By \\eqref{eq:gL2}\n we have\n\\begin{equation}\n\\int_{s_0}^\\infty \\frac{\\d s}{s} (2 \\underline{w} + \\underline{w}^2)^2 = \n\\frac12 \\int_{s_0^2}^\\infty \\frac{\\d r}{r} (2 w + w^2)^2 < \\infty.\n\\end{equation}\nSince $\\frac32 < 2 +\\underline{w}(s) < \\frac52$ by \\eqref{lem1appls}, we get \n$\\underline{w} \\in L^2((s_0, \\infty); \\d s\/s)$. Together with the weighted $L^2$ estimates\nfor $\\underline{w}^{(i)}$ for $i=1,2,3$ we get $\\underline{w}^{(4)} \\in L^2 ((s_0, \\infty); \\d s\/s )$.\nIn combination with the estimate $\\underline{w}^{(3)} \\in L^2((s_0, \\infty); s \\d s)$ this implies that\n\\begin{equation}\n\\lim_{s \\to \\infty} \\underline{w}^{(3)}(s) = 0.\n\\end{equation}\n\n\nThus $\\underline{w}^{(i)}=o(\\exp(-\\sigma\ns))$ as $s\\to \\infty$ for $i=0,1,2,3$ and all $\\sigma<2$.\nThis implies\n$u(r),u'(r),w(r),w'(r)=o(\\exp(-\\sigma\\sqrt{r}))$ as $r\\to \\infty$ for all $\\sigma<2$.\n\\end{proof}\n\n\\bigskip\n\n\\begin{proof}[Proof of Theorem \\ref{mainthm}]\nFor $\\lambda=1$, this follows from Theorem \\ref{exist2} and Proposition\n\\ref{decayprop}. For $\\lambda\\neq 1$, we recall that by \\eqref{eq:covhe}, we have $\\hat E_\\lambda^R(\\hat u,\\hat\nw)=\\lambda^2 \\hat E_1^{R\/\\lambda^2}(\\hat u,\\hat w)$ and hence \n\\[\n\\hat E_\\lambda(\\hat u,\\hat\nw)=\\lambda^2\\lim_{R\\to\\infty} \\hat E_1^{R\/\\lambda}(\\hat u_\\lambda,\\hat w_\\lambda)=\\lambda^2\nE(u_\\lambda,w_\\lambda)\n\\]\nfor all $(\\hat u,\\hat w)\\in {\\mathcal W}$, where we used the notation $\\hat\nu_\\lambda=\\lambda^{-1}\\hat u(\\lambda\\cdot)$, $\\hat w_\\lambda=\\hat w(\\lambda\\cdot)$, $u_\\lambda(r)=\\hat\nu_\\lambda(r)-\\psi(r)\/(2r)$, $w_\\lambda=\\hat w_\\lambda-\\psi$ and Lemma\n\\ref{cor:welldef}.
Now all statements follow from the case $\\lambda=1$ already treated.\n\\end{proof}\n\n\n\n\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\n\n\nIn spite of the primary importance of discovering causal relations in science, the statistical analysis of empirical data has historically shied away from causality. \nOnly relatively recently has a rigorous theory of causality emerged\n(see, for instance, \\cite{Pearlbook,Spirtesbook}), showing that empirical data indeed can contain information about causation rather than mere correlation. Since then, causal inference has quickly become influential. Examples range from applications to the inference of genetic \\cite{friedman2004inferring} and social networks \\cite{Steeg2011}, to a better understanding of the role of causality within quantum physics \\cite{Leifer2013,Fritz2012,Fritz2014,Henson2014,Chaves2015a,Piennar2014,ried2015quantum,Costa2016,horsman2016can}.\n\nTo formalize causal mechanisms it has become popular to use directed acyclic graphs (DAGs) where nodes denote random variables and directed edges (arrows) account for their causal relations. \nCentral problems within this context include \\emph{inference} or \\emph{model selection}: `Given samples from a number of observable variables, which DAG should we associate with them?', as well as \\emph{hypothesis testing}: `Can the observed data be explained in terms of an assumed DAG?'\nHere, we concentrate on the latter problem and \npropose a novel solution based on the covariances that a given causal structure gives rise to. To understand the relevance and applicability of this method it is useful to summarize the difficulties that we typically face when approaching such problems.\n\nThe most common method to infer the set of possible DAGs compatible with empirical observations is based on the Markov condition and the faithfulness assumption \\cite{Pearlbook,Spirtesbook}.
Under these conditions, and in the case where all variables composing a given DAG can be assumed to be empirically accessible, the conditional statistical independencies implied by the graph contain all the information required to test for the compatibility of some data with the causal structure. However, for a variety of practical and fundamental reasons, we do quite generally face causal discovery in the presence of latent (hidden) variables, that is, variables that may play an important role in the causal model, but nonetheless cannot be accessed empirically. In this case we have to characterize the set of marginal probability distributions that a given DAG can give rise to. Unfortunately, as is widely recognized, generic causal models with latent variables impose highly non-trivial constraints on the possible correlations compatible with it \\cite{Pitowsky1991,Pearl1995,Geiger1999,Bonet2001,Garcia2005,Kang2006,Kang2007,evans2012graphical,lee2015causal,Chaves2016,Rosset2016,wolfe2016inflation}. Although the marginal compatibility in principle can be completely characterized in terms of semi-algebraic sets \\cite{Geiger1999}, it appears that the resulting tests in practice are computationally intractable beyond a few variables \\cite{Garcia2005,lee2015causal}.\n\n\n\nOne possible approach to deal with the apparent intractability is to consider relaxations of the original problem, that is, to design tests that define incomplete lists of constraints (outer approximations) to the set of compatible distributions \\cite{Bonet2001,Garcia2005,Kang2006,Kang2007,moritz2014discriminating,Chaves2014,Chaves2014b}. 
For instance, this approach has previously been considered in \\cite{Chaves2014,Chaves2014b,steudel2015information,weilenmann2016non}, with tests based on entropic (information-theoretic) inequalities; an idea originally conceived to tackle foundational questions in quantum mechanics \\cite{Braunstein1988,Cerf1997,Chaves2012,FritzChaves2013,Chaves2013entropic,Chaves2015entropy,Chaves2016entropic}. Here we consider a relaxation in a similar spirit, but based on covariances rather than entropies. \n\nBeyond dealing with potential computational intractabilities, an additional benefit of a relaxation based on covariances is that it involves at most bipartite marginals, and it seems reasonable to expect that this would be less data-intensive than methods based on the full multivariate distribution of the observables. \n\n\n\n\\begin{figure}[h!]\n \\includegraphics[width= 11cm]{Figure1.pdf} \n\\caption{\\label{FigBipartite} {\\bf Bipartite DAGs.} In this investigation we focus on the class of causal models where all correlations among the observables are due to a collection of independent latent variables. This setting can be described in terms of DAGs that are bipartite, where the latter means that all edges are directed from latent variables ($L_1,L_2,L_3$) to the observables ($O_1,O_2,O_3,O_4,O_5$), and where there are no edges within each of these subsets. \n}\n\\end{figure}\n\n\\subsection{Main assumptions and results}\n\nWe focus on a particular class of latent causal structures, where we assume that there are no direct causal influences between the observables, but only from latent variables to observables (see figure \\ref{FigBipartite}). Hence, all correlations among the observables are due to the latent variables. This setting can be described by the class of DAGs where all edges are directed from latent vertices to observable vertices, but no edges within these two groups (see figure \\ref{FigBipartite}).
In other words, we consider the case of DAGs that are bipartite, with the coloring `observable' and `latent'.\nAlternatively, this can be described in terms of hypergraphs, where each independent latent cause is associated with a hyperedge consisting of the affected observable vertices (see e.g.~\\cite{evans2015graphs}). \n\n\nThis class of graphs has previously been considered in the context of marginalization of Bayesian networks \\cite{moritz2014discriminating,steudel2015information,evans2015graphs}. These graphs moreover provide examples of the difficulties that arise when characterizing latent structures \\cite{Branciard2010,Fritz2012,Branciard2012,tavakoli2014nonlocal,Chaves2014,Chaves2014b}, where standard techniques based on the use of conditional independencies can even yield erroneous results (for a discussion, see e.g.~\\cite{Spekkens2015}). \n Such latent structures furthermore emerge in the context of Bell's theorem \\cite{Bell1964}, as well as in recent generalizations \\cite{Branciard2010,Fritz2012,Branciard2012,tavakoli2014nonlocal,Chaves2016,Rosset2016,saunders2016experimental,carvacho2016experimental}, where they can be used to show that quantum correlations between distant observers --thus without direct causal influences between them-- are incompatible with our most basic notions of cause and effect.\n\n\nIrrespective of the nature of the observables (categorical or continuous) we are free to assign vectors to each possible outcome of the observables. Our main result is to show that each bipartite DAG implies a particular decomposition of the resulting covariance matrix into positive semidefinite components. Hence, we can test whether the observed covariance matrix is compatible with a hypothetical bipartite DAG by checking whether it satisfies the corresponding positive semidefinite decomposition, and we will in the following somewhat colloquially refer to this as the `semidefinite test'.
\nThe semidefinite test can thus be phrased as a semidefinite membership problem, which in turn can be solved via semidefinite programming. The latter is known to be computationally efficient from a theoretical point of view, and has a good track record concerning algorithms that are efficient also in practice (see discussions in \\cite{vandenberghe1996semidefinite}). \n\n\\subsection{Structure of the paper} \nIn section \\ref{SecSemidefDec} we derive a general decomposition of covariance matrices, which forms the basis of our semidefinite test. In section \\ref{SecObsLat} we rephrase this general result to fit with the particular structure of observables and latent variables that we employ, and in section \\ref{SecDecBipartDAGs} we derive the main result, namely that every bipartite DAG implies a particular semidefinite decomposition of the observable covariance matrix. Section \\ref{SecConverse} focuses on the converse, namely that every covariance matrix that satisfies the decomposition of a given bipartite DAG can be realized by a corresponding causal model. Section \\ref{SecOperatorInequalities} relates the semidefinite decomposition to previous types of operator inequalities introduced in \\cite{VonPrillwitz15MasterThesis}.\nTo obtain a covariance matrix we may be required to assign vectors to the outcomes of the random variables, and section \\ref{SecUniversalFeatureMaps} discusses the dependence of the semidefinite test on this assignment. \nIn section \\ref{SecMonotonicity} we briefly discuss the fact that the compatibility with a given bipartite DAG is not affected if the observables are processed locally, and that the semidefinite test respects this basic property under suitable conditions. \nSection \\ref{SecMonotoneFamily} considers a specific class of distributions where it is possible to analytically determine the conditions for a semidefinite decomposition. 
In section \\ref{SecComparison}, this class of distributions serves as a testbed for comparisons with the above-mentioned entropic tests. We conclude with a summary and outlook in section \\ref{SecSummaryOutlook}.\n\n \n\\section{\\label{SecSemidefDec}Semidefinite decomposition of covariance matrices}\nIn this section we develop the basic structure that forms the core of the semidefinite test. In essence it is obtained via a repeated application of a law of total variance for covariance matrices.\n\nFor a vector-valued random variable $Y$, in a real or complex inner product space $\\mathcal{V}$, we define the covariance matrix of $Y$ as \n\\begin{equation}\n\\mathrm{Cov}(Y) := E\\Big(\\big(Y- E(Y)\\big) \\big(Y- E(Y)\\big)^{\\dagger}\\Big) = E(YY^{\\dagger})-E(Y)E(Y)^{\\dagger},\n\\end{equation}\n where $E(Y)$ denotes the expectation of $Y$ and $\\dagger$ denotes transposition if the underlying vector space is real, and Hermitian conjugation if the space is complex. \nOne should note that $E(Y)^{\\dagger} = E(Y^{\\dagger})$.
We also define the cross-correlation for a pair of vector-valued variables $Y',Y$ (not necessarily belonging to the same vector space)\n \\begin{equation}\n \\label{crosscorrelation}\n \\mathrm{Cov}(Y',Y) := E(Y'Y^{\\dagger})-E(Y')E(Y)^{\\dagger},\n\\end{equation}\nwhere $\\mathrm{Cov}(Y,Y) = \\mathrm{Cov}(Y)$.\n For a pair of random variables $X,Y$ we denote the expectation of $Y$ conditioned on $X$ as $E(Y|X)$.\nVia the conditional expectation we can also define the conditional covariance matrix \n\\begin{equation}\n\\begin{split}\n\\mathrm{Cov}(Y|X) := & E\\Big(\\big(Y- E(Y|X)\\big) \\big(Y- E(Y|X)\\big)^{\\dagger} \\Big|X \\Big) = E(YY^{\\dagger}|X)-E(Y|X)E(Y|X)^{\\dagger}.\n\\end{split}\n\\end{equation}\nIn a similar manner we can also obtain a conditional cross-correlation between two random vectors $Y',Y$\n\\begin{equation}\n\\begin{split}\n \\mathrm{Cov}(Y',Y|X) := & E\\Big(\\big(Y'- E(Y'|X)\\big) \\big(Y- E(Y|X)\\big)^{\\dagger} \\Big|X \\Big) = E(Y'Y^{\\dagger}|X)-E(Y'|X)E(Y|X)^{\\dagger}.\n\\end{split}\n\\end{equation}\n\nThe starting point for our derivations is the law of total expectation\n\\begin{equation}\n\\label{lawoftotalexpectation}\nE(Y) = E\\big(E(Y|X)\\big),\n\\end{equation}\n where the `outer' expectation corresponds to the averaging over the random variable $E(Y|X)$. 
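To make this bookkeeping concrete, here is a minimal self-contained check of the law of total expectation on a made-up discrete joint distribution (the distribution and helper names are ours, chosen only for illustration); exact rational arithmetic avoids any floating-point caveats:

```python
from fractions import Fraction as F

# Toy joint distribution p(x, y) of a pair (X, Y); Y is scalar here,
# but the vector-valued case works the same way, componentwise.
p = {(0, 1): F(1, 4), (0, 3): F(1, 4), (1, 2): F(1, 3), (1, 5): F(1, 6)}
assert sum(p.values()) == 1

def E(f):
    """Expectation of f(X, Y) under the joint distribution p."""
    return sum(prob * f(x, y) for (x, y), prob in p.items())

def E_given(f, x0):
    """Conditional expectation E(f(X, Y) | X = x0)."""
    px0 = sum(prob for (x, _), prob in p.items() if x == x0)
    return sum(prob * f(x, y) for (x, y), prob in p.items() if x == x0) / px0

# Law of total expectation: E(Y) = E( E(Y|X) ), exactly.
lhs = E(lambda x, y: y)
rhs = E(lambda x, y: E_given(lambda _, yy: yy, x))
assert lhs == rhs == F(5, 2)
```

Here $E(Y|X=0)=2$ and $E(Y|X=1)=3$, each occurring with probability $\frac12$, so both sides equal $\frac52$.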
The law of total expectation can be iterated, such that for three random variables $Y,X,Z$, we have a law of total conditional expectation\n\\begin{equation}\n\\label{lawofconditionaltotalexpectation}\nE(Y|Z) = E\\Big(E(Y|X,Z)\\Big|Z\\Big),\n\\end{equation}\nand thus $E(Y) = E\\big(E(Y|Z)\\big) = E\\Big(E\\big(E(Y|X,Z)\\big|Z\\big)\\Big)$.\n\nFrom the law of total expectation (\\ref{lawoftotalexpectation}) one can obtain a covariance-matrix version of the law of total variance\n\\begin{equation}\n\\label{lawoftotalcovariance}\n\\mathrm{Cov}(Y) = \\mathrm{Cov}\\big( E(Y|Z) \\big) + E\\big( \\mathrm{Cov}(Y|Z) \\big),\n\\end{equation}\nwhich can be confirmed by expanding the two sides of the above equality and applying (\\ref{lawoftotalexpectation}).\n\n\n\nFor three random variables $Y,W,Z$ a conditional version of the law of total covariance reads\n\\begin{equation}\n\\label{lawofconditionaltotalcovariance}\n\\begin{split}\n\\mathrm{Cov}(Y|Z) = \\mathrm{Cov}\\Big( E(Y|W,Z)\\Big|Z\\Big) + E\\Big( \\mathrm{Cov}(Y|W,Z)\\Big|Z\\Big),\n\\end{split}\n\\end{equation}\nwhich can be obtained by expanding the right hand side and applying the law of total conditional expectation (\\ref{lawofconditionaltotalexpectation}).\n\n\n\nThe following lemma is obtained via an iterated application of the law of total covariance (\\ref{lawoftotalcovariance}) and the law of total conditional covariance (\\ref{lawofconditionaltotalcovariance}). One may note the similarities with the chain-rule for entropies (see e.g.~chapter 2 in \\cite{cover2012elements}).\n\\begin{Lemma}\n\\label{ChainDecomposition}\nLet $Y$ be a vector-valued random variable on a finite-dimensional real or complex inner product space $\\mathcal{V}$, let $X_1,\\ldots, X_N$ be random variables over the same probability space. 
Assuming that the underlying measure is such that all involved conditional expectations and covariances are well defined, then \n\\begin{equation}\n\\label{zioizu}\n\\mathrm{Cov}(Y) = R + \\sum_{n=1}^{N}C_n, \n\\end{equation}\nwhere $R$ and $C_1,\\ldots, C_N$ are positive semidefinite operators on the space $\\mathcal{V}$, defined by\n\\begin{equation}\n\\label{lldavaldl}\n\\begin{split}\nC_{1} := & \\mathrm{Cov}\\big( E(Y|X_1) \\big),\\\\\nC_{n} := & E\\Big(\\mathrm{Cov}\\big( E(Y|X_1,\\ldots,X_n)\\big|X_1,\\ldots,X_{n-1}\\big)\\Big),\\quad n = 2,\\ldots,N,\\\\\nR := & E\\big(\\mathrm{Cov}(Y|X_1,\\ldots,X_N)\\big).\n\\end{split}\n\\end{equation}\n\\end{Lemma}\nOne may note that the above decomposition is not necessarily unique; we could potentially obtain a new decomposition if the variables in the sequence $X_1,\\ldots, X_N$ are permuted.\n\\begin{proof}\nThe law of total covariance (\\ref{lawoftotalcovariance}) for $Z=X_1$, combined with the law of total conditional covariance (\\ref{lawofconditionaltotalcovariance}) for $Z:= X_1, W:= X_2$ yields \n\\begin{equation}\n\\label{dnjvadlkv}\n\\begin{split}\n\\mathrm{Cov}(Y) = & \\mathrm{Cov}\\big( E(Y|X_1) \\big) + E\\Big( \\mathrm{Cov}\\big( E(Y|X_2,X_1)\\big|X_1\\big) \\Big) + E\\big( \\mathrm{Cov}(Y|X_2,X_1)\\big). 
\n\\end{split}\n\\end{equation}\n\n\n\nSuppose that for some $j\\geq 2$ it holds that\n\\begin{equation}\n\\label{kjdfvlkad}\n\\begin{split}\n\\mathrm{Cov}(Y) \n = & \\mathrm{Cov}\\big( E(Y|X_1) \\big) \\\\\n& +\\sum_{n=2}^{j} E\\bigg(\\mathrm{Cov}\\Big( E(Y|X_1,\\ldots,X_n)\\Big|X_1,\\ldots,X_{n-1} \\Big)\\bigg)\\\\\n& + E\\big( \\mathrm{Cov}(Y|X_1,\\ldots,X_j) \\big).\n\\end{split}\n\\end{equation}\nThe law of total conditional covariance (\\ref{lawofconditionaltotalcovariance}), with $W:= X_{j+1}$ and $Z := X_1,\\ldots, X_{j}$, gives\n\\begin{equation*}\n\\begin{split}\n\\mathrm{Cov}(Y|X_1,\\ldots, X_j) = & \\mathrm{Cov}\\Big( E(Y|X_1,\\ldots, X_j,X_{j+1})\\Big| X_1,\\ldots, X_j\\Big) + E\\Big( \\mathrm{Cov}(Y|X_1,\\ldots, X_j,X_{j+1})\\Big|X_1,\\ldots, X_j\\Big).\n\\end{split}\n \\end{equation*}\n By inserting this expression into the last line of (\\ref{kjdfvlkad}) one again obtains (\\ref{kjdfvlkad}), but with $j$ replaced by $j+1$. By (\\ref{dnjvadlkv}) we can see that (\\ref{kjdfvlkad}) is true for $j = 2$. Thus, by induction up to $j= N$, and the identifications in (\\ref{lldavaldl}), we obtain (\\ref{zioizu}).\n\n\n\n Note that $\\mathrm{Cov}\\big( E(Y|X_1,\\ldots,X_{n-1},X_n)\\big|X_1 = x_1,\\ldots,X_{n-1}=x_{n-1}\\big)$ is a positive semidefinite operator on $\\mathcal{V}$ for each value of $x_1,\\ldots,x_{n-1}$. Hence, by averaging over these variables, and thus implementing the expectation that yields $C_n$, we still have a positive semidefinite operator on $\\mathcal{V}$. The same observation applies to $ R = E\\big( \\mathrm{Cov}(Y|X_1,\\ldots,X_N) \\big)$.\n\\end{proof}\n\n\n\\section{\\label{SecObsLat}Observable vs.\\ latent variables, and feature maps}\n\n\n\n\n\nHere we consider the decomposition developed in the previous section for the more specific setting of observable and latent variables.\n\nWe consider a collection of observable variables $O_1,\\ldots,O_M$.
To each of these variables $O_{m}$ we associate a mapping $Y^{(m)}$, in some contexts referred to as a `feature map' \\cite{ScholkopfLearningWithKernels}, into a finite-dimensional vector space $\\mathcal{V}_m$. We denote the resulting vector-valued random variables by $Y_{m}:=Y^{(m)}(O_m)$, and for the sake of simplicity we will in the following tend to abuse the terminology and refer to the vectors $Y_m$ themselves as feature maps. \nWe also define the joint random vector $Y := \\sum_{m=1}^{M}Y_{m}$ on $\\mathcal{V} := \\bigoplus_{m=1}^{M}\\mathcal{V}_m$. (Hence, we can view $Y$ as the concatenation of the vectors $Y_m$.)\nOne should note that while we regard the observable variables $O_m$ as being part of the setup that is `given', the feature maps $Y^{(m)}$ are part of the analysis, and we are free to assign these as we see fit. (Concerning the question of how the test depends on this choice, see section \\ref{SecUniversalFeatureMaps}.)\n\n\\begin{figure}[h!]\n \\includegraphics[width= 11cm]{Figure2.pdf} \n\\caption{\\label{FigObservableUnobservable} {\\bf Observables, latent variables, and feature maps.} The model consists of a collection of observable variables $O_1,\\ldots, O_M$ and a collection of latent variables $L_1,\\ldots, L_N$. Via feature maps, each $O_m$ is mapped to a vector $Y_m$ in a vector space $\\mathcal{V}_m$. On the vector space $\\mathcal{V} = \\bigoplus_{m=1}^{M}\\mathcal{V}_m$ we define the joint random vector $Y := Y_1+\\cdots + Y_M$.}\n\\end{figure}\n\nLet $P_m$ denote the projector onto the subspace $\\mathcal{V}_m$ in $\\mathcal{V}$.\nWe divide the total covariance matrix $\\mathrm{Cov}(Y)$ into the cross-correlations between the separate observable quantities\n$\\mathrm{Cov}(Y) = [\\mathrm{Cov}(Y_{m},Y_{m'})]_{m,m'=1}^{M}$. 
One can note that $\\mathrm{Cov}(Y_{m},Y_{m'}) = P_m\\mathrm{Cov}(Y)P_{m'}$.\n\nFor a collection of latent variables $L_1,\\ldots,L_N$, we make the identifications $X_j:=L_j$ in Lemma \\ref{ChainDecomposition}.\nSimilarly as for the covariance matrix we decompose the operators $C_n$ and $R$ into `block-matrices'\n$C_n = [C_{n}^{m,m'}]_{m,m' = 1}^{M}$ and $R = [R^{m,m'}]_{m,m'=1}^{M}$, with $C_n^{m,m'} := P_m C_n P_{m'}$ and $R^{m,m'} := P_m R P_{m'}$, where we can write\n\\begin{equation}\n\\label{BlockForm}\n\\begin{split}\nC_{1}^{m,m'} = & \\mathrm{Cov}\\big( E(Y_m|L_1),\\, E(Y_{m'}|L_1) \\big),\\\\\nC_{n}^{m,m'} = & E\\bigg(\\mathrm{Cov}\\Big( E(Y_m|L_1,\\ldots,L_n),\\, E(Y_{m'}|L_1,\\ldots,L_n) \\Big|L_1,\\ldots,L_{n-1}\\Big)\\bigg),\\\\\nR^{m,m'} = & E\\big(\\mathrm{Cov}(Y_m,\\, Y_{m'}|L_1,\\ldots,L_N)\\big),\n\\end{split}\n\\end{equation}\nfor $2\\leq n\\leq N$.\nIn terms of these blocks we can thus reformulate (\\ref{zioizu}) as\n\\begin{equation}\n\\mathrm{Cov}(Y_{m},Y_{m'}) = R^{m,m'} + \\sum_{n=1}^{N}C_n^{m,m'}.\n\\end{equation}\nOne should keep in mind that $C_n^{m,m'}$ and $R^{m,m'}$ in the general case are matrices (rather than scalar numbers) for each single pair $m,m'$.\n\n\n\n\\section{\\label{SecDecBipartDAGs}Decomposition of the covariance matrix for bipartite DAGs}\n\n\n\nWe define a bipartite DAG as a finite DAG $G = (V,E)$ with vertices $V$ and edges $E$, with a bipartition $V = O\\cup L$, $O\\cap L = \\emptyset$ such that all edges in $E$ are directed from the elements in $L$ (the latent variables) to the elements in $O$ (the observables). Since $G$ is finite, we enumerate the elements of $O$ as $O_1,\\ldots,O_M$ and the elements of $L$ as $L_1,\\ldots, L_N$. One may note that we generally will overload the notation and let $O_m$ and $L_n$ denote the vertices in the underlying bipartite DAG, as well as denoting the random variables associated with these vertices. 
\n\n\n\nFor a vertex $v$ in a directed graph $G$ we let $\\mathrm{ch}(v)$ denote the children of $v$, i.e., the set of vertices $v'$ for which there is an edge directed from $v$ to $v'$. We let $\\mathrm{pa}(v)$ denote the parents of $v$, i.e., the set of vertices $v'$ for which there is an edge directed from $v'$ to $v$.\nFor bipartite DAGs an element in $L$ can only have children in $O$ (and have no parents), and an element in $O$ can only have parents in $L$ (and no children). As an example, for the bipartite DAG in figure \\ref{FigBipartite} we have $\\mathrm{ch}(L_1) = \\{O_1,O_2,O_3\\}$, $\\mathrm{ch}(L_2) = \\{O_2,O_5\\}$, and $\\mathrm{ch}(L_3) = \\{O_3,O_5\\}$, and $\\mathrm{pa}(O_1) = \\{L_1\\}$, $\\mathrm{pa}(O_2) = \\{L_1,L_2\\}$, $\\mathrm{pa}(O_{3}) = \\{L_1,L_3\\}$, $\\mathrm{pa}(O_{4}) = \\emptyset$, and $\\mathrm{pa}(O_5) = \\{L_2,L_3\\}$. \n\nFor a causal model defined by a general DAG $G = (V,E)$ the underlying probability distribution can be described via the Markov condition where each edge represents a direct causal influence, and thus each vertex $v$ can only be directly influenced by its parents $\\mathrm{pa}(v)$, resulting in distributions of the form $P = \\Pi_{v\\in V}P\\big(v\\big|\\mathrm{pa}(v)\\big)$. Hence, for a bipartite DAG we get $P = \\Pi_{m}P\\big(O_m\\big|\\mathrm{pa}(O_m)\\big)\\Pi_n P(L_n)$, and thus all the latent variables are independent, and the observables are independent when conditioned on the latent variables.\n\nAs in the previous section, we map the observables $O_1,\\ldots,O_M$ to vectors $Y_1,\\ldots,Y_M$ in vector spaces $\\mathcal{V}_1,\\ldots,\\mathcal{V}_M$. \nFor each $n$ we define the projector $P^{(n)}$ in $\\mathcal{V}$ by\n\\begin{equation}\n\\label{Pdef}\nP^{(n)} := \\sum_{m\\in \\mathrm{ch}(L_n)}P_{m}.\n\\end{equation}\nHence, $P^{(n)}$ is the projector onto all subspaces of $\\mathcal{V}$ that are associated with the children $\\mathrm{ch}(L_n)$ of the latent variable $L_n$. 
(In the above sum we should strictly speaking write $\\sum_{m:O_m\\in \\mathrm{ch}(L_n)}$. However, in order to avoid overly cumbersome notation we will from time to time take the liberty of writing $m\\in \\mathrm{ch}(L_n)$ rather than $O_m\\in \\mathrm{ch}(L_n)$, and $n\\in \\mathrm{pa}(O_m)$ rather than $L_n\\in \\mathrm{pa}(O_m)$.)\n\n\\begin{figure}[h!]\n \\includegraphics[height= 3.5cm]{Figure3.pdf} \n\\caption{\\label{FigTriangle} {\\bf Example: Triangular bipartite DAG.}\nThe covariance matrix resulting from the observables in a bipartite DAG is subject to a decomposition where each latent variable gives rise to a positive semidefinite component, and where the support of that component is determined by the children of the corresponding latent variable. In the case of the `triangular' scenario of the bipartite DAG to the left, each of the three latent variables has two children. The covariance matrix, schematically depicted to the right, can consequently be decomposed into three positive semidefinite components, each with bipartite supports.
This observation yields a method (which we refer to as the `semidefinite test') to falsify a given bipartite DAG as an explanation of an observed covariance matrix.\n}\n\\end{figure}\n\n\\begin{Proposition}\n\\label{PropDecomposition}\nFor a bipartite DAG with latent variables $L_1,\\ldots, L_N$ and observables $O_1,\\ldots, O_M$ with assigned feature maps $Y_1,\\ldots,Y_M$ into finite-dimensional real or complex inner-product spaces $\\mathcal{V}_1,\\ldots,\\mathcal{V}_M$, the covariance matrix of $Y = \\sum_{m=1}^{M}Y_m$ satisfies \n\\begin{equation}\n\\label{poeto}\n\\mathrm{Cov}(Y) = R + \\sum_{n=1}^{N}C_n,\\quad R\\geq 0,\\quad C_n\\geq 0,\n\\end{equation}\nwhere\n\\begin{equation}\n\\label{acdsjd}\nP^{(n)}C_n P^{(n)} = C_n,\\quad R = \\sum_{m=1}^{M}P_m R P_m.\n\\end{equation}\nand where the projectors $P^{(n)}$ are as defined in (\\ref{Pdef}) with respect to the given bipartite DAG, and where $P_m$ is the projector onto $\\mathcal{V}_m$ in $\\bigoplus_{m=1}^{M}\\mathcal{V}_m$. \n\\end{Proposition}\nOne may note that if the span of the supports of $\\{P^{(n)}\\}_{n=1}^{N}$ covers $\\mathcal{V}$, then we can distribute the blocks $P_m RP_m$ of $R$ and add them to the different $C_n$ in such a way that the new operators still are positive semidefinite and satisfy the support structure of the original $C_n$s. The exception is if there is some observable that has no parent (as $O_4$ in figure \\ref{FigBipartite}).\n\n\\begin{proof}\nSelect an enumeration $L_{1},\\ldots, L_N$ of the latent variables. 
By Lemma \\ref{ChainDecomposition} we know that the covariance matrix $\\textrm{Cov}(Y)$ can be decomposed as in (\\ref{zioizu}) with the positive semidefinite operators $R$ and $C_n$ as defined in (\\ref{lldavaldl}).\nIn the following we will make use of the block-decomposition $C_n= [C^{m,m'}_n]_{m,m'=1}^M$ and $R = [R^{m,m'}]_{m,m'=1}^M$ with respect to the subspaces $\\mathcal{V}_1,\\ldots,\\mathcal{V}_M$ as in (\\ref{BlockForm}).\n\nIf $L_n\\notin \\mathrm{pa}(O_{m})$ then it means that $Y_m$ is independent of $L_n$ and thus\n\\begin{equation*}\n\\begin{split}\nE(Y_m|L_1,\\ldots,L_n) = E(Y_m|L_1,\\ldots,L_{n-1}).\n\\end{split}\n\\end{equation*}\nThe analogous statement is true if $L_{n}\\notin \\mathrm{pa}(O_{m'})$. \nBy this it follows that\n\\begin{equation}\n\\label{madflbm}\n\\begin{split}\n& \\mathrm{Cov}\\Big( E(Y_m|L_1,\\ldots,L_n),\\, E(Y_{m'}|L_1,\\ldots,L_n) \\Big|L_1,\\ldots,L_{n-1}\\Big) = 0,\\quad \\mathrm{if}\\quad L_n\\notin \\mathrm{pa}(O_{m})\\cap\\mathrm{pa}(O_{m'}).\n\\end{split}\n\\end{equation}\n\n\nNote that \n$ L_n\\in \\mathrm{pa}(O_{m})\\cap \\mathrm{pa}(O_{m'})\\,\\,\\Leftrightarrow\\,\\, O_m,O_{m'}\\in \\mathrm{ch}(L_n)$.\nBy comparing (\\ref{madflbm}) with (\\ref{BlockForm}) we can conclude that \n$C^{m,m'}_n = 0$ if $O_m\\notin \\mathrm{ch}(L_n)$ or $O_{m'}\\notin \\mathrm{ch}(L_n)$.\nThe definition of the projector $P^{(n)}$ in (\\ref{Pdef}) thus yields \n$P^{(n)}C_nP^{(n)} = C_n$. Moreover, we know from Lemma \\ref{ChainDecomposition} that $C_n\\geq 0$.\n\nBy construction, all the observables $O_1,\\ldots,O_M$ and thus also $Y_1,\\ldots,Y_M$ are independent when conditioned on the latent variables. 
Hence, \n\\begin{equation*}\nR^{m,m'} = E\\big(\\mathrm{Cov}(Y_m,\\,Y_{m'}|L_1,\\ldots,L_N )\\big)= \\delta_{m,m'}E\\big(\\mathrm{Cov}(Y_m|L_1,\\ldots,L_N)\\big),\n\\end{equation*}\nand thus $R = \\sum_{m=1}^{M}P_m RP_m$.\n\nOne may note that although the operators $C_n$ may change if we generate them via a permutation of the sequence of latent variables $L_1,\\ldots,L_N$, the resulting projectors $P^{(n)}$ would not change. Hence, the support-structure described by (\\ref{poeto}) and (\\ref{acdsjd}) is stable under rearrangements of the sequence.\n\\end{proof}\n\n\nDeciding whether a given matrix is of the form (\\ref{poeto}) can be done via semi-definite programming (SDP).\nWe end this section by describing an explicit SDP formulation. \n\n\nThe optimization will be over matrices $Z$ which can be interpreted as the direct sum of candidates for $R$ and the $C_n$'s.\nMore precisely, let\n\\begin{eqnarray}\n\t\\mathcal{Z} &:=& \n\t\\mathcal{V}_1 \\oplus \\dots \\oplus \\mathcal{V}_M \\oplus \n\t\\mathcal{W}_1 \\oplus \\dots \\oplus \\mathcal{W}_N, \\label{eqn:Zblocks} \\\\\n\t\\mathcal{W}_n &:=& \\bigoplus_{m\\in\\mathrm{ch}(L_n)} \\mathcal{V}_m, \\qquad n = 1,\\dots,N.\n\t\\label{eqn:Wblocks}\n\\end{eqnarray}\nLet $Z$ be a matrix on $\\mathcal{Z}$. \nAccording to the direct sum decomposition (\\ref{eqn:Zblocks}), the matrix $Z$ is a block matrix with $(M+N)\\times (M+N)$ blocks.\nWe think of the first $M$ diagonal blocks as carrying candidates for $R_m=P_m R P_m$ (which completely defines $R$, according to (\\ref{acdsjd})), while the rear $N$ diagonal blocks correspond to candidate $C_n$'s.\nNote that the $N$ rear summands in (\\ref{eqn:Zblocks}) are direct sums themselves.\nIt therefore makes sense to use double indices to refer to spaces inside the $\\mathcal{W}_n$'s.\nConcretely, the SDP includes affine constraints on the blocks \n$Z^{(M+n,m),(M+n,m')}$. \nThe first part \nof the indices selects the space $\\mathcal{W}_n$ in (\\ref{eqn:Zblocks}).
\nThe second part \nrefers to the space $\\mathcal{V}_m$ within $\\mathcal{W}_{n}$ according to (\\ref{eqn:Wblocks}).\nWe use the convention that $Z^{(M+n,m),(M+n,m')}$ denotes $0$ if either $\\mathcal{V}_{m}$ or $\\mathcal{V}_{m'}$ does not occur in $\\mathcal{W}_n$. \n\nWith these definitions, the semi-definite program that verifies whether a covariance matrix $\\mathrm{Cov}(Y)$ is of the form (\\ref{poeto}) reads\n\\begin{eqnarray}\n\t\\text{maximize}\\quad && 0 \\label{eqn:primal}\\\\\n\t\\text{subject to}\\quad && \n\t\\delta_{m,m'}\n\tZ^{(m),(m)}\n\t+\n\t\\sum_{n=1}^N Z^{(M+n,m),(M+n,m')} \n\t= \\mathrm{Cov}(Y)^{m,m'}, \\quad (m,m'=1, \\dots, M) \\label{eqn:affineConstraints}\\\\\n\t&& Z \\geq 0,\n\\end{eqnarray}\nwhere the optimization is over symmetric (Hermitian) matrices $Z$ on $\\mathcal{Z}$.\nUp to a trivial re-expression of the linear functions of $Z$ in terms of trace inner products with suitable matrices $F_i$, the optimization problem above is in the (dual) standard form of an SDP \\cite[Section~3]{vandenberghe1996semidefinite}.
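To make the block bookkeeping of the affine constraints concrete, here is a minimal sketch in pure Python (not part of the original text; all numbers are hypothetical and the spaces $\\mathcal{V}_m$ are taken one-dimensional). It assembles the left-hand side of the constraints by keeping each candidate $R_m$ on its diagonal block and embedding each candidate $C_n$ onto the blocks of the children of $L_n$, for a small bipartite DAG with $M=3$ observables and $N=2$ latents.

```python
# Toy sketch of the affine-constraint bookkeeping: M = 3 scalar observables,
# N = 2 latents with ch(L_1) = {0, 1} and ch(L_2) = {1, 2} (0-based indices).
# All numerical values below are hypothetical.

def embed(block, idx, M):
    """Place a small PSD block C_n onto the rows/columns idx of an M x M matrix."""
    out = [[0.0] * M for _ in range(M)]
    for a, i in enumerate(idx):
        for b, j in enumerate(idx):
            out[i][j] = block[a][b]
    return out

def mat_sum(*mats):
    n = len(mats[0])
    return [[sum(m[i][j] for m in mats) for j in range(n)] for i in range(n)]

M = 3
children = [[0, 1], [1, 2]]                    # ch(L_1), ch(L_2)
R_diag   = [0.5, 0.25, 1.0]                    # candidate blocks R_m >= 0
C_blocks = [[[1.0, 1.0], [1.0, 1.0]],          # C_1 >= 0, supported on V_0 (+) V_1
            [[2.0, 1.0], [1.0, 0.5]]]          # C_2 >= 0, supported on V_1 (+) V_2

# Keep each R_m on its diagonal block and embed each C_n on its children's
# blocks; this assembles exactly sum_m R_m + sum_n C_n.
R  = [[R_diag[i] if i == j else 0.0 for j in range(M)] for i in range(M)]
AZ = mat_sum(R, *(embed(C, ch, M) for C, ch in zip(C_blocks, children)))

print(AZ)  # the (0,2) entry is forced to 0: no latent has both O_0 and O_2 as children
```

If the assembled matrix matches an observed $\\mathrm{Cov}(Y)$, the candidate blocks constitute a feasible point of the SDP; the solver's job is to search over all such blocks.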
\n\nThe left-hand side of (\\ref{eqn:affineConstraints}) implicitly defines a linear map $\\mathcal{A}$ from matrices on $\\mathcal{Z}$ to matrices on $\\mathcal{V}$.\nExplicitly, $\\mathcal{A}$ maps off-diagonal blocks to $0$ and acts on block-diagonal matrices as\n\\begin{equation*}\n\t\\mathcal{A}: \n\tR_1 \\oplus \\dots \\oplus R_M \\oplus C_1 \\oplus \\dots \\oplus C_N\n\t\\mapsto\n\t\\sum_m R_m + \\sum_n C_n.\n\\end{equation*}\nThe constraints of the SDP can thus be written slightly more transparently as \n\\begin{eqnarray}\n\t&&\\mathcal{A}(Z) = \\mathrm{Cov}(Y), \\label{eqn:primalA} \\\\\n\t&&Z \\geq 0.\n\\end{eqnarray}\n\nIn this language, the dual of the above SDP is\n\\begin{eqnarray}\n\t\\text{minimize}\\quad && \\mathrm{tr}\\,\\big(X\\,\\mathrm{Cov}(Y)\\big) \\label{eqn:dual} \\\\\n\t\\text{subject to}\\quad && \n\t\\mathcal{A}^\\dagger(X) \\geq 0.\n\\end{eqnarray}\nLet $X^\\star$ be the optimizer of (\\ref{eqn:dual}).\nIf $\\mathrm{tr}\\big(X^\\star\\,\\mathrm{Cov}(Y)\\big)<0$, then the original SDP is infeasible and therefore $\\mathrm{Cov}(Y)$ is not of the form (\\ref{poeto}).\nIndeed, by construction, such an $X^\\star$ has a negative trace inner product with the covariance matrix, but a nonnegative trace inner product \n\\begin{equation*}\n\t\\mathrm{tr}\\,\\big(\\mathcal{A}(Z) X\\big)\n\t=\n\t\\mathrm{tr}\\,\\big(Z \\mathcal{A}^\\dagger(X)\\big)\n\t\\geq \n\t0\n\t\\qquad\n\t\\forall Z \\geq 0\n\\end{equation*}\nwith all matrices $\\mathcal{A}(Z), Z\\geq 0$ that could potentially be feasible for the primal SDP (\\ref{eqn:primalA}).\nThus, the dual SDP (\\ref{eqn:dual}) can be used to find a \\emph{witness} or a \\emph{dual certificate} $X^\\star$ for the incompatibility of a covariance matrix with a presumed causal structure.\nThe geometry of the involved objects is shown in figure~\\ref{fig:witness}.\nWe will refer to this dual construction in section~\\ref{SecSummaryOutlook}, where we sketch possibilities to base statistical hypothesis tests on such 
witnesses.\n\n\\begin{figure}[h!]\n \\includegraphics[width=8cm]{Figure4.pdf} \n\\caption{\\label{fig:witness} {\\bf Dual Certificates.} \n\tThe set of covariance matrices compatible with a certain causal structure in the sense of proposition~\\ref{PropDecomposition} forms a convex cone $\\Gamma$. \n\tThe cone is the feasible set of the SDP (\\ref{eqn:primal}).\n\tIf a given covariance matrix $\\mathrm{Cov}(Y)$ is \\emph{not} an element of that cone, then there exists a hyperplane (depicted in red) separating the two convex sets.\n\tA normal vector $X^\\star$ for the separating hyperplane can be found using the dual SDP (\\ref{eqn:dual}).\n}\n\\end{figure}\n\n\n\\section{\\label{SecConverse}Realizing a given decomposition}\n\n\nIn the previous section we have shown that the observable covariance matrix associated with a given bipartite DAG always satisfies a particular semidefinite decomposition implied by that DAG. Here we show the converse, in the sense that if we have a positive semidefinite operator that satisfies the decomposition obtained from a particular bipartite DAG, then there exists a causal model associated with that DAG that has the given operator as its observable covariance matrix (see figure \\ref{FigConverse}).\nThe proof is based on the observation that each positive semidefinite operator on a vector space can be interpreted as the covariance of a vector-valued random variable on that space (e.g.~as the covariance of a multivariate normal distribution, or of a variable over finite alphabets, as discussed in section \\ref{SecRealPosdefCovm}).
The essential idea is to assign an independent random variable to each component in the decomposition and take these as the latent variables; the support structure of the components furthermore determines the children of the latent variables.\n\n\n\n\n\\subsection{\\label{SecRealDecompo}Realization of decompositions}\n\nLet $O$ be a finite set, and let $\\{\\Omega_n\\}_{n=1}^{N}$ be a collection of subsets of $O$. The collection $\\{\\Omega_n\\}_{n=1}^{N}$ defines a bipartite DAG with $O$ as observable nodes, and a set of latent nodes $L_1,\\ldots, L_N$, with the edges assigned by the identification $\\mathrm{ch}(L_n):=\\Omega_n$ for $n= 1,\\ldots,N$. In the following we denote this bipartite DAG by $B(\\{\\Omega_n\\}_{n=1}^{N})$.\n\n\n\n\\begin{figure}[h!]\n \\includegraphics[width= 11cm]{Figure5.pdf} \n\\caption{\\label{FigConverse} A positive semidefinite operator on a set of selected orthogonal subspaces can be regarded as the covariance matrix of a corresponding collection of vector-valued variables. If this operator separates into positive semidefinite components (as schematically depicted to the left), then the support structures of these components define a bipartite DAG (on the right). The components in the decomposition can be interpreted as the covariance matrices of independent vector-valued latent variables. Moreover, the collection of subspaces on which such a component has support determines the observable children of the corresponding latent variable. Each observable variable can be constructed by adding the components collected from its parents. \n}\n\\end{figure}\n\n\n\n\n\\begin{Proposition}\n\\label{PropRealDecomp}\nLet $\\mathcal{V}_1,\\ldots,\\mathcal{V}_M$ be finite-dimensional real or complex inner-product spaces. \nFor a number $N$ let $\\{\\Omega_n\\}_{n=1}^{N}$ be a collection of subsets $\\Omega_n\\subset \\{1,\\ldots,M\\}$.
\nSuppose that $Q$ is a positive semidefinite operator on the space $\\mathcal{V} = \\mathcal{V}_1\\oplus\\cdots\\oplus \\mathcal{V}_M$, and that it can be written\n\\begin{equation}\nQ = R + \\sum_{n=1}^NC_n,\\quad P^{(n)}C_nP^{(n)} = C_n, \\quad R\\geq 0, \\quad C_n\\geq 0,\n\\end{equation}\nfor \n\\begin{equation}\n\\label{nvfdalavnl}\nP^{(n)} =\\sum_{m\\in \\Omega_n}P_m,\\quad R = \\sum_{m=1}^{M}P_m R P_m,\n\\end{equation}\nwith $P_m$ being the projectors onto the subspaces $\\mathcal{V}_m$. Then there exists a causal model for the bipartite DAG $B(\\{\\Omega_n\\}_{n=1}^{N})$ with vector-valued variables $Y_1,\\ldots, Y_M$ in $\\mathcal{V}_1,\\ldots,\\mathcal{V}_M$ such that $Y = Y_1+\\cdots +Y_M$ satisfies \n\\begin{equation}\n\\mathrm{Cov}(Y) = Q.\n\\end{equation}\n\\end{Proposition}\n\n\n\n\n\n\n\\begin{proof}\nLet us define the set $\\Omega := \\cup_{n=1}^{N}\\Omega_n$ and its complement $\\Omega^{c} := \\{1,\\ldots,M\\}\\setminus \\Omega$. By construction, \n$\\Omega^c$ is the set of observable nodes in the bipartite DAG $B(\\{\\Omega_{n}\\}_{n=1}^{N})$ that have no parents (like vertex $4$ in figure \\ref{FigBipartite}) and thus each element in $\\Omega$ has at least one parent. By the definition of $P^{(n)}$ in (\\ref{nvfdalavnl}) it follows that $\\sum_{m'\\in \\Omega^c}P_{m'} C_n = C_n \\sum_{m'\\in \\Omega^c}P_{m'} = 0$. In other words, the operators $C_n$ have no support on the subspaces belonging to parentless observable nodes. Let us now turn to the operator $R$ and its block diagonal decomposition $R = \\sum_{m=1}^{M}R_m$ with $R_m := P_mRP_m$. We can write $R = \\sum_{m'\\in \\Omega^c}R_{m'} + \\sum_{m\\in \\Omega}R_{m}$. Consequently, $Q$ can be decomposed in one operator $\\sum_{m\\in \\Omega}R_{m} + \\sum_{n=1}^NC_n$ on the subspace $\\bigoplus_{m\\in \\Omega}\\mathcal{V}_m$, and a collection of blocks $\\{R_{m'}\\}_{m'\\in \\Omega^c}$ on the corresponding subspaces $\\mathcal{V}_{m'}$ for $m'\\in \\Omega^c$. 
Since $R_{m'}$ is positive semidefinite, it can be interpreted as the covariance matrix of some random vector $Y_{m'}$ in $\\mathcal{V}_{m'}$. In the following we assume that we have made such an assignment for all $m'\\in \\Omega^{c}$. We also assume that these random vectors are independent. \n\nEach $R_m$ for $m\\in \\Omega$ has its support inside the support of at least one $C_n$. Hence, we can `distribute' the operators $R_m$ for $m\\in \\Omega$ by forming new positive semidefinite operators $\\tilde{C}_n\\geq 0$ such that \n\\begin{equation}\n\\label{nvnklva}\n\\sum_{m\\in \\Omega}R_m + \\sum_{n=1}^{N}C_n = \\sum_{n=1}^{N}\\tilde{C}_n =: \\tilde{Q}, \n\\end{equation}\nwhere one may note that $Q = \\sum_{m'\\in \\Omega^c}R_{m'} + \\widetilde{Q}$.\n\nIn the following we shall assign observable and latent random variables to the vertices of the bipartite DAG $B(\\{\\Omega_{n}\\}_{n=1}^{N})$.\nFor each $n\\in \\{1,\\ldots,N\\}$ and each $m\\in \\Omega$, let $\\mathcal{L}^{n}_m$ be a vector space that is isomorphic to $\\mathcal{V}_m$, and let $\\phi^{n}_m:\\mathcal{L}^{n}_m \\rightarrow \\mathcal{V}_m$ be an arbitrary isomorphism. (We assume that these isomorphisms preserve the inner-product structure, such that $\\phi^n_m$ maps orthonormal bases of $\\mathcal{L}^n_m$ to orthonormal bases of $\\mathcal{V}_m$.) We regard the spaces in the collection $\\{\\mathcal{L}^{n}_m\\}_{m\\in\\Omega, n=1,\\ldots, N}$ as being orthogonal to each other. Define $\\mathcal{L}^{n} : = \\bigoplus_{m\\in \\Omega}\\mathcal{L}^{n}_{m}$, and the corresponding isomorphism $\\phi^{n} := \\sum_{m\\in \\Omega}\\phi^{n}_m$. \nSince each $\\tilde{C}_n$ is positive semidefinite, it can be interpreted as the covariance matrix of a vector-valued random variable on $\\bigoplus_{m\\in \\Omega}\\mathcal{V}_m$. 
Consequently, we can also find a vector-valued random variable $L_n$ on $\\mathcal{L}^{n}$ such that \n\\begin{equation}\n\\label{nlkalkn}\n\\begin{split}\n\\tilde{C}_n =& \\mathrm{Cov}(\\phi^{n}L_n)= \\phi^{n}\\mathrm{Cov}(L_n){\\phi^{n}}^{\\dagger}.\n\\end{split}\n\\end{equation}\nWe assume that the random variables $L_1,\\ldots,L_N$ are independent of each other, and also independent of $\\{Y_{m'}\\}_{m'\\in \\Omega^c}$. \n\n\nThe variables $L_1,\\ldots,L_N$ serve as the latent variables corresponding to the latent nodes in the bipartite DAG $B(\\{\\Omega_{n}\\}_{n=1}^{N})$. In the following we shall construct a collection of vector-valued variables $\\{Y_{m}\\}_{m\\in \\Omega}$ as deterministic functions of the latent variables $L_1,\\ldots,L_N$, in such a way that these functions correspond to the arrows in $B(\\{\\Omega_{n}\\}_{n=1}^{N})$, thus guaranteeing a valid causal model associated with this bipartite DAG.\n\nLet us decompose the vector $L_n$ into its projections $L^{n}_m$ onto the subspaces $\\mathcal{L}^{n}_m$. For each $m\\in \\Omega_n = \\mathrm{ch}(L_n)$, the vector $L^{n}_m$ is associated to the observable node $O_m$. (One can imagine it to be transferred to node $O_m$.) Equivalently we can say that each observable node $m\\in \\Omega$ receives the vector $L^{n}_m$ from its ancestor $n\\in \\mathrm{pa}(O_m)$.\nOn the observable node $m\\in \\Omega$ we construct a new vector $Y_m$ by adding all the vectors `sent to it' from its parents\n\\begin{equation}\nY_m := \\sum_{n\\in \\mathrm{pa}(O_m)}\\phi^{n}_mL^{n}_m = \\sum_{n\\in \\mathrm{pa}(O_m)}\\phi^{n}_mL_n\n= \\sum_{n=1}^{N}\\phi^{n}_mL_n,\n\\end{equation}\nwhere the last equality follows since $P_mC_nP_m = 0$ if $O_m\\notin \\mathrm{ch}(L_n)$, or equivalently if $L_n\\notin \\mathrm{pa}(O_m)$, and thus $\\phi^{n}_mL_n = 0$ if $n\\notin \\mathrm{pa}(O_m)$. 
\nThe collection $\\{Y_{m'}\\}_{m'\\in \\Omega^c}\\cup \\{Y_m\\}_{m\\in \\Omega}$ we take as the observable variables, and we define $Y:= \\sum_{m'\\in \\Omega^c} Y_{m'} + \\sum_{m\\in \\Omega}Y_m = \\sum_{m'\\in \\Omega^c} Y_{m'} + \\sum_{n=1}^{N}\\phi^nL_n$.\n\nDue to the fact that all $Y_{m'}$ for $m'\\in \\Omega^c$ are independent, and also independent of all $L_n$, we get\n\\begin{equation*}\n\\begin{split}\n\\mathrm{Cov}(Y)= & \\sum_{m'\\in \\Omega^c}\\mathrm{Cov}(Y_{m'}) + \\mathrm{Cov}\\Big(\\sum_{n=1}^{N}\\phi^nL_n, \\sum_{n'=1}^{N}\\phi^{n'}L_{n'}\\Big)\\\\\n= & \\sum_{m'\\in \\Omega^c}R_{m'} + \\sum_{n,n'=1}^{N}\\phi^n\\mathrm{Cov}(L_n,L_{n'}){\\phi^{n'}}^{\\dagger}\\\\\n&[\\textrm{$L_1,\\ldots,L_N$ are independent}]\\\\\n= & \\sum_{m'\\in \\Omega^c}R_{m'} +\\sum_{n=1}^{N}\\phi^n\\mathrm{Cov}(L_n){\\phi^n}^{\\dagger}\\\\\n&[\\textrm{By (\\ref{nlkalkn})}]\\\\\n= & \\sum_{m'\\in \\Omega^c}R_{m'} +\\sum_{n=1}^{N}\\tilde{C}_n\\\\\n&[\\textrm{By (\\ref{nvnklva})}]\\\\\n= &Q. \n\\end{split}\n\\end{equation*}\n\\end{proof}\n\n\n\\subsection{\\label{SecRealPosdefCovm}Positive semidefinite operators as covariance matrices of vector-valued random variables over finite alphabets}\n\n\nThe material in the previous section presumes the existence of realizations of positive semidefinite operators as the covariance of some vector-valued variable, without making any restriction on their nature. As mentioned above, each positive semi-definite operator (over a finite-dimensional real or complex vector space) can be regarded as the covariance of a multivariate normal distribution. However, suppose that we require the variable to take only a finite number of outcomes.
Here we briefly discuss the conditions for such realizations, and provide an explicit construction (in the proof of Lemma \\ref{CondPosd}).\n\n \n\nFor a (possibly vector-valued) random variable over a finite alphabet, we say that the supported alphabet size is $D$ if there are precisely $D$ outcomes that occur with a non-zero probability.\n\\begin{Lemma}\n\\label{nbklfdbnl}\nIf a random variable $Y$ on a finite-dimensional real or complex inner-product space has a supported alphabet size $D$, then ${\\mathrm{rank}}\\big(\\mathrm{Cov}(Y)\\big) \\leq D-1$. \n\\end{Lemma}\n\\begin{proof}\n We first note that $\\mathrm{Cov}(Y) = \\sum_{j=1}^{D}p_jy_jy_j^{\\dagger} - \\sum_{j=1}^{D}p_jy_j \\sum_{j'=1}^{D}p_{j'}y_{j'}^{\\dagger}$, where $y_1,\\ldots,y_D$ denote the supported outcomes and $p_1,\\ldots,p_D$ their probabilities.\nSince $\\sum_{j=1}^{D}p_jy_j$ is a linear combination of $y_1,\\ldots,y_D$, it follows that the range of $\\sum_{j=1}^{D}p_jy_j \\sum_{j'=1}^{D}p_{j'}y_{j'}^{\\dagger}$ is a subset of the range of $\\sum_{j=1}^{D}p_jy_jy_j^{\\dagger}$, and thus ${\\mathrm{rank}}\\big(\\mathrm{Cov}(Y)\\big) \\leq {\\mathrm{rank}}\\big(\\sum_{j=1}^{D}p_jy_jy_j^{\\dagger}\\big)\\leq D$. However, in the following we shall show that the stronger inequality ${\\mathrm{rank}}\\big(\\mathrm{Cov}(Y)\\big) \\leq D-1$ holds. To see this, let us first consider the case that $y_1,\\ldots,y_D$ are linearly dependent. This means that at least one of these vectors is a linear combination of the others, and thus\n ${\\mathrm{rank}}\\big(\\mathrm{Cov}(Y)\\big) \\leq D-1$. \nLet us now instead assume that $y_1,\\ldots,y_D$ is a linearly independent set. Define $Q := [Q_{j,j'}]_{j,j' =1}^{D}$ by $Q_{j,j'}:= p_j\\delta_{j,j'} - p_{j}p_{j'}$; then $\\mathrm{Cov}(Y) = \\sum_{j,j'}y_{j}Q_{j,j'}y_{j'}^{\\dagger}$. Hence, $Q$ is the matrix representation of $\\mathrm{Cov}(Y)$ with respect to the linearly independent, but not necessarily orthonormal, set $y_1,\\ldots,y_D$.
One can realize that due to the linear independence, it follows that ${\\mathrm{rank}}\\big(\\mathrm{Cov}(Y)\\big) = {\\mathrm{rank}}(Q)$. Finally, let us define the $D$-dimensional vector $\\overline{1} := (1,\\ldots,1)^{\\dagger}\/\\sqrt{D}$. One can confirm that $Q\\overline{1} = 0$. Hence, ${\\mathrm{rank}}(Q) \\leq D-1$, and we can conclude that ${\\mathrm{rank}}\\big(\\mathrm{Cov}(Y)\\big) \\leq D-1$.\n\\end{proof}\n\n\n\n\n\\begin{Lemma}\n\\label{CondPosd}\nLet $C$ be a positive semidefinite operator on a finite-dimensional real or complex inner-product space $\\mathcal{V}$. \nFor every $D \\geq {\\mathrm{rank}}(C) +1$ there exists a vector-valued random variable $Y$ on $\\mathcal{V}$ with supported alphabet size $D$, such that $C = \\mathrm{Cov}(Y)$. However, $C \\neq \\mathrm{Cov}(Y)$ for all $Y$ with a supported alphabet size $D < {\\mathrm{rank}}(C)+1$.\n\\end{Lemma}\n\n\n\n\\begin{proof}\nLet $D$ be the supported alphabet size of a vector-valued random variable $Y$.\nIf $D < {\\mathrm{rank}}(C) +1$, then we know from Lemma \\ref{nbklfdbnl} that $C\\neq \\mathrm{Cov}(Y)$. Hence, it remains to show that it is possible to find a $Y$ such that $C = \\mathrm{Cov}(Y)$ for every $D \\geq {\\mathrm{rank}}(C)+1$. We thus wish to find a collection of vectors $y_1,\\ldots,y_D\\in\\mathcal{V}$, and $p_1,\\ldots,p_D$ with $p_j >0$, and $\\sum_{j=1}^Dp_j = 1$, such that $C = \\sum_{j=1}^{D}p_jy_jy_j^{\\dagger} - \\sum_{j=1}^{D}p_jy_j \\sum_{j'=1}^{D}p_{j'}y_{j'}^{\\dagger}$.\n\n\nLet $\\{z_k\\}_{k=1}^K$ be an orthonormal basis of the range (support) of the operator $C$, and let $P_C$ be the projector onto the range. Let $U$ be a matrix in $\\mathbb{R}^{D\\times D}$ ($\\mathbb{C}^{D\\times D}$) if the underlying space $\\mathcal{V}$ is real (complex). 
Since $D\\geq K+1$, we can assign the $(K+1)$th column of $U$ to be the vector $\\overline{1}:=(1,\\ldots, 1)^{\\dagger}\/\\sqrt{D}$ (i.e., $U_{j,K+1} = \\frac{1}{\\sqrt{D}}$ for all $j= 1,\\ldots, D$) and we arbitrarily complete the rest of the matrix $U$ such that it becomes orthogonal (unitary). Since $U$ is orthogonal (unitary), it follows that its columns form an orthonormal basis of $\\mathbb{R}^{D}$ ($\\mathbb{C}^{D}$). Hence, for each $k=1,\\ldots, K$ it must be the case that the vector $(U_{j,k})_{j=1}^{D}$ is orthogonal to $\\overline{1}$, and thus \n\\begin{equation}\n\\label{weorpb}\n\\sum_{j=1}^{D}U_{j,k} = 0,\\quad k=1,\\ldots, K.\n\\end{equation}\nNext, define the set of vectors $\\{v_{j}\\}_{j=1}^{D}\\subset \\mathcal{V}$ by $v_{j} := \\sum_{k=1}^{K}U_{j,k}z_{k}$.\nOne can confirm that $\\sum_{j=1}^{D}v_jv_j^{\\dagger} = P_C$, as well as $\\sum_{j=1}^{D}v_j\n= \\sum_{k=1}^{K}\\sum_{j=1}^{D}U_{j,k}z_{k}= 0$, where we use (\\ref{weorpb}).\nAs the final step we define $p_j := \\frac{1}{D}$ and $y_j := \\sqrt{D}\\sqrt{C}v_j$ for $j =1,\\ldots,D$.\nOne can confirm that \n\\begin{equation*}\n\\begin{split}\n\\sum_{j=1}^{D}p_jy_jy_j^{\\dagger} = \\sqrt{C}\\sum_{j=1}^{D}v_jv_j^{\\dagger}\\sqrt{C}= \\sqrt{C}P_C\\sqrt{C}= C,\\quad\\quad \\sum_{j=1}^{D}p_jy_j = \\frac{1}{\\sqrt{D}}\\sqrt{C}\\sum_{j=1}^{D}v_j= 0.\n\\end{split}\n\\end{equation*}\nThus, if a vector-valued random variable $Y$ takes $y_j$ with probability $p_j$, we have $\\mathrm{Cov}(Y) = C$.\n\\end{proof}\n\n\n\n\n\\section{\\label{SecOperatorInequalities}Implied operator inequalities}\n\n Here we show that the existence of positive semidefinite decompositions as in Proposition \\ref{PropDecomposition} implies operator inequalities of a type studied in \\cite{VonPrillwitz15MasterThesis}. 
\n\nConsider as usual a bipartite DAG with latent variables $L_1,\\ldots, L_N$ and observables $O_1,\\ldots, O_M$ with assigned feature maps $Y_1,\\ldots,Y_M$ into vector spaces $\\mathcal{V}_1,\\ldots,\\mathcal{V}_M$.\nFor a number $d$ (whose meaning will become evident shortly) we define the following map on the space of operators on $\\mathcal{V} = \\oplus_{m=1}^M\\mathcal{V}_m$ \n\\begin{equation}\n\\label{Mapdef}\n\\Phi(Q) := (d-1)P_1QP_1 + \\sum_{m=2}^{M}(P_mQP_m + P_1QP_m + P_mQP_1),\n\\end{equation}\nwhere $P_m$ are the projectors onto the spaces $\\mathcal{V}_m$ as discussed in section \\ref{SecObsLat}.\nTheorem 4.1 in \\cite{VonPrillwitz15MasterThesis} says, in essence, that if all the latent variables $L_n$ in the given bipartite DAG have degree at most $d$, then the resulting covariance matrix $\\mathrm{Cov}(Y)$ satisfies \n\\begin{equation}\n\\label{fmbkdlmfb}\n\\Phi\\big(\\mathrm{Cov}(Y)\\big)\\geq 0,\n\\end{equation}\nor if one prefers matrix notation\n\\begin{equation}\n\\label{savlnkaf}\n\\left[\\begin{matrix}\n (d-1)\\mathrm{Cov}(Y_1) & \\mathrm{Cov}(Y_1,Y_2) & \\cdots & \\cdots & \\mathrm{Cov}(Y_1,Y_M)\\\\\n\\mathrm{Cov}(Y_2,Y_1) & \\mathrm{Cov}(Y_2) & 0 & \\cdots & 0 \\\\\n\\vdots & 0 & \\ddots & \\ddots &\\vdots \\\\\n\\vdots & \\vdots & \\ddots & \\ddots & 0\\\\\n\\mathrm{Cov}(Y_M,Y_1) & 0 &\\cdots & 0 & \\mathrm{Cov}(Y_M) \n\\end{matrix}\\right]\\geq 0.\n\\end{equation} \nHence, by deleting a particular collection of blocks from the full covariance matrix $\\mathrm{Cov}(Y)$, and adding copies of the diagonal block $\\mathrm{Cov}(Y_1)$, we obtain a positive semidefinite operator. It may be worth emphasizing that mere positive semidefiniteness of $Q$ is not enough to guarantee that $\\Phi(Q)$ is positive semidefinite. Hence, (\\ref{fmbkdlmfb}) can indeed be used as a test of the underlying latent structure.
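As a small numerical sanity check of (\\ref{fmbkdlmfb}) (a sketch, not from the original text): take the triangle scenario with one-dimensional feature spaces, three independent unit-variance latents with children $\\{1,2\\}$, $\\{2,3\\}$, $\\{1,3\\}$, and each $Y_m$ the sum of the values of its parents. Every latent then has degree $2$, and $\\Phi$ with $d=2$ applied to the resulting covariance matrix is positive semidefinite, as the inequality requires. All numbers below are hypothetical.

```python
# Sketch of the map Phi for scalar blocks, checked on the triangle scenario:
# three unit-variance latents with children {0,1}, {1,2}, {0,2} (0-based),
# and Y_m the sum of its parents' values.

def phi(Q, d):
    """Scale the top-left block by (d-1), keep the first row/column and the
    diagonal, and zero out the remaining off-diagonal blocks."""
    M = len(Q)
    out = [[0.0] * M for _ in range(M)]
    out[0][0] = (d - 1) * Q[0][0]
    for m in range(1, M):
        out[m][m] = Q[m][m]
        out[0][m] = Q[0][m]
        out[m][0] = Q[m][0]
    return out

def det(A):
    """Determinant by Laplace expansion along the first row (small matrices only)."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

# Cov(Y) for the triangle model above: each entry counts shared parents.
cov = [[2.0, 1.0, 1.0], [1.0, 2.0, 1.0], [1.0, 1.0, 2.0]]
F = phi(cov, d=2)

# Every latent has degree 2, so Phi(Cov(Y)) must be PSD; this particular
# matrix is even positive definite, so all leading principal minors are positive.
minors = [det([row[:k] for row in F[:k]]) for k in (1, 2, 3)]
print(F, minors)
```

Replacing one of the latents by a hypothetical degree-$3$ latent while keeping $d=2$ can break the inequality, which is what makes (\\ref{fmbkdlmfb}) a nontrivial test.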
As one may note, equations (\\ref{fmbkdlmfb}) and (\\ref{savlnkaf}) single out observable $1$, but by relabelling we can obtain analogous inequalities for all observables. As an example, for the triangular scenario in figure \\ref{FigTriangle}, the inequality (\\ref{savlnkaf}) and its permutations take the form\n\\begin{equation*}\n\\left[\\begin{smallmatrix}\n\\mathrm{Cov}(Y_1) & \\mathrm{Cov}(Y_1,Y_2) & 0\\\\\n\\mathrm{Cov}(Y_2,Y_1) & \\mathrm{Cov}(Y_2) & \\mathrm{Cov}(Y_2,Y_3)\\\\\n0 & \\mathrm{Cov}(Y_3,Y_2) &\\mathrm{Cov}(Y_3) \n\\end{smallmatrix}\\right]\\geq 0,\\quad \n\\left[\\begin{smallmatrix}\n\\mathrm{Cov}(Y_1) & \\mathrm{Cov}(Y_1,Y_2) & \\mathrm{Cov}(Y_1,Y_3)\\\\\n\\mathrm{Cov}(Y_2,Y_1) & \\mathrm{Cov}(Y_2) & 0\\\\\n\\mathrm{Cov}(Y_3,Y_1) & 0 &\\mathrm{Cov}(Y_3) \n\\end{smallmatrix}\\right]\\geq 0,\\quad \n\\left[\\begin{smallmatrix}\n\\mathrm{Cov}(Y_1) & 0 & \\mathrm{Cov}(Y_1,Y_3)\\\\\n0 & \\mathrm{Cov}(Y_2) & \\mathrm{Cov}(Y_2,Y_3)\\\\\n\\mathrm{Cov}(Y_3,Y_1) & \\mathrm{Cov}(Y_3,Y_2) &\\mathrm{Cov}(Y_3) \n\\end{smallmatrix}\\right]\\geq 0.\n\\end{equation*}\n\n\nThe following proposition shows that the semidefinite decomposition implies the operator inequality (\\ref{fmbkdlmfb}) under the assumption that all the latent variables (regarded as vertices in a bipartite graph) have the degree at most $d$.\n\\begin{Proposition}\nFor a bipartite DAG with latent variables $L_1,\\ldots, L_N$, each with degree at most $d$, and observables $O_1,\\ldots, O_M$ with assigned feature maps $Y_1,\\ldots,Y_M$ into finite-dimensional real or complex inner-product spaces $\\mathcal{V}_1,\\ldots,\\mathcal{V}_M$, the covariance matrix of $Y = \\sum_{m=1}^{M}Y_m$ satisfies $\\Phi\\big(\\mathrm{Cov}(Y)\\big)\\geq 0$, where $\\Phi$ is as defined in (\\ref{Mapdef}).\n\\end{Proposition}\n\n\n\\begin{proof}\nWe know from Proposition \\ref{PropDecomposition} that $\\mathrm{Cov}(Y) = R+\\sum_{n=1}^NC_n$ with $P^{(n)}C_nP^{(n)} = C_n$, $C_n\\geq 0$, and where $R$ is such that 
$\\sum_{m}P_m R P_m = R$ and $R\\geq 0$. Since $R$ is block diagonal, the cross terms $P_1RP_m$ and $P_mRP_1$ for $m \\geq 2$ vanish, and we have \n\\begin{equation}\n\\label{abkdfbieuf}\n\\Phi(R) = (d-1)P_1RP_1 + \\sum_{m=2}^{M}P_mRP_m \\geq 0.\n\\end{equation}\n For each $C_n$ we can distinguish two cases. \n\nIn the first case, $C_n$ has no support on $\\mathcal{V}_1$, i.e., $P_1C_nP_1 = 0$. Due to the positive semidefiniteness of $C_n$ it also follows that $P_1C_nP_j = 0$ for $j = 2,\\ldots,M$, and thus\n\\begin{equation}\n\\label{mlvdmld}\n \\Phi(C_n) = \\sum_{m=2}^{M}P_m C_n P_m \\geq 0.\n\\end{equation}\n In the second case, $C_n$ does have support on $\\mathcal{V}_1$, meaning that $P_1C_n P_1 \\neq 0$. By assumption, the latent variable $L_n$ has degree at most $d$, which means that $C_n$ has support on at most $d$ of the subspaces $\\mathcal{V}_1,\\ldots,\\mathcal{V}_M$. Hence, apart from $\\mathcal{V}_1$, there are at most $d-1$ further spaces involved. We enumerate these spaces as $\\mathcal{V}_{m(2)},\\ldots,\\mathcal{V}_{m(d)}$, and let $\\mathcal{V}_{m(1)} = \\mathcal{V}_1$. Hence, it may be the case that $P_{m(j)}C_n P_{m(j)} \\neq 0$ for $j=1,\\ldots, d$, while $P_{m}C_n P_m = 0$ for the remaining values of $m$. Due to the positive semidefiniteness of $C_n$, we can analogously have $P_1C_n P_{m(j)}\\neq 0$, and $P_{m(j)}C_n P_{1}\\neq 0$, but $P_1C_n P_{m}= 0$, and $P_{m}C_n P_{1} = 0$ for the other values of $m$.
We can conclude that \n\\begin{equation}\n\\label{salkdfm}\n\\begin{split}\n\\Phi(C_n) = & (d-1)P_1C_nP_1 + \\sum_{m=2}^{M}(P_mC_nP_m + P_1C_nP_m + P_m C_nP_1)\\\\\n= & (d-1)P_1C_nP_1 + \\sum_{j=2}^{d}(P_{m(j)}C_nP_{m(j)} + P_1C_nP_{m(j)} + P_{m(j)}C_n P_1)\\\\\n= & \\sum_{j=2}^{d}\\Big(P_1 +P_{m(j)}\\Big) C_n\\Big(P_1 + P_{m(j)}\\Big)\\geq 0.\n\\end{split}\n\\end{equation}\nThe combination of (\\ref{abkdfbieuf}), (\\ref{mlvdmld}) and (\\ref{salkdfm}) yields $\\Phi\\big(\\mathrm{Cov}(Y)\\big) = \\Phi(R) + \\sum_{n=1}^{N}\\Phi(C_n)\\geq 0$, which proves (\\ref{fmbkdlmfb}).\n\\end{proof}\n\n\nWe note that the operator inequalities derived here need not be tight in all cases.\nIndeed, it is not hard to verify that the maps $\\Phi_\\alpha$ defined by\n\\begin{equation*}\n\t\\Phi_\\alpha: \n\\left[\\begin{smallmatrix}\n\\mathrm{Cov}(Y_1) & \\mathrm{Cov}(Y_1,Y_2) & \\mathrm{Cov}(Y_1,Y_3)\\\\\n\\mathrm{Cov}(Y_2,Y_1) & \\mathrm{Cov}(Y_2) & \\mathrm{Cov}(Y_2,Y_3)\\\\\n\\mathrm{Cov}(Y_3,Y_1) & \\mathrm{Cov}(Y_3,Y_2) &\\mathrm{Cov}(Y_3) \n\\end{smallmatrix}\\right]\n\\mapsto\n\\left[\\begin{smallmatrix}\n\\mathrm{Cov}(Y_1) & e^{i\\alpha} \\mathrm{Cov}(Y_1,Y_2) & \\mathrm{Cov}(Y_1,Y_3)\\\\\ne^{-i\\alpha}\\mathrm{Cov}(Y_2,Y_1) & \\mathrm{Cov}(Y_2) & \\mathrm{Cov}(Y_2,Y_3)\\\\\n\\mathrm{Cov}(Y_3,Y_1) & \\mathrm{Cov}(Y_3,Y_2) &\\mathrm{Cov}(Y_3) \n\\end{smallmatrix}\\right]\n\\end{equation*}\npreserve the set of covariance matrices compatible with the triangle scenario.\nHere, $\\alpha\\in[0,2\\pi)$ is a phase factor.\nIn particular, $\\Phi_\\alpha$ preserves positivity when acting on covariance matrices arising in this context.\nThis is a strictly stronger result than the one we have obtained above:\nThe map $\\Phi$ treated in the proposition is just the equal-weight convex combination of $\\Phi_\\pi$ and $\\Phi_0$:\n\\begin{equation*}\n \\frac12\n \\left[\\begin{smallmatrix}\n \\mathrm{Cov}(Y_1) & \\mathrm{Cov}(Y_1,Y_2) & \\mathrm{Cov}(Y_1,Y_3)\\\\\n \\mathrm{Cov}(Y_2,Y_1) & 
\\mathrm{Cov}(Y_2) & \\mathrm{Cov}(Y_2,Y_3)\\\\\n \\mathrm{Cov}(Y_3,Y_1) & \\mathrm{Cov}(Y_3,Y_2) &\\mathrm{Cov}(Y_3) \n \\end{smallmatrix}\\right]\n +\n \\frac12\n \\left[\\begin{smallmatrix}\n \\mathrm{Cov}(Y_1) & -\\mathrm{Cov}(Y_1,Y_2) & \\mathrm{Cov}(Y_1,Y_3)\\\\\n -\\mathrm{Cov}(Y_2,Y_1) & \\mathrm{Cov}(Y_2) & \\mathrm{Cov}(Y_2,Y_3)\\\\\n \\mathrm{Cov}(Y_3,Y_1) & \\mathrm{Cov}(Y_3,Y_2) &\\mathrm{Cov}(Y_3) \n \\end{smallmatrix}\\right]\n =\n \\left[\\begin{smallmatrix}\n \\mathrm{Cov}(Y_1) & 0 & \\mathrm{Cov}(Y_1,Y_3)\\\\\n 0 & \\mathrm{Cov}(Y_2) & \\mathrm{Cov}(Y_2,Y_3)\\\\\n \\mathrm{Cov}(Y_3,Y_1) & \\mathrm{Cov}(Y_3,Y_2) &\\mathrm{Cov}(Y_3) \n \\end{smallmatrix}\\right].\n\\end{equation*}\n\nIt may potentially be fruitful to consider a general theory of maps that preserve the convex cone of covariances compatible with a given causal structure.\n\n\n\\section{\\label{SecUniversalFeatureMaps}Universal feature maps for finite categorical variables} As the reader may have realized, the choice of feature maps $Y_m$ may affect the outcome of the semidefinite test. In other words, even if we find a particular setup that is compatible with the given bipartite DAG, it may be the case that another assignment of the vectors $Y_{m}$ could yield a violation; thus suggesting that ideally we should test an infinite number of choices. However, in the case of observable variables with only a finite number of outcomes, we shall here see that one can make a single test, based on a sufficiently `powerful' choice of feature maps. Suppose that the variables $O_m$ can only take a finite number of outcomes $o_1^m,\\ldots, o^m_{d_m}$. An arbitrary assignment of a feature map would correspond to a collection of vectors $y^m_1,\\ldots, y^m_{d_m}\\in \\mathcal{V}_m$ for some vector space $\\mathcal{V}_m$. Now suppose that we make the additional restriction that $y^m_1,\\ldots, y^m_{d_m}$ are linearly independent, and that $\\dim(\\mathcal{V}_m) = d_m$.
\nSuppose that we have some other arbitrary assignment of feature map $\\tilde{Y}_m$ given by a collection of vectors $\\tilde{y}^m_1,\\ldots, \\tilde{y}^m_{d_m}\\in \\tilde{\\mathcal{V}}_m$ for some vector space $\\tilde{\\mathcal{V}}_m$ (without any requirement of linear independence). \nOne can realize that it is always possible to find a linear map $\\phi_m:\\mathcal{V}_m\\rightarrow \\tilde{\\mathcal{V}}_m$ such that $\\phi_m y^m_j = \\tilde{y}^m_j$, and thus $\\phi_m Y_m = \\tilde{Y}_m$. \nTo see this, one can note that since $y^m_1,\\ldots, y^m_{d_m}$ is a linearly independent set in a $d_m$-dimensional space, it follows that the Gram matrix $G = [G_{j,j'}]_{j,j' = 1}^{d_m}$ with $G_{j,j'} := (y^m_j,y^m_{j'})$ is invertible (and positive definite). One can confirm that $\\phi_m$ defined by $\\phi_m(v) := \\sum_{jj'}\\tilde{y}^m_j[G^{-1}]_{jj'}(y^m_{j'},v)$ satisfies $\\phi_m y^m_j = \\tilde{y}^m_j$. In other words, a feature map with linearly independent components is `universal' in the sense that we can generate all other feature maps on all other vector spaces, and it is moreover sufficient to do this via linear transformations.\n\nFor a collection of universal feature maps $Y_1,\\ldots,Y_M$ assigned to $O_1,\\ldots, O_M$, we can reach all other feature maps $\\tilde{Y}_1,\\ldots, \\tilde{Y}_M$, by linear operations $\\tilde{Y}_m = \\phi_m Y_m$. Moreover, the covariance matrix $\\mathrm{Cov}(Y)$ for $Y = \\sum_{m=1}^{M}Y_m$ and the covariance matrix $\\mathrm{Cov}(\\tilde{Y})$ for $\\tilde{Y} = \\sum_{m=1}^{M}\\tilde{Y}_m$ are related by $\\mathrm{Cov}(\\tilde{Y}) = \\phi\\mathrm{Cov}(Y)\\phi^{\\dagger}$ for $\\phi := \\sum_{m=1}^{M}\\phi_m$. One can realize that if $\\mathrm{Cov}(Y)$ satisfies the decomposition in Proposition \\ref{PropDecomposition} for a given bipartite DAG, then $\\mathrm{Cov}(\\tilde{Y})$ also satisfies the decomposition. 
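The Gram-matrix construction of $\\phi_m$ above can be confirmed numerically; the following minimal sketch (pure Python, with hypothetical two-outcome feature vectors) builds $\\phi_m$ from linearly independent vectors $y^m_j$ and checks that it maps each $y^m_j$ to the corresponding target $\\tilde{y}^m_j$.

```python
# Numerical confirmation of phi_m(v) = sum_{j,j'} ytil_j [G^{-1}]_{j,j'} (y_{j'}, v)
# for a hypothetical two-outcome variable: y_j linearly independent in R^2,
# target vectors ytil_j arbitrary.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

y    = [[1.0, 0.0], [1.0, 1.0]]        # linearly independent feature vectors y^m_j
ytil = [[2.0, 3.0], [0.0, 1.0]]        # arbitrary target feature vectors

# Gram matrix G_{j,j'} = (y_j, y_j') and its inverse (2 x 2 case).
G  = [[dot(a, b) for b in y] for a in y]
dG = G[0][0] * G[1][1] - G[0][1] * G[1][0]     # nonzero by linear independence
Ginv = [[G[1][1] / dG, -G[0][1] / dG],
        [-G[1][0] / dG, G[0][0] / dG]]

def phi(v):
    """Apply the inverse-Gram map: v is projected onto the y_j basis and
    re-expanded in the target vectors ytil_j."""
    out = [0.0, 0.0]
    for j in range(2):
        coeff = sum(Ginv[j][jp] * dot(y[jp], v) for jp in range(2))
        out = [o + coeff * t for o, t in zip(out, ytil[j])]
    return out

mapped = [phi(v) for v in y]
print(mapped)  # phi maps each y_j onto the corresponding target vector
```

The same inverse-Gram pattern appears again in the map $\\psi_m$ of section \\ref{SecMonotonicity}.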
We can conclude that it is sufficient to apply the semidefinite test for a single collection of feature maps, where each of these has linearly independent components. (A convenient choice would be mappings to orthonormal bases.)\n \nIt is conceivable that a similar construction would hold for variables with a countably infinite number of outcomes, and it is an interesting question if one in some sense could make `universal' assignments of feature maps also in the case of a continuum. However, we shall not consider these issues in this investigation, but leave them as open questions. \n\n\\section{\\label{SecMonotonicity} Monotonicity under local operations}\n\nSuppose that we would process each observable variable in a collection $O_1,\\ldots, O_M$ `locally'. In other words, the output $\\tilde{O}_m$ is a (possibly random) function only of $O_m$. If we restrict ourselves to discrete random variables, then this type of mapping from an input distribution $P^{M}$ of the $O_1,\\ldots, O_M$, to the output distribution $\\tilde{P}^{M}$ of $\\tilde{O}_1,\\ldots, \\tilde{O}_M$ can be written\n\\begin{equation}\n\\label{fbdjkfbdk}\n\\tilde{P}^{M}(\\tilde{x}_1,\\ldots,\\tilde{x}_M) := \\sum_{x_1,\\ldots,x_M}P^1(\\tilde{x}_1|x_1)\\cdots P^M(\\tilde{x}_M|x_M)P^{M}(x_1,\\ldots,x_M),\n\\end{equation}\nwhere all $P^{m}(\\tilde{x}_m|x_m)$ are conditional distributions.\n From this construction it is clear that if a distribution $P^{M}$ is compatible with the given bipartite DAG, then the resulting distribution $\\tilde{P}^{M}$ on $\\tilde{O}_1,\\ldots,\\tilde{O}_M$ will also be compatible with the very same DAG. In other words, compatibility with a given bipartite DAG is in this sense monotone under local operations.\n\nThere is a priori no reason to expect that relaxations of the compatibility problem would satisfy this monotonicity. 
However, here we show that this property is respected by the semidefinite test, if the latter is based on universal feature maps (in the sense of the previous section). The fact that universality is needed can be seen from the following trivial special case. We assign feature maps $Y_m$ to $O_m$, and $\\tilde{Y}_m$ to $\\tilde{O}_m$. In principle we can for each $m$ choose all components of $Y_m$ to be identical, thus resulting in a zero covariance matrix that trivially satisfies all decompositions, while $\\tilde{Y}_m$ may still result in a violation. By assuming that all the feature maps $Y_m$ are universal, we shall in the following see that monotonicity is guaranteed.\n\nLet us first focus on the transformation of a single observable variable $O_m$ to $\\tilde{O}_m$, and let us assume that $Y_m$ has the linearly independent components $y_1^m,\\ldots, y^m_K$, with Gram matrix $G = [G_{x,x'}]_{x,x' =1}^{K}$ with $G_{x,x'} = (y_{x}^m,y_{x'}^m)$, in a $K$-dimensional vector space $\\mathcal{V}_m$. $G$ is invertible since $y_1^m,\\ldots, y^m_K$ are linearly independent. Let $\\tilde{y}^m_1,\\ldots,\\tilde{y}^m_L$ be the components of $\\tilde{Y}_m$ in $\\tilde{\\mathcal{V}}_m$. (If $L = K$ we can of course choose $\\tilde{y}^m := y^m$ as a special case.)\nDefine $\\psi_m(v) := \\sum_{\\tilde{x},x',x''}\\tilde{y}_{\\tilde{x}}P^m(\\tilde{x}|x')[G^{-1}]_{x',x''}(y_{x''},v)$. (Here and in the following we omit the superscript `$m$' on the vectors $y$ for notational convenience.)\nOne can confirm that $E(\\tilde{Y}_m) = \\psi_m\\big(E(Y_m)\\big)$, and thus with $\\psi =\\sum_{m}\\psi_m$ we get $E(\\tilde{Y}) = \\psi\\big(E(Y)\\big)$. \n\n\n\n\nIt may be very tempting to assume that $\\mathrm{Cov}(\\tilde{Y})$ would be equal to $\\psi\\mathrm{Cov}(Y)\\psi^{\\dagger}$. However, this is generally \\emph{not} the case. The off-diagonal blocks for $m\\neq m'$ satisfy $\\mathrm{Cov}(\\tilde{Y}_m,\\tilde{Y}_{m'}) = \\psi_m\\mathrm{Cov}(Y_m,Y_{m'})\\psi_{m'}^{\\dagger}$. 
\n\n\n\n\nHowever, for the diagonal blocks it is the case that \n\\begin{equation}\n\\label{ndklvald}\n\\begin{split}\n\\mathrm{Cov}(\\tilde{Y}_m) = & \\psi_m\\mathrm{Cov}(Y_m)\\psi^{\\dagger}_m + W_m,\\\\ \nW_m := & \n\\sum_{\\tilde{x},x}\\tilde{y}_{\\tilde{x}}{\\tilde{y}_{\\tilde{x}}}^{\\dagger}P^m(\\tilde{x}|x)P(O_m = x)\n-\\sum_{\\tilde{x},\\tilde{x}',x}\\tilde{y}_{\\tilde{x}}\\tilde{y}_{\\tilde{x}'}^{\\dagger}P^m(\\tilde{x}|x)P^m(\\tilde{x}'|x)P(O_m = x).\n\\end{split}\n\\end{equation}\nOne can note that each `correction term' $W_m$ is supported only on the subspace $\\tilde{\\mathcal{V}}_m$, and one can moreover show that $W_m\\geq 0$. To see the latter, let $c\\in\\tilde{\\mathcal{V}}_m$, and define $z_{\\tilde{x}} = (c,\\tilde{y}_{\\tilde{x}})$. Then\n\\begin{equation*}\n(c,W_mc) = \\sum_{x}P(O_m = x)\\Big( \\sum_{\\tilde{x}}|z_{\\tilde{x}}|^2P^m(\\tilde{x}|x) -\\Big|\\sum_{\\tilde{x}} z_{\\tilde{x}} P^m(\\tilde{x}|x)\\Big|^2\\Big) = \\sum_{x,\\tilde{x}}P(O_m = x)P^m(\\tilde{x}|x)\\Big| z_{\\tilde{x}} - \\sum_{\\tilde{x}'}P^m(\\tilde{x}'|x)z_{\\tilde{x}'} \\Big|^2\\geq 0.\n\\end{equation*}\n\n\n\n\n\nIf $\\mathrm{Cov}(Y)$ satisfies the decomposition (\\ref{poeto}) in Proposition \\ref{PropDecomposition} for some bipartite DAG, then one can confirm that $\\psi\\mathrm{Cov}(Y)\\psi^{\\dagger}$ also satisfies the corresponding decomposition with respect to the subspaces $\\{\\tilde{\\mathcal{V}}_m\\}_m$. Moreover, since the correction terms $W_m$ are positive semidefinite and block-diagonal with respect to these subspaces, it follows that $\\mathrm{Cov}(\\tilde{Y}) = \\psi\\mathrm{Cov}(Y)\\psi^{\\dagger} + \\sum_{m}W_m$ also satisfies the decomposition.\n We can thus conclude that if the initial feature maps $Y_1,\\ldots, Y_M$ are universal, then the test is monotone under local operations. 
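The positive semidefiniteness of the correction term $W_m$ in (\ref{ndklvald}) can also be checked numerically. A minimal sketch of our own (the alphabet sizes, feature dimension, and random draws are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
K, L, dim = 3, 4, 5   # input alphabet, output alphabet, target feature dimension

p_in = rng.random(K); p_in /= p_in.sum()      # input distribution P(O_m = x)
P = rng.random((L, K)); P /= P.sum(axis=0)    # channel P(xtilde|x); columns sum to 1
Yt = rng.normal(size=(dim, L))                # target feature vectors (columns)

# W_m: channel-averaged second moment minus the outer product of the
# channel-averaged mean, weighted by P(O_m = x) -- a sum of conditional
# covariance matrices.
W = np.zeros((dim, dim))
for x in range(K):
    second = sum(P[t, x] * np.outer(Yt[:, t], Yt[:, t]) for t in range(L))
    mean = Yt @ P[:, x]                       # sum_xtilde ytilde_xtilde P(xtilde|x)
    W += p_in[x] * (second - np.outer(mean, mean))

# The correction term is positive semidefinite (up to rounding error).
assert np.linalg.eigvalsh(W).min() >= -1e-9
```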
\n\n\n\n\nAs a final remark one may note that in the special case that all $P^{m}(\\tilde{x}|x)$ correspond to deterministic mappings, i.e., when the output $\\tilde{x}$ is a (deterministic) function of the input $x$, then $P^{m}(\\tilde{x}|x)P^{m}(\\tilde{x}'|x) = \\delta_{\\tilde{x},\\tilde{x}'}P^{m}(\\tilde{x}|x)$, and (\\ref{ndklvald}) results in $W_m = 0$, which yields $\\mathrm{Cov}(\\tilde{Y}) = \\psi\\mathrm{Cov}(Y)\\psi^{\\dagger}$. Linear transformations $\\phi^m:\\mathcal{V}_m\\rightarrow \\tilde{\\mathcal{V}}_m$ all result in mappings $\\tilde{Y}_m = \\phi^m(Y_m)$ that belong to this deterministic special case (presuming that the maps $\\phi^m$ themselves are not random variables) where we let $P^{m}(\\tilde{x}|x') = \\delta_{\\tilde{x},x'}$ and $\\tilde{y}^m_{\\tilde{x}} = \\phi^{m}(y^m_{\\tilde{x}})$, thus leading to $\\mathrm{Cov}(\\phi Y) = \\phi\\mathrm{Cov}(Y)\\phi^{\\dagger}$ (cf.~the isomorphisms in (\\ref{nlkalkn}), or the maps $\\phi$ used in section \\ref{SecUniversalFeatureMaps}).\n\n\n\n\n\\section{\\label{SecMonotoneFamily} A monotone family of distributions}\nHere we shall consider a specific family of multi-partite distributions that is monotone in the sense of the previous section, for which the analysis of the semidefinite decomposition simplifies. We shall in particular consider the case of the triangular scenario in figure \\ref{FigTriangle}, which turns out to be convenient for the comparison with the entropic tests, which we consider in section \\ref{SecComparison}.\n \n\\subsection{\\label{SecDefiningFamily}Defining the family}\nSuppose that we have a collection of variables, each of which has $D\\geq 2$ possible outcomes.\nIn equation (\\ref{fbdjkfbdk}) \nwe described local operations transforming an initial distribution $P^{M}$. 
For the local operations we do in this case choose\n\\begin{equation}\n\\label{localtransf}\nP_p(\\tilde{x}|x) := (1-p)\\delta_{\\tilde{x},x} + p\\frac{1}{D}.\n\\end{equation}\nHence, on each variable we (independently) apply the same type of process, where with probability $p$ we replace the input with a uniformly distributed output, and with probability $1-p$ leave the input intact. Here we choose the input distribution to be $ P^{M}(x_1,\\ldots,x_M) = \\delta_{x_1,\\ldots,x_M}\/D$, where the generalized Kronecker delta is such that $\\delta_{x_1,\\ldots,x_M} = 1$ if $x_1 = \\cdots =x_M$, while zero otherwise. Hence, $P^{M}(x_1,\\ldots,x_M) $ describes $M$ perfectly correlated variables.\nBy applying (\\ref{fbdjkfbdk}) with the local operations (\\ref{localtransf}) we thus obtain a new global distribution \n\\begin{equation}\n\\label{DefPMNp}\n\\tilde{P}^{M:D}_p(\\tilde{x}_1,\\ldots,\\tilde{x}_M) := \\frac{1}{D}\\sum_{x_1,\\ldots,x_M}P_p(\\tilde{x}_1|x_1)\\cdots P_p(\\tilde{x}_M|x_M)\\delta_{x_1,\\ldots,x_M},\n\\end{equation}\nwhere we have added the extra superscript $D$ to indicate the alphabet size of the local random variables.\n By construction, this distribution is permutation symmetric over all the variables.\nMoreover, one can confirm that all mono-, bi-, and higher-partite margins of $\\tilde{P}^{M:D}_p$ are independent of how many parties $M$ the total distribution $\\tilde{P}^{M:D}_p$ involves. For example, the bipartite margin of $\\tilde{P}^{M:D}_p$ is equal to $\\tilde{P}^{2:D}_p$. 
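This marginal stability is straightforward to verify numerically. The following minimal sketch (our own; the function names are illustrative) constructs the family (\ref{DefPMNp}) and checks that the bipartite margin of a tripartite member is indeed $\tilde{P}^{2:D}_p$:

```python
import numpy as np
from itertools import product

def P_local(p, D):
    # The local channel (localtransf): keep the input with probability 1-p,
    # replace it with a uniform output with probability p.
    return (1 - p) * np.eye(D) + p / D

def P_family(M, D, p):
    # The M-partite family member (DefPMNp), as an array of shape (D,)*M:
    # apply the channel independently to M perfectly correlated inputs.
    T = P_local(p, D)
    out = np.zeros((D,) * M)
    for xt in product(range(D), repeat=M):
        out[xt] = sum(np.prod([T[xt[m], x] for m in range(M)])
                      for x in range(D)) / D
    return out

p, D = 0.3, 2
P3 = P_family(3, D, p)
P2 = P_family(2, D, p)

assert np.isclose(P3.sum(), 1.0)
# Tracing out the third party reproduces the bipartite family member.
assert np.allclose(P3.sum(axis=2), P2)
```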
Generally, for $M' < M$ it is the case that \n\\begin{equation}\n\\label{MarginalSelfSimilarity}\n\\tilde{P}^{M':D}_p(\\tilde{x}_1,\\ldots,\\tilde{x}_{M'}) = \\sum_{\\tilde{x}_{M'+1},\\ldots,\\tilde{x}_{M}}\\tilde{P}^{M:D}_p(\\tilde{x}_1,\\ldots,\\tilde{x}_M).\n\\end{equation}\nHence, every margin of every family member is another family member.\n\n\nSince $\\tilde{P}_{1}^{M:D}$ is a product distribution over all the observable variables, it is compatible with every bipartite DAG, while $\\tilde{P}_{0}^{M:D}$ is perfectly correlated, and thus would only be compatible with bipartite DAGs where some latent variable has edges to all observable variables. \nOne can note that the local operations in (\\ref{localtransf}) are such that if $1\\geq p'\\geq p\\geq 0$, then there exists a $1\\geq q\\geq 0$ such that\n\\begin{equation}\n\\label{semigroup}\nP_{p'}(\\tilde{x}|x) = \\sum_{x'}P_{q}(\\tilde{x}|x')P_{p}(x'|x).\n\\end{equation}\n(Any $1\\geq q\\geq 0$ is a valid choice if $p=1$, while $q = (p'-p)\/(1-p)$ if $1> p\\geq 0$.)\nConsequently, if $p'\\geq p$, then $\\tilde{P}^{M:D}_{p'}$ can be generated from $\\tilde{P}^{M:D}_p$ by local operations. \nBy the reasoning in section \\ref{SecMonotonicity} it thus follows that there is some value $p^{*}$ where $\\tilde{P}_{p}^{M:D}$ switches from being incompatible to being compatible with the given bipartite DAG (and it cannot switch back again for higher values of $p$). From section \\ref{SecMonotonicity} we also know that the semidefinite test shares this monotone behavior if we choose universal feature maps, although the switch may occur at a lower value of $p$.\n\n\n\n\\subsection{\\label{SecWithinFamily}Within the family: the existence of a semidefinite decomposition is independent of the local alphabet size}\nHere we show that the semidefinite test takes a particularly simple form for the family $\\tilde{P}^{M:D}_{p}$. 
In essence we show that the test can be reduced to a test on an $M\\times M$ matrix that only depends on $p$, but not on the local alphabet size $D$. A similar result was obtained in (section 4.5 of) \\cite{VonPrillwitz15MasterThesis}, for the operator inequalities described in section \\ref{SecOperatorInequalities}, but for distributions of the type $v\\delta_{x_1,\\ldots,x_M}\/D-(1-v)\/D^2$, while we here consider the family $\\tilde{P}_{p}^{M:D}$ defined by (\\ref{DefPMNp}).\n\n\nSuppose that we have an $M$-partite distribution $\\tilde{P}^{M:D}_{p}$. We know from the previous section that this distribution is permutation symmetric, and in particular we know from (\\ref{MarginalSelfSimilarity}) that all bipartite marginal distributions are of the form $\\tilde{P}^{2:D}_{p}$, and all mono-partite marginals are of the form $\\tilde{P}^{1:D}_p$. One can moreover confirm that\n\\begin{equation}\n\\label{BinaryAndMono}\n\\tilde{P}^{2:D}_{p}(\\tilde{x},\\tilde{x}') = (1-p)^2\\frac{1}{D}\\delta_{\\tilde{x},\\tilde{x}'}+ p(2-p)\\frac{1}{D^2},\\quad \\tilde{P}^{1:D}_p(\\tilde{x}_1) = \\frac{1}{D}.\n\\end{equation}\n\n\nIn order to construct a covariance matrix, we here assume feature maps $Y_1,\\ldots, Y_M$ that have orthonormal components (i.e., feature map $Y_m$ maps the set of possible outcomes of the $m$th random variable to an orthonormal basis of $\\mathcal{V}_m$, where $\\dim(\\mathcal{V}_m) = D$). Hence, the total space $\\mathcal{V} = \\mathcal{V}_1\\oplus\\cdots\\oplus\\mathcal{V}_M$ is $DM$-dimensional, and we can write it as a tensor product $\\mathcal{V} = \\mathcal{V}^D\\otimes \\mathcal{V}^M$ of a $D$-dimensional space $\\mathcal{V}^D$ and an $M$-dimensional space $\\mathcal{V}^M$. By choosing an orthonormal basis $\\{e_m\\}_{m=1}^{M}$ of $\\mathcal{V}^M$, we can identify $\\mathcal{V}_m = \\mathcal{V}^D\\otimes{\\mathrm{Sp}}\\{e_m\\}$. 
In section \\ref{SecObsLat} we defined the projectors $P_m$ onto the subspaces $\\mathcal{V}_m$, and we can write these projectors as\n\\begin{equation}\n\\label{anvldfadfn}\nP_m = \\hat{1}_D\\otimes \\hat{e}_m,\n\\end{equation} \nwhere $\\hat{1}_D$ is the identity operator on $\\mathcal{V}^D$, and $\\hat{e}_m$ is the projector onto $e_m$.\n\n\nThe covariance matrix $\\mathrm{Cov}(Y)$ for the random variable $Y = Y_1 +\\cdots +Y_M$ is an $MD\\times MD$ matrix and takes a particularly simple form\n\\begin{equation}\n\\label{nadjkfba}\n\\mathrm{Cov}(Y) = \\left[\\begin{matrix}\n Q & (1-p)^2Q & \\cdots & (1-p)^2Q\\\\\n (1-p)^2Q & Q & \\ddots & \\vdots\\\\\n\\vdots & \\ddots & Q & (1-p)^2 Q \\\\\n(1-p)^2Q & \\cdots & (1-p)^2Q & Q\n\\end{matrix}\\right] = \\frac{1}{D}Q\\otimes C(p), \n\\end{equation}\nwhere we define the $M\\times M$ matrix\n\\begin{equation}\n\\label{vnoekfnbf}\nC(p) := \\left[\\begin{matrix} \n1 & (1-p)^2 & \\cdots & (1-p)^2 \\\\\n(1-p)^2 & 1 & \\ddots & \\vdots\\\\\n\\vdots & \\ddots & & (1-p)^2\\\\\n(1-p)^2 & \\cdots & (1-p)^2 & 1\n\\end{matrix}\\right]\n\\end{equation}\nand the $D\\times D$ matrix $Q$ with elements\n\\begin{equation}\nQ_{\\tilde{x},\\tilde{x}'} \n:= \\delta_{\\tilde{x},\\tilde{x}'} -\\frac{1}{D},\\quad \\tilde{x},\\tilde{x}' = 1,\\ldots, D.\n\\end{equation}\nNote that we can write $Q = \\hat{1}_D - cc^{\\dagger}$, where $c = (1,\\ldots,1)^{\\dagger}\/\\sqrt{D}\\in\\mathcal{V}^{D}$ is normalized. Hence, $Q$ is the projector onto the $(D-1)$-dimensional subspace of $\\mathcal{V}^D$ that is the orthogonal complement to the one-dimensional subspace spanned by $c$. From $Q$ being a projector, it also follows that $Q\\geq 0$.\n\n\n\n\nSuppose now that we have a particular bipartite DAG $B$ with observable variables $O_1,\\ldots,O_M$ and latent variables $L_1,\\ldots, L_N$. 
As we recall from section \\ref{SecDecBipartDAGs}, the semidefinite test is characterized via the projectors $P^{(n)} = \\sum_{m\\in \\mathrm{ch}(L_n)}P_m$ as\n\\begin{equation}\n\\label{djfvna}\n\\mathrm{Cov}(Y) = R + \\sum_{n=1}^NC_n,\\quad P^{(n)}C_{n}P^{(n)} = C_n,\\quad C_n\\geq 0, \\quad \\sum_{m=1}^{M}P_mRP_m = R,\\quad R\\geq 0.\n\\end{equation}\nIn the present case, we can write these projectors as\n\\begin{equation}\n\\label{aklndfkal}\n P^{(n)} = \\hat{1}_D\\otimes \\tilde{P}^{(n)},\\quad \\tilde{P}^{(n)} = \\sum_{m\\in \\mathrm{ch}(L_n)}\\tilde{P}_m,\n\\end{equation}\nwhere $\\tilde{P}_m := \\hat{e}_m$, cf.~(\\ref{anvldfadfn}).\n\nFor each fixed number of observable variables $M$, local alphabet size $D$, and given bipartite DAG $B$, we know that the family $\\tilde{P}^{M:D}_p$ is monotone with respect to $p$, in the sense that the covariance matrix $\\mathrm{Cov}(Y)$ satisfies the semidefinite decomposition for all $p$ beyond a certain threshold value, while it is violated for all values below. The following proposition shows that this threshold is independent of $D$, and that it can be determined via a simplified decomposition of the matrix $C(p)$.\n\n\n\n\\begin{Proposition}\n\\label{PropCovDec}\nLet $\\mathrm{Cov}(Y)$ be the covariance matrix, for feature maps with orthonormal components, corresponding to the distribution $\\tilde{P}^{M:D}_p$, as defined in (\\ref{DefPMNp}), for $M$ observable variables, and local alphabet size $D\\geq 2$. \nFor each value $1\\geq p \\geq 0$ it is the case that $\\mathrm{Cov}(Y)$ satisfies the semidefinite decomposition (\\ref{djfvna}) with respect to a given bipartite DAG $B$, if and only if $C(p)$, defined in (\\ref{vnoekfnbf}), satisfies the decomposition \n\\begin{equation}\n\\label{mjmh}\nC(p) = \\tilde{R} + \\sum_{n=1}^{N}\\tilde{C}_n,\\quad \\tilde{P}^{(n)}\\tilde{C}_n\\tilde{P}^{(n)} = \\tilde{C}_n,\\quad \\tilde{C}_n\\geq 0,\\quad \\sum_{m=1}^{M}\\tilde{P}_m\\tilde{R}\\tilde{P}_m = \\tilde{R}, \\quad \\tilde{R}\\geq 0. 
\n\\end{equation}\nMoreover, there exists a number $1\\geq \\overline{p}(B) \\geq 0$ that does not depend on $D$, such that $\\mathrm{Cov}(Y)$ satisfies (\\ref{djfvna}) and $C(p)$ satisfies (\\ref{mjmh}) for all $p >\\overline{p}(B)$, while $\\mathrm{Cov}(Y)$ and $C(p)$ do not satisfy the decompositions for $p <\\overline{p}(B)$.\n\\end{Proposition}\n\n\n\\begin{proof}\nFirst we shall show that if $C(p)$ satisfies the decomposition, then $\\mathrm{Cov}(Y)$ also satisfies the decomposition.\nLet $p$ be any $1\\geq p\\geq 0$ such that there exists a semidefinite decomposition of $C(p)$ as in (\\ref{mjmh}).\nEquation (\\ref{mjmh}) provides $\\tilde{R}$ and $\\tilde{C}_n$. Define $R := Q\\otimes\\tilde{R}\/D$ and $C_n := Q\\otimes \\tilde{C}_n\/D$. Thus defined, it follows that\n\\begin{equation*}\nR + \\sum_{n}C_n = \\frac{1}{D}Q\\otimes (\\tilde{R} + \\sum_{n}\\tilde{C}_n) = \\frac{1}{D}Q\\otimes C(p) = \\mathrm{Cov}(Y). \n\\end{equation*}\nMoreover, by the conditions in (\\ref{mjmh}) and the observations in (\\ref{aklndfkal}), it follows that \n\\begin{equation*}\nP^{(n)}C_nP^{(n)} = [\\hat{1}_D\\otimes \\tilde{P}^{(n)}][\\frac{1}{D}Q\\otimes \\tilde{C}_n] [\\hat{1}_D\\otimes \\tilde{P}^{(n)}] = C_n,\n\\end{equation*}\n and $C_n = Q\\otimes \\tilde{C}_n\/D\\geq 0$.\n\n Furthermore, by the conditions in (\\ref{mjmh}) and (\\ref{anvldfadfn}), it follows that \n\\begin{equation*}\n\\sum_m P_mR P_m = \\sum_{m}[\\hat{1}_D\\otimes \\tilde{P}_m][\\frac{1}{D}Q\\otimes\\tilde{R}][\\hat{1}_D\\otimes \\tilde{P}_m] = R,\n\\end{equation*}\n and $R = Q\\otimes\\tilde{R}\/D\\geq 0$. Hence, this procedure produces a valid semidefinite decomposition of $\\mathrm{Cov}(Y)$. \nThus, for every $p$ for which $C(p)$ has a valid decomposition, $\\mathrm{Cov}(Y)$ also has a valid decomposition. \n\nNext we prove the opposite implication, namely that the existence of a decomposition of $\\mathrm{Cov}(Y)$ implies a decomposition of $C(p)$. 
Let us thus assume that there is a $1\\geq p\\geq 0$ for which there exists a decomposition of $\\mathrm{Cov}(Y)$ as in (\\ref{djfvna}). Equation (\\ref{djfvna}) provides $R$ and $C_n$. Let $v\\in\\mathcal{V}^D$ be normalized, and such that $Qv = v$. Such a $v$ always exists, since $Q$ is a projector onto a $(D-1)$-dimensional subspace of $\\mathcal{V}^D$ and $D\\geq 2$. Define\n$\\tilde{R}:=Dv^{\\dagger}Rv$ and $\\tilde{C}_n := Dv^{\\dagger}C_nv$ (where one should keep in mind that e.g.~$v^{\\dagger}Rv$ is an operator on $\\mathcal{V}^M$, since $v\\in\\mathcal{V}^D$). Hence, by (\\ref{djfvna}) and (\\ref{nadjkfba})\n\\begin{equation*}\n\\begin{split}\n\\tilde{R} +\\sum_n\\tilde{C}_n = & Dv^{\\dagger}(R +\\sum_nC_n)v = Dv^{\\dagger}\\mathrm{Cov}(Y)v= Dv^{\\dagger}[\\frac{1}{D}Q\\otimes C(p)]v = v^{\\dagger}Qv C(p) = C(p).\n\\end{split}\n\\end{equation*}\nMoreover, by the conditions in (\\ref{djfvna}) and the observations in (\\ref{aklndfkal}), it follows that \n\\begin{equation*} \n\\begin{split}\n\\tilde{P}^{(n)}\\tilde{C}_n\\tilde{P}^{(n)} = & \\tilde{P}^{(n)} Dv^{\\dagger}C_nv \\tilde{P}^{(n)} = Dv^{\\dagger}[\\hat{1}_D\\otimes\\tilde{P}^{(n)}] C_n[\\hat{1}_D\\otimes\\tilde{P}^{(n)}]v = Dv^{\\dagger}P^{(n)} C_n P^{(n)}v = Dv^{\\dagger} C_nv = \\tilde{C}_n,\n\\end{split}\n\\end{equation*}\nand $\\tilde{C}_n = Dv^{\\dagger}C_nv\\geq 0$.\nFurthermore, (\\ref{djfvna}) and (\\ref{anvldfadfn}) yield\n\\begin{equation*} \n\\begin{split}\n\\sum_m\\tilde{P}_m\\tilde{R}\\tilde{P}_m = & \\sum_m \\tilde{P}_m Dv^{\\dagger}Rv \\tilde{P}_m = \\sum_m Dv^{\\dagger}[\\hat{1}_D\\otimes\\tilde{P}_{m}] R[\\hat{1}_D\\otimes\\tilde{P}_m]v = Dv^{\\dagger}(\\sum_m P_{m}R P_m)v = Dv^{\\dagger}Rv = \\tilde{R},\n\\end{split}\n\\end{equation*}\nand $\\tilde{R} = Dv^{\\dagger}Rv\\geq 0$. 
Hence, we can conclude that the decomposition of $\\mathrm{Cov}(Y)$ induces a valid decomposition of $C(p)$ as in (\\ref{mjmh}).\n\nWe know from section \\ref{SecDefiningFamily} that the family $\\tilde{P}^{M:D}_p$ is monotone, in the sense that $\\mathrm{Cov}(Y)$ (since it is based on orthonormal feature maps) satisfies the semidefinite decomposition for all $p$ beyond a certain threshold value, which we can call $\\overline{p}(B)$, while violating the decomposition for all $p$ below $\\overline{p}(B)$. From the above equivalence we conclude that the same transition is valid for $C(p)$ with respect to the decomposition in (\\ref{mjmh}).\n\\end{proof}\n\n\n\n\n\\subsection{\\label{SecTripartiteMonotone} Compatibility with the triangular DAG}\nHere we consider the tripartite case and determine the value of $p$ where $\\tilde{P}^{3:D}_p$ switches from not satisfying the semidefinite decomposition, to satisfying it, with respect to the triangular scenario in figure \\ref{FigTriangle}.\nThe family of distributions $\\tilde{P}^{M:D}_p$, defined in (\\ref{DefPMNp}), does in the tripartite case take the form\n\\begin{equation}\n\\label{ndvklanlkv}\n\\begin{split}\n\\tilde{P}^{3:D}_p(\\tilde{x}_1,\\tilde{x}_2,\\tilde{x}_3) \n= & (1-p)^3\\frac{1}{D}\\delta_{\\tilde{x}_1,\\tilde{x}_2,\\tilde{x}_3}\\\\\n& + p(1-p)^2\\frac{1}{D^2}[\\delta_{\\tilde{x}_1,\\tilde{x}_2} +\\delta_{\\tilde{x}_1,\\tilde{x}_3} + \\delta_{\\tilde{x}_2,\\tilde{x}_3}]\\\\\n& + p^2(3-2p)\\frac{1}{D^3},\n\\end{split}\n\\end{equation}\nand the matrix $C(p)$ and the projectors $ \\tilde{P}^{(1)}$, $ \\tilde{P}^{(2)}$, and $\\tilde{P}^{(3)}$ become\n\\begin{equation*}\nC(p) = \\left[\\begin{matrix}\n1 & (1-p)^2 & (1-p)^2 \\\\\n(1-p)^2 & 1 & (1-p)^2 \\\\\n(1-p)^2 & (1-p)^2 & 1\n\\end{matrix}\\right],\n\\quad \\tilde{P}^{(1)} = \\left[\\begin{matrix} 0 & 0 & 0\\\\\n0 & 1 & 0 \\\\\n0 & 0 & 1 \n\\end{matrix}\\right], \\quad \\tilde{P}^{(2)} = \\left[\\begin{matrix} 1 & 0 & 0\\\\\n0 & 0 & 0 \\\\\n0 & 0 & 1 
\n\\end{matrix}\\right],\\quad \\tilde{P}^{(3)} = \\left[\\begin{matrix} 1 & 0 & 0\\\\\n0 & 1 & 0 \\\\\n0 & 0 & 0\n\\end{matrix}\\right].\n\\end{equation*}\n\n\n\n\n\nAs a corollary of Proposition \\ref{PropCovDec} we here determine the `transition point' $\\overline{p}(B)$ for the family $\\tilde{P}^{M:D}_p$ in the triangular scenario.\n\n\\begin{Lemma}\n\\label{bvdsaajkvb}\nFor $p\\in \\mathbb{R}$ it is the case that \n$\\left[\\begin{smallmatrix}\n\\frac{1}{2} & (1-p)^2\\\\\n(1-p)^2 & \\frac{1}{2}\n\\end{smallmatrix}\\right]\\geq 0$ $\\Leftrightarrow$ $1 - \\frac{1}{\\sqrt{2}}\\leq p \\leq 1+\\frac{1}{\\sqrt{2}}$.\n\\end{Lemma}\n\n\n\\begin{Lemma}\n\\label{nklknbj}\nLet $a,b,r\\in \\mathbb{R}$; then \n$\\left[\\begin{smallmatrix}\na & r \\\\\nr & b\n\\end{smallmatrix}\\right]\\geq 0$ $\\Leftrightarrow$ $\\left[\\begin{smallmatrix}\nb & r \\\\\nr & a\n\\end{smallmatrix}\\right]\\geq 0$.\n\\end{Lemma}\n\n\n\\begin{Corollary}\n\\label{PropTransition}\nFor the family $\\tilde{P}^{3:D}_p$ in equation (\\ref{ndvklanlkv}), and for feature maps with orthonormal components, the covariance matrix $\\mathrm{Cov}(Y)$ has a semidefinite decomposition with respect to the triangular bipartite DAG $B$ in figure \\ref{FigTriangle}, if and only if $1-\\frac{1}{\\sqrt{2}}\\leq p\\leq 1$. Hence, $\\overline{p}(B) = 1-1\/\\sqrt{2}$.\n\\end{Corollary}\nOne may note that $\\tilde{P}^{3:D}_p$ has a semidefinite decomposition also in the case $p = 1-1\/\\sqrt{2}$, i.e., at the transition point. 
Proposition \\ref{PropCovDec} does strictly speaking leave open the nature of the transition point \\emph{per se}.\n\\begin{proof}\nBy Proposition \\ref{PropCovDec} we know that it is sufficient to determine the $p$ for which $C(p)$ decomposes as in (\\ref{mjmh}).\nDue to Lemma \\ref{bvdsaajkvb} it follows that \n\\begin{equation*}\n\\tilde{R} = 0,\\,\\, \\tilde{C}_1 = \\left[\\begin{smallmatrix}\n0 & 0 &0\\\\\n0 & \\frac{1}{2} & (1-p)^2\\\\\n0 & (1-p)^2 & \\frac{1}{2}\n\\end{smallmatrix}\\right],\\,\\, \\tilde{C}_2 = \\left[\\begin{smallmatrix}\n \\frac{1}{2} & 0 & (1-p)^2\\\\\n0 & 0 & 0\\\\\n (1-p)^2 & 0 & \\frac{1}{2}\n\\end{smallmatrix}\\right],\\,\\, \\tilde{C}_3 = \\left[\\begin{smallmatrix}\n \\frac{1}{2} & (1-p)^2 & 0\\\\\n (1-p)^2 & \\frac{1}{2} & 0\\\\\n0 & 0 & 0\n\\end{smallmatrix}\\right]\n\\end{equation*}\nsatisfy the decomposition (\\ref{mjmh}) for all $1 - 1\/\\sqrt{2}\\leq p \\leq 1$.\nHowever, this does not exclude the possibility that there exists some other decomposition that yields a smaller $p$.\n\n\nSuppose that $0 \\leq p'< 1-1\/\\sqrt{2}$.\nBy the structure of the triangular DAG, it follows that the most general decomposition of the form (\\ref{mjmh}) possible (incorporating the diagonal matrix $\\tilde{R}$ into $\\tilde{C}_1$, $\\tilde{C}_2$, and $\\tilde{C}_3$) can be written $C(p) = \\tilde{C}_1 +\\tilde{C}_2 +\\tilde{C}_3$, where\n\\begin{equation*}\n\\tilde{C}_1 = \\left[\\begin{smallmatrix}\n0 & 0 &0\\\\\n0 & b_2 & (1-p')^2\\\\\n0 & (1-p')^2 & c_1\n\\end{smallmatrix}\\right],\\,\\, \\tilde{C}_2 = \\left[\\begin{smallmatrix}\n a_1 & 0 & (1-p')^2\\\\\n0 & 0 & 0\\\\\n (1-p')^2 & 0 & c_2\n\\end{smallmatrix}\\right],\\,\\, \\tilde{C}_3 = \\left[\\begin{smallmatrix}\n a_2 & (1-p')^2 & 0\\\\\n (1-p')^2 & b_1 & 0\\\\\n0 & 0 & 0\n\\end{smallmatrix}\\right],\n\\end{equation*}\nand where $a_1,a_2,b_1,b_2,c_1,c_2\\geq 0$ and $a_1 +a_2 = 1$, $b_1 +b_2 = 1$, $c_1 + c_2 = 1$. 
\nBy the assumed semidefiniteness of $\\tilde{C}_1$, $\\tilde{C}_2$, and $\\tilde{C}_3$, it follows that \n\\begin{equation}\n\\label{cghcfgx}\n\\begin{split}\n M_1:= \\left[\\begin{smallmatrix}\na_1 & (1-p')^2 \\\\\n(1-p')^2 & c_2 \n\\end{smallmatrix}\\right] \\geq 0,\\quad M_2:= \\left[\\begin{smallmatrix}\na_2 & (1-p')^2 \\\\\n (1-p')^2 & b_1\\\\\n\\end{smallmatrix}\\right]\\geq 0,\\quad M_3 :=\\left[\\begin{smallmatrix}\n b_2 & (1-p')^2 \\\\\n (1-p')^2 & c_1\\\\\n\\end{smallmatrix}\\right]\\geq 0.\n\\end{split}\n\\end{equation}\n By Lemma \\ref{nklknbj} it follows that (\\ref{cghcfgx}) implies\n\\begin{equation*}\n\\begin{split}\n M_4:= \\left[\\begin{smallmatrix}\nc_2 & (1-p')^2 \\\\\n(1-p')^2 & a_1 \n\\end{smallmatrix}\\right] \\geq 0,\\quad M_5:= \\left[\\begin{smallmatrix}\nb_1 & (1-p')^2 \\\\\n (1-p')^2 & a_2\\\\\n\\end{smallmatrix}\\right]\\geq 0,\\quad M_6 :=\\left[\\begin{smallmatrix}\n c_1 & (1-p')^2 \\\\\n (1-p')^2 & b_2\n\\end{smallmatrix}\\right]\\geq 0.\n\\end{split}\n\\end{equation*}\nSince all of these matrices are positive semidefinite, every convex combination of them is also positive semidefinite. \nThus one can confirm that\n\\begin{equation}\n\\begin{split}\n\\left[\\begin{matrix}\n\\frac{1}{2} & (1-p')^2 \\\\\n(1-p')^2 & \\frac{1}{2}\n\\end{matrix}\\right] = \\frac{1}{6}M_1 + \\frac{1}{6}M_2 + \\frac{1}{6}M_3 +\\frac{1}{6}M_4 + \\frac{1}{6}M_5 +\\frac{1}{6}M_6 \\geq 0.\n\\end{split}\n\\end{equation}\nHowever, the positive semidefiniteness of this matrix contradicts Lemma \\ref{bvdsaajkvb}, since by assumption $p'< 1-1\/\\sqrt{2}$. Hence, $C(p)$ can only have a decomposition as in (\\ref{mjmh}) if $1-1\/\\sqrt{2}\\leq p\\leq 1$. 
By Proposition \\ref{PropCovDec} it thus follows that $\\mathrm{Cov}(Y)$ satisfies the semidefinite decomposition as in (\\ref{djfvna}) if and only if $1-1\/\\sqrt{2}\\leq p\\leq 1$.\n\n\n\n\n\n\\end{proof}\n\n\n\n\n\n\\section{\\label{SecComparison} Comparison with entropic tests}\nOuter relaxations of the compatibility set corresponding to latent variable structures, based on information theoretic inequalities, have been considered previously \\cite{steudel2015information,Chaves2014,Chaves2014b,weilenmann2016non}. Here we make a numerical comparison of the performance of these entropic tests and the semidefinite test. A basic challenge is that we in practice do not know the true set of compatible distributions. However, since we are dealing with outer approximations, a reasonable approach is to compare how `strict' the tests are, i.e., if one test generally tends to reject more distributions than the other. \n\nGiven the rather radical difference in appearance and functional form between the semidefinite test and tests based on entropy inequalities (described in more detail in the next section) it is far from clear how these tests relate, or if there even is a clear-cut relation in the sense that one would be systematically stronger than the other. An indication can be gained from \\cite{VonPrillwitz15MasterThesis}, where it was found that tests based on operator inequalities, of the type described in section \\ref{SecOperatorInequalities}, appear to be stronger than the entropic ones for small alphabet sizes, but that there seems to be a switchover for larger alphabets (see section 4.5 of \\cite{VonPrillwitz15MasterThesis}). Here we confirm similar trends for the semidefinite test in comparison with the entropic test, where we focus on the `triangular' DAG described in figure \\ref{FigTriangle}. \nIn case of binary variables, we do in section \\ref{SecRandomIsing} make a comparison over an ensemble of randomly constructed distributions. 
However, our major testbed for these comparisons (in section \\ref{SecComparisonMonotone}) is the family of distributions $\\tilde{P}^{M:D}_p$ introduced in section \\ref{SecMonotoneFamily}.\n\n\\subsection{Entropy inequalities for the triangular DAG}\nWe focus on the triangular DAG in figure \\ref{FigTriangle}, since this has been a rather well investigated scenario with several known entropic inequalities associated with it. For the three observable variables $O_1,O_2,O_3$ we let $H(1) := H(O_1) := -\\sum_j P(O_1 = j)\\log_{2}P(O_1 = j)$ denote the Shannon entropy, and in a similar manner $H(12) := H(O_1,O_2)$, etc, where `$\\log_2$' denotes the base $2$ logarithm. The first inequality (\\ref{ineqE1}) for the triangular scenario was obtained in \\cite{Fritz2012} (see also \\cite{Chaves2014} and \\cite{weilenmann2016non})\n\\begin{equation}\n\\label{ineqE1} E_1 := - H(1)-H(2)-H(3)+H(13) +H(12) \\geq 0.\n\\end{equation}\nThe following two inequalities were derived in \\cite{Chaves2014}\n\\begin{equation}\n\\begin{split}\n\\label{ineqE2}\n E_2 := & -3H(1) -3H(2) -3H(3) + 2H(12) + 2H(13) + 3H(23) -H(123)\\geq 0,\n \\end{split}\n\\end{equation}\n\\begin{equation}\n\\begin{split}\n\\label{ineqE3} E_3 := & -5H(1) - 5H(2) -5H(3) + 4H(12) + 4H(13) + 4H(23) -2H(123)\\geq 0.\n \\end{split}\n\\end{equation}\nFinally, inequalities (\\ref{ineqE4}) to (\\ref{ineqE6}) were obtained in \\cite{weilenmann2016non} \n\\begin{equation}\n\\begin{split}\n \\label{ineqE4} E_4 := & -4H(1) -4H(2)-4H(3) + 3H(12) + 3H(13) + 4H(23) -2H(123)\\geq 0,\\\\\n \\end{split}\n\\end{equation}\n\\begin{equation}\n\\begin{split}\n\\label{ineqE5} E_5 := & -2H(1)-2H(2)-2H(3) + 3H(12) + 3H(13) + 3H(23) -4H(123)\\geq 0,\\\\\n \\end{split}\n\\end{equation}\n\\begin{equation}\n\\begin{split}\n\\label{ineqE6} E_6 := & -8H(1)-8H(2)-8H(3) +7H(12) +7H(13) + 7H(23)-5H(123) \\geq 0. 
\n \\end{split}\n\\end{equation}\nOne should observe that the expressions in (\\ref{ineqE1}), (\\ref{ineqE2}), and (\\ref{ineqE4}) are not symmetric under permutations of the $O_1,O_2,O_3$, and thus each of these generate two more inequalities. Whenever one of these inequalities is violated we can conclude that the observable distribution cannot originate from the bipartite DAG in figure \\ref{FigTriangle}.\n\nOne may note that all of these entropic inequalities, apart from (\\ref{ineqE1}), depend on the full tripartite distribution, while the semidefinite test only takes into account the mono- and bipartite marginals. One may thus intuitively suspect that the semidefinite test would be at a disadvantage compared to these tripartite entropic tests.\n\n\\subsection{\\label{SecRandomIsing}Rejection rates in random Ising models: The binary case}\nFor a numerical comparison between the entropic and the semidefinite test for the triangular scenario in figure \\ref{FigTriangle}, we assume binary variables $O_1,O_2,O_3\\in\\{-1,1\\}$, and distributions $P(\\overline{x}) := P(O_1 =x_1,O_2=x_2,O_3 = x_3)$, $\\overline{x} := (x_1,x_2,x_3)$ given by an Ising interaction model \\cite{GallavottiStatisticalMechanics,KollerProbabilisticGraphicalModels}\n\\begin{equation}\n\\label{IsingModel}\nP(\\overline{x}) = \\frac{e^{- \\overline{x}^{\\dagger}J\\overline{x}}}{Z},\n\\end{equation}\nwith $Z$ being the normalization constant, and where $J$ is a real $3\\times 3$ matrix. For each single instance of this model we draw the elements of $J$ independently from a Gaussian distribution with zero mean and variance $1$. \n\nFor the semidefinite test we choose (universal) feature maps that associate the outcomes of the random variables to elements of orthonormal bases, thus resulting in a $6\\times 6$ covariance matrix. 
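One draw of this numerical experiment can be sketched as follows (our own illustration; it evaluates only the first entropic inequality, not the semidefinite program):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)

def ising_instance():
    # One instance of the Ising model: P(x) proportional to exp(-x^T J x)
    # over x in {-1,1}^3, with the entries of J i.i.d. standard Gaussian.
    J = rng.normal(size=(3, 3))
    states = np.array(list(product([-1, 1], repeat=3)))
    w = np.exp(-np.einsum('si,ij,sj->s', states, J, states))
    return (w / w.sum()).reshape(2, 2, 2)   # indices (x1, x2, x3)

def H(P):
    # Shannon entropy in bits of a probability array.
    P = P[P > 0]
    return -np.sum(P * np.log2(P))

P = ising_instance()
H1, H2, H3 = H(P.sum(axis=(1, 2))), H(P.sum(axis=(0, 2))), H(P.sum(axis=(0, 1)))
H12, H13 = H(P.sum(axis=2)), H(P.sum(axis=1))

# The first entropic test: E_1 < 0 rejects the triangular DAG.
E1 = -H1 - H2 - H3 + H13 + H12
print('rejected by E_1:', E1 < 0)
```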
The semidefinite test was implemented via a semidefinite program that minimizes a constant function, thus effectively testing whether there exist any feasible elements.\n\n For each of $10^6$ independent instances of the Ising model in (\\ref{IsingModel}) we performed the semidefinite test, as well as tested the entropic inequalities (\\ref{ineqE1}) to (\\ref{ineqE6}) together with all their permutations. \n\n The following table gives the approximate fraction of rejections. In the table, $E^{\\cup}_1$ (and analogously for $E^{\\cup}_2$ and $E^{\\cup}_4$) means that we test the inequality in (\\ref{ineqE1}) as well as its two permutations, and we count the fraction of the sample that violates any of these three inequalities, i.e., we take the union of the corresponding rejection regions. The entry `Combined' signifies the fraction of rejections due to violations of at least one of the inequalities (\\ref{ineqE1}) to (\\ref{ineqE6}) or any of their permutations. Finally `Semidefinite' denotes the fraction of rejections for the semidefinite test. \n\n\n\n\n\\begin{equation*}\n\\begin{split}\n& E_1^{\\cup}: 0.57,\\quad E_2^{\\cup}: 0.60, \\quad E_3: 0.54, \\\\\n& E_4^{\\cup}: 0.63,\\quad E_5: 0.40, \\quad E_6: 0.60,\\\\\n& \\textrm{Combined}: 0.64,\\\\\n& \\textrm{Semidefinite}: 0.77\n\\end{split}\n\\end{equation*}\n\n\n\n\n\nSince the fraction of rejections is higher for the semidefinite test than for all the entropic inequalities combined, this suggests that the semidefinite test in some sense has a `larger' region of rejection, and thus would be the stronger test. To get some information on the relation between the two regions of rejection, we checked whether we could find any case where the semidefinite test accepted an instance that had been rejected by some of the entropic inequalities.
However, we could find no such case, which suggests that the region of rejection for the collection of entropic inequalities is contained in the region of rejection for the semidefinite test.\n\n\\subsection{\\label{SecComparisonMonotone}Comparison on a monotone family of distributions}\n\nIn section \\ref{SecMonotonicity} we argued that the compatibility of distributions with respect to a given bipartite DAG is monotonous under local operations, and that the semidefinite test also satisfies this property if we use universal feature maps.\nIn section \\ref{SecTripartiteMonotone} we introduced a particular tripartite family of distributions $\\tilde{P}^{3:D}_{p}$ that can be generated from the appropriate maximally correlated distribution by local operations, and where we could show that this family cuts the boundary of the semidefinite compatibility region at $p = 1-1\/\\sqrt{2}$. Here we compare the performance of the entropic tests with the semidefinite test on this particular family of distributions. \n\n\\subsubsection{\\label{SecBinary}Binary variables}\n\n\n\n\nWe begin with the case of three binary variables, i.e., each variable can take two possible values. In this case $\\tilde{P}^{3:2}_{p}$ reduces to \n\\begin{equation}\n\\label{tildeP3} \n\\begin{split}\n\\tilde{P}^{3:2}_p(\\tilde{x}_1,\\tilde{x}_2,\\tilde{x}_3) = \\left\\{\\begin{matrix} \n\\frac{1}{8}(4-6p+3p^2),\\quad \\textrm{if}\\quad \\tilde{x}_1 = \\tilde{x}_2 = \\tilde{x}_3,\\\\\n\\frac{1}{8}p(2-p),\\quad \\textrm{otherwise}.\n\\end{matrix}\\right.\n\\end{split}\n\\end{equation}\n\n\n\n\\begin{figure}[h!]\n \\includegraphics[width= 9cm]{Figure6.pdf} \n\\caption{\\label{FigEntropyBinary} {\\bf Entropic versus semidefinite for binary variables.} For three binary variables described by the distribution $\\tilde{P}^{3:2}_{p}$ in equation (\\ref{tildeP3}), we calculate $E_1,\\ldots, E_6$ defined in (\\ref{ineqE1}) to (\\ref{ineqE6}) as functions of the parameter $p$.
When one of these functions turns negative, it implies that the distribution $\\tilde{P}^{3:2}_{p}$ is not compatible with the triangular bipartite DAG in figure \\ref{FigTriangle}. Moreover, we determine the $6\\times 6$ covariance matrix with respect to feature maps that assign orthogonal vectors to the outcomes. The red vertical line indicates the value $p = 1-1\/\\sqrt{2} \\approx 0.29$, determined in section \\ref{SecTripartiteMonotone}, below which the semidefinite test rejects the resulting covariance matrix. As one can see, the semidefinite test has the larger region of rejection, and is in this sense the stronger test for this particular binary setup.\n}\n\\end{figure}\n\n\nIn figure \\ref{FigEntropyBinary} we plot $E_1,\\ldots,E_6$ as functions of the parameter $p$. The entropic test rejects the model for a given $p$ whenever one of these functions becomes negative.\n For the calculation of the covariance matrix we choose feature maps that assign orthonormal vectors to the outcomes of the three random variables, thus being universal. As one can see from figure \\ref{FigEntropyBinary}, the semidefinite test starts to reject at higher values of $p$ than all the entropic tests, and is thus closer to the true value $p^{*}$ of the transition than any of the entropic tests.\n\n\n\n\n\n\\subsubsection{\\label{SecE1asymptotics}Asymptotics of the $E_1$ test}\n\n $E_1$ defined in (\\ref{ineqE1}) is the only one of the entropic quantities (\\ref{ineqE1}) to (\\ref{ineqE6}) that solely includes mono- and bipartite marginals; the others also depend on the full tripartite distribution. Since the test based on $E_1$ and the semidefinite test are thus on `equal footing' in this regard, it appears relevant to pay some additional attention to the relation between these two tests.
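Before doing so, we note that the binary computation underlying figure \\ref{FigEntropyBinary} is easy to reproduce. The Python sketch below (our own illustration, not the code used for the paper's figures) evaluates the entropies of $\\tilde{P}^{3:2}_{p}$ from (\\ref{tildeP3}) and assembles the quantities $E_1,\\ldots,E_6$ of (\\ref{ineqE1}) to (\\ref{ineqE6}); at the semidefinite transition point $p = 1-1\/\\sqrt{2}$ all six come out positive, i.e., the entropic tests already accept there.

```python
import numpy as np
from itertools import product

def joint(p):
    r"""Joint table of the binary family in eq. (tildeP3) over {0,1}^3."""
    return {x: (4 - 6*p + 3*p**2) / 8 if x[0] == x[1] == x[2] else p * (2 - p) / 8
            for x in product((0, 1), repeat=3)}

def H(P, idx):
    r"""Base-2 Shannon entropy of the marginal on the coordinates in idx."""
    marg = {}
    for x, q in P.items():
        key = tuple(x[i] for i in idx)
        marg[key] = marg.get(key, 0.0) + q
    return -sum(q * np.log2(q) for q in marg.values() if q > 0)

def entropic_tests(p):
    r"""Return (E_1, ..., E_6) as defined in eqs. (ineqE1)-(ineqE6)."""
    P = joint(p)
    h1, h2, h3 = H(P, (0,)), H(P, (1,)), H(P, (2,))
    h12, h13, h23 = H(P, (0, 1)), H(P, (0, 2)), H(P, (1, 2))
    h123 = H(P, (0, 1, 2))
    return (-h1 - h2 - h3 + h13 + h12,
            -3*h1 - 3*h2 - 3*h3 + 2*h12 + 2*h13 + 3*h23 - h123,
            -5*h1 - 5*h2 - 5*h3 + 4*h12 + 4*h13 + 4*h23 - 2*h123,
            -4*h1 - 4*h2 - 4*h3 + 3*h12 + 3*h13 + 4*h23 - 2*h123,
            -2*h1 - 2*h2 - 2*h3 + 3*h12 + 3*h13 + 3*h23 - 4*h123,
            -8*h1 - 8*h2 - 8*h3 + 7*h12 + 7*h13 + 7*h23 - 5*h123)
```

Scanning `entropic_tests(p)` over a grid of $p$ reproduces the sign changes shown in figure \\ref{FigEntropyBinary}.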
In section \\ref{SecTripartiteMonotone}, and in particular in Corollary \\ref{PropTransition}, we proved that the distribution $\\tilde{P}^{3:D}_p$, defined in equation (\\ref{ndvklanlkv}), satisfies the semidefinite test if and only if $p\\geq 1-1\/\\sqrt{2}$, irrespective of the alphabet size $D$. Hence, the `transition point' for the semidefinite test is independent of $D$ for this particular family of distributions. Here we shall show that the corresponding transition point for the test based on $E_1$ lies below $1-1\/\\sqrt{2}$, but asymptotically approaches this value as $D$ increases.\n\n\nThe family of distributions $\\tilde{P}^{3:D}_p$ in (\\ref{ndvklanlkv}) is permutation symmetric with respect to the three parties, and $E_1$ can, via equation (\\ref{BinaryAndMono}), be evaluated as\n\\begin{equation}\n\\label{nfajlbakl}\n\\begin{split}\nE_1 = & -3H(1) +2H(12)\\\\\n= & -3\\log D \\\\\n& -2(1-\\frac{1}{D}) p(2-p)\\log \\Big[ p(2-p)\\frac{1}{D^2}\\Big]\\\\\n& -2\\Big[(1-p)^2+ p(2-p)\\frac{1}{D}\\Big]\\log \\Big[(1-p)^2\\frac{1}{D}+ p(2-p)\\frac{1}{D^2}\\Big].\n\\end{split}\n\\end{equation}\n\n\n \n\n\n\n\nOne can confirm that $E_1(0) = -\\log D$, $E_1(1) = \\log D$, and \n\\begin{equation}\n\\begin{split}\n\\frac{dE_1}{dp} = 4(1- \\frac{1}{D})(1-p)\\log\\Big[1 + D\\frac{(1-p)^2}{p(2-p)}\\Big],\n\\end{split}\n\\end{equation}\nwhich is non-negative for $0\\leq p\\leq 1$. Hence, for each fixed $D$, the function $E_1$ is monotonically increasing for $0\\leq p\\leq 1$, and thus the equation $E_1(p) = 0$ has exactly one root, which is situated somewhere in the open interval $(0,1)$. 
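This root is easy to locate numerically, using the monotonicity just established. The Python sketch below (ours, for illustration; base-$2$ logarithms, matching the entropy convention of this section) implements the closed-form expression (\\ref{nfajlbakl}) and bisects for the unique zero of $E_1$.

```python
import numpy as np

def E1(p, D):
    r"""Closed-form E_1 for the family P^{3:D}_p, cf. eq. (nfajlbakl); base-2 logs."""
    a = p * (2.0 - p)        # shorthand for p(2-p)
    b = (1.0 - p) ** 2       # shorthand for (1-p)^2
    return (-3.0 * np.log2(D)
            - 2.0 * (1.0 - 1.0 / D) * a * np.log2(a / D**2)
            - 2.0 * (b + a / D) * np.log2(b / D + a / D**2))

def root_of_E1(D, lo=1e-3, hi=1 - 1 / np.sqrt(2)):
    r"""Bisect for the unique zero of p -> E1(p, D); E1 is increasing on [0, 1]."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if E1(mid, D) < 0 else (lo, mid)
    return 0.5 * (lo + hi)
```

For every tested $D$ the root lies strictly below $1-1\/\\sqrt{2}$ and moves toward that value as $D$ grows, in accordance with the analysis that follows.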
Thus, analogous to the semidefinite test, the test based on $E_1$ will reject all elements in the family $\\tilde{P}^{3:D}_p$ below a certain transition point, and accept all distributions above that value.\nNext one can confirm that \n\\begin{equation*}\nE_1(1-\\frac{1}{\\sqrt{2}}) = \\log \\frac{2D}{D+1} +\\frac{1}{D} \\log\\frac{2^D}{ D+1}>0,\\quad D = 2,3,\\ldots.\n\\end{equation*}\nSince $E_1$ is thus monotonically increasing with respect to $p$, we can conclude that the root $\\tilde{p}$ of $E_1(\\tilde{p}) = 0$ is such that $\\tilde{p} < 1-1\/\\sqrt{2}$ for all $D\\geq 2$. Finally we wish to determine the asymptotic value of the root $\\tilde{p}$ as $D\\rightarrow\\infty$. To this end we rewrite (\\ref{nfajlbakl}) such that we highlight the different orders of dependency on $D$,\n\\begin{equation}\n\\label{adflbnknbl}\n\\begin{split}\nE_1 =& 2\\big[1+\\frac{1}{\\sqrt{2}}-p\\big]\\big[p-(1 -\\frac{1}{\\sqrt{2}})\\big] \\log D\\\\\n& -2 p(2-p)\\log[p(2-p)] -4(1-p)^2\\log (1-p)\\\\\n& -2 p(2-p)\\frac{1}{D}\\log D\\\\\n& -2(1-p)^2\\log \\Big[1+\\frac{p(2-p)}{(1-p)^2}\\frac{1}{D}\\Big]\\\\\n& +2p(2-p)\\frac{1}{D}\\log\\left[\\frac{p (2-p)}{(1-p)^2}\\right]\\\\\n& -2p(2-p)\\frac{1}{D}\\log \\Big[1+\\frac{p(2-p)}{(1-p)^2}\\frac{1}{D}\\Big].\n\\end{split}\n\\end{equation}\nOn any interval $\\delta \\leq p \\leq 1-\\delta$, with $1\/2>\\delta >0$, the last four lines of (\\ref{adflbnknbl}) each approach zero as $D\\rightarrow\\infty$.\nMoreover, one can note that the leading order term (in the first line) increases monotonically for $0 \\leq p\\leq 1$ for each fixed $D$ and switches from negative to positive at $p = 1-1\/\\sqrt{2}$. If one fixes $\\epsilon >0$, one can see that for all sufficiently large $D$, it is the case that $E_1(p)>0$ for all $1-1\/\\sqrt{2} +\\epsilon \\leq p\\leq 1-\\delta$, and $E_1(p) \\leq 0$ for all $0\\leq p\\leq 1-1\/\\sqrt{2}-\\epsilon$.
We can thus conclude that the root $\\tilde{p}$\n of $E_1(\\tilde{p}) =0$ in the interval $0\\leq \\tilde{p}\\leq 1$ approaches $1-1\/\\sqrt{2}$ as $D\\rightarrow \\infty$.\n\n\n\n\\subsubsection{\\label{SecIncreasingAlphabets}Comparison on increasing alphabets}\n\n\\begin{figure}[h!]\n \\includegraphics[width= 15cm]{Figure7.pdf} \n\\caption{\\label{FigAlphabetSize} {\\bf Entropic versus semidefinite tests for increasing alphabet sizes.} \nFor the distribution $\\tilde{P}^{3:D}_{p}$ in (\\ref{ndvklanlkv}) we compare the entropic and semidefinite test as functions of $D$.\nHere we determine the smallest value of $p$ for which the respective test accepts $\\tilde{P}^{3:D}_{p}$, as a function of the local alphabet size $D$. From section \\ref{SecTripartiteMonotone} we know that the transition point for the semidefinite test is $p = 1-1\/\\sqrt{2} \\approx 0.29$, independently of $D$ (the red dashed line). We also plot (blue squares) the minimal value of $p$ for which all of the entropic inequalities (\\ref{ineqE1}) to (\\ref{ineqE6}) are satisfied, as a function of $D$. The transition point for this entropic test crosses the red line at $D = 32$. Hence, for the family of distributions $\\tilde{P}^{3:D}_{p}$, the entropic tests become stronger than the semidefinite test for alphabet sizes beyond $32$. Finally, we plot (green circles) the minimal value of $p$ for which $E_1(p)\\geq 0$, as a function of $D$. By section \\ref{SecE1asymptotics} we know that this transition point asymptotically approaches $1-1\/\\sqrt{2}$.\n}\n\\end{figure}\n\n\nIn the previous section we found that the semidefinite test is stronger than the entropic one for testing membership of distributions of the form $\\tilde{P}^{3:2}_p$. Here, we investigate how these two classes of tests compare when the size $D$ of the local alphabets increases. We know from section \\ref{SecTripartiteMonotone} that the transition point of the semidefinite test is independent of $D$ for this particular family of distributions.
It could thus potentially be the case that the entropic test would become stronger than the semidefinite test for sufficiently large alphabet sizes. This is indeed what we find in the numerical evaluation of the entropic test, which we display in figure \\ref{FigAlphabetSize}. \n\nAs pointed out in section \\ref{SecE1asymptotics}, all the entropic inequalities, apart from $E_1$, depend on the full tripartite distribution, while $E_1$ and the semidefinite test only utilize the bi- and mono-partite marginals. We already know from the previous section that the test based on $E_1$ is always weaker than the semidefinite test for the family $\\tilde{P}^{3:D}_p$, but that it approaches the semidefinite test in the limit of large alphabet sizes $D$. As suggested by the plot in figure \\ref{FigAlphabetSize}, the convergence is very slow. As an additional indication one may note that for an alphabet size of $D = 10^7$ the root of the equation $E_1(p) = 0$ is $p\\approx 0.26$ while the limit is $p\\approx 0.29$. \n\n\n\n\n\n\n\\section{\\label{SecSummaryOutlook} Summary and outlook}\n\nIn this work we have considered the constraints imposed by a large class of causal structures on the covariance matrix of the observed variables. More specifically, we have shown that each bipartite DAG induces a decomposition that every covariance matrix resulting from the corresponding causal model has to satisfy. Such decompositions can be formulated in terms of semidefinite programs that allow for a straightforward and efficient computational treatment of the problem (as opposed to algebraic geometry solutions). A violation of the condition imposed by the bipartite DAG under test (or in other terms, the non-feasibility of the semidefinite program) thus implies that the observed covariance matrix is not compatible with it. We have also shown that every decomposition associated with a bipartite DAG can be realized by a causal model on that graph.
\n\nFurthermore, we have made comparisons between the performance of the semidefinite test and tests based on information theoretic inequalities formulated in terms of entropies, where the results indicate that the semidefinite test outperforms the entropic test for moderate alphabet sizes of the random variables, while the latter become more powerful for large alphabet sizes.\n\nThese results open several directions for future research. Here, we have restricted attention to characterising the set of covariance matrices compatible with a given causal structure.\nIn real-world situations, however, the covariance matrix is unknown and has to be estimated from a limited number of samples drawn from the underlying distribution.\nThis raises the question of how to turn the theory developed here into statistical hypothesis tests for a presumed causal structure.\nAn obvious idea would be to construct a confidence region for the estimated covariance matrix and reject the hypothesis if the confidence region does not intersect the set compatible with the causal assumption.\nWe speculate, though, that it might be simpler to obtain statistically sound results by employing convex duality, as explained in the context of figure~\\ref{fig:witness}.\nIndeed, assume that $X$ is such that all compatible covariance matrices have non-negative inner product with $X$.\nThe inner product between $X$ and the true covariance matrix is a scalar linear function of the distribution of the observable variables.\nA one-sided statistical hypothesis test for $\\mathrm{tr}\\,\\big(X\\,\\mathrm{Cov}(Y)\\big) \\leq 0$ with any desired significance level is therefore easy to construct.\nIt will automatically also test the causal hypothesis at the same significance level.\nWhile any $X$ gives rise to such a test, their power to identify a given true incompatible distribution may vary wildly.\nOne way of making an informed choice for $X$ would be as follows: Split the samples into two parts.\nIf the
empirical covariance matrix of the first part is compatible with the hypothesis, accept. \nIf not, the dual SDP (\\ref{eqn:dual}) will identify a witness $X^\\star$ that separates the empirical matrix from the compatible set.\nNow use the test based on $X^\\star$ with the second part of the samples.\nWe leave the details to future work.\n\n\n\nAnother immediate question is to better understand the relation between the semidefinite and the entropic tests. Similarly, it would be highly desirable to combine our results with other tools that have very recently been proposed in order to characterize complex DAGs \\cite{Chaves2016,Rosset2016,wolfe2016inflation}. On a more general level it is noteworthy that by restricting to covariance we turn a highly non-linear problem into what is essentially a convex optimization. Understanding how far this can be pushed (considering higher order moments, for instance) would certainly give us new geometric insights on the nature of this problem.\nSince we here have focused on a setting where all correlations of observed variables are due to latent variables, it is very reasonable to ask if tests based on covariances can be extended to more general types of DAGs that do not have this bipartite structure.\n\nFrom a more fundamental perspective our work may have implications for the current research program on the foundations of quantum physics. Bayesian networks have attracted growing attention as means to understand the role of causality in quantum mechanical systems \\cite{Leifer2013,Fritz2012,Fritz2014,Henson2014,Chaves2015a,Piennar2014,ried2015quantum,Costa2016,horsman2016can}. One may thus ask whether the methods we have employed here can be generalized to the case of quantum causal structures, where for example some nodes in the graph represent quantum states without a classical analogue.
Any positive results along this line would certainly be highly relevant in the context of quantum causal modeling and once more highlight the very fruitful interplay between the fields of causal inference and foundational aspects of quantum mechanics. \n\n\n\n\\begin{acknowledgments}\n\nWe thank Thomas Kahle and Johannes Textor for productive discussions during the early stages of this project.\n\nThis work has been supported by the Excellence Initiative of the German Federal and State Governments (Grants ZUK 43 and 81), the ARO under contract W911NF-14-1-0098 (Quantum Characterization, Verification, and Validation), and the DFG (SPP1798 CoSIP). \n\n\\end{acknowledgments}\n\n\\section{Introduction}\n\nIn \\cite{Davies1995}, E. B. Davies develops an abstract method for establishing off-diagonal estimates for the heat kernels of self-adjoint uniformly elliptic higher-order partial differential operators on $\\mathbb{R}^d$. In particular, Davies considers a general self-adjoint operator of the form\n\\begin{equation*}\nHf(x)=\\sum_{|\\alpha|,|\\beta|\\leq m}D^{\\alpha}\\left\\{a_{\\alpha,\\beta}(x)D^{\\beta}f(x)\\right\\}\n\\end{equation*}\nand studies the corresponding ``heat'' kernel, $K_H$, of $H$ and its properties; here, $D^{\\gamma}=(-i\\partial_{x_1})^{\\gamma_1}(-i\\partial_{x_2})^{\\gamma_2}\\cdots(-i\\partial_{x_d})^{\\gamma_d}$ for each multi-index $\\gamma$.
Of course, when it exists, $K_H=K_H(t,x,y)$ is the integral kernel for the semigroup $\\{e^{-tH}\\}$ on $L^2$ generated by $H$ and is also recognized as the fundamental solution to the parabolic equation\n\\begin{equation*}\n(\\partial_t+H)u=0.\n\\end{equation*}\nWhen $H$ is uniformly elliptic, i.e., $H$ is comparable to the $m$-th power of the Laplacian $(-\\Delta)^m$, and under certain conditions discussed below, the method yields the estimate\n\\begin{equation}\\label{eq:EllipEst}\n|K_H(t,x,y)|\\leq \\frac{C_1}{t^{d\/2m}}\\exp\\left(-tC_2\\left|\\frac{x-y}{t}\\right|^{2m\/(2m-1)}+Mt\\right)\n\\end{equation}\nfor $t>0$, $x,y\\in\\mathbb{R}^d$, where $C_1,C_2$ and $M$ are positive constants. For the canonical case in which $H=(-\\Delta)^m$, this estimate, with $M=0$, is readily established using an optimization argument and the Fourier transform. As discussed in \\cite{Randles2017}, the optimization therein naturally selects the function $x\\mapsto C_2|x|^{2m\/(2m-1)}$ as the Legendre-Fenchel transform of the symbol (or Fourier multiplier) $|\\xi|^{2m}$ of the operator $(-\\Delta)^m$. We encourage the reader to see the articles \\cite{Randles2017}, \\cite{Barbatis1996} and \\cite{Blunck2005} for discussion of the appearance of the Legendre-Fenchel transform in heat kernel estimates. In the case that $H$ is a second-order operator, i.e., $m=1$, this is the well-studied Gaussian estimate \\cite{Saloff-Coste2010}. The applications of estimates of the form \\eqref{eq:EllipEst} are legion. In particular, \\eqref{eq:EllipEst} guarantees that the semigroup $\\{e^{-tH}\\}$ extends to a strongly continuous semigroup $\\{e^{-tH_p}\\}$ on $L^p$ for all $1\\leq p<\\infty$ and moreover their generators, $H_p$, have spectra independent of $p$ \\cite{Davies1995}.\n\nIn the case that the coefficients $\\{a_{\\alpha,\\beta}(x)\\}$ of $H$ are bounded and H\\\"{o}lder continuous, Levi's parametrix method, adapted to parabolic equations by A. Friedman and S. D.
Eidelman, guarantees that a continuous heat kernel $K_H$ exists and satisfies the estimate \\eqref{eq:EllipEst} \\cite{Friedman1964,Eidelman1969}. When the coefficients $\\{a_{\\alpha,\\beta}\\}$ are merely bounded and measurable, Davies' method yields the estimate \\eqref{eq:EllipEst} subject to a dimension-order restriction that $d\/2m<1$. The restriction can be weakened to $d\/2m\\leq 1$ by the method of \\cite{Auscher1998,terElst1997} but it cannot be weakened any further \\cite{Davies1997a,deGiorgi1968,Mazya1968}. Specifically, for each integer $m$ such that $d\/2m>1$, Davies \\cite{Davies1997a} constructs a uniformly elliptic self-adjoint operator $H$ of order $m$ (which is a system when $d$ is odd) with bounded coefficients (in fact, smooth away from the origin) whose semigroup $\\{e^{-tH}\\}$ cannot be extended to a strongly continuous semigroup on $L^p$ for all $1\\leq p<\\infty$ and therefore the estimate \\eqref{eq:EllipEst} cannot hold. Further discussion of this example can be found in \\cite{Davies1997}. \n\nMoving beyond the elliptic (isotropic) setting, in this article, we introduce a class of constant-coefficient partial differential operators, which we call positive-homogeneous operators. Introduced in \\cite{Randles2017}, these are hypoelliptic operators that interact well with certain dilations of the underlying space and they play the role that $(-\\Delta)^m$ plays in the elliptic theory. We then consider a class of variable-coefficient operators, each comparable to a positive-homogeneous operator and study their associated heat kernels. We show that Davies' method, with suitable modification, carries over into our naturally anisotropic setting. \n\nTo motivate our study, consider the constant-coefficient operator\n\\begin{equation*}\n\\Lambda=-\\partial_{x_1}^2+\\partial_{x_2}^4\n\\end{equation*}\non $\\mathbb{R}^2$. Though this operator is not elliptic, it has many properties shared by elliptic operators. 
It is, for example, hypoelliptic; this can be seen by studying its symbol,\n\\begin{equation*}\nR(\\xi)=R(\\xi_1,\\xi_2)=\\xi_1^2+\\xi_2^4.\n\\end{equation*}\nAs $(-\\Delta)^m$ plays well with (isotropic) dilations of $\\mathbb{R}^d$, $\\Lambda$ has the property that \n\\begin{equation*}\nt\\Lambda=\\delta_{1\/t}\\circ \\Lambda\\circ \\delta_t\n\\end{equation*}\nfor all $t>0$ where $\\delta_t(f)(x_1,x_2)=f(t^{1\/2}x_1,t^{1\/4}x_2)$ is given by the anisotropic dilation $(x_1,x_2)\\mapsto (t^{1\/2}x_1,t^{1\/4}x_2)$ of $\\mathbb{R}^2$; for this reason, $\\Lambda$ is said to be homogeneous. As discussed in \\cite{Randles2015a}, the homogeneity of $\\Lambda$ is essential for the appearance of its heat kernel\n\\begin{equation*}\nK_{\\Lambda}(t,x,y)=\\frac{1}{\\sqrt{2\\pi}}\\int_{\\mathbb{R}^2}e^{-i(x-y)\\cdot\\xi}e^{-tR(\\xi)}\\,d\\xi,\n\\end{equation*}\ndefined for $t>0$ and $x,y\\in\\mathbb{R}^2$, as an attractor for convolution powers of complex-valued functions, i.e., its appearance in local limit theorems. An optimization argument, similar to that for $K_{(-\\Delta)^m}$, gives the estimate\n\\begin{equation}\\label{eq:TestHKEst}\n|K_{\\Lambda}(t,x,y)|\\leq\\frac{C_1}{t^{\\omega_{\\Lambda}}}\\exp\\left(-tC_2R^{\\#}\\left(\\frac{x-y}{t}\\right)\\right)\n\\end{equation}\nfor $t>0$ and $x,y\\in\\mathbb{R}^2$ where\n\\begin{equation*}\nR^{\\#}(x)=R^{\\#}(x_1,x_2)=\\left(\\frac{x_1}{2}\\right)^2+3\\left(\\frac{x_2}{4}\\right)^{4\/3}\n\\end{equation*}\nis the Legendre-Fenchel transform of $R$ and $\\omega_{\\Lambda}=1\/2+1\/4=3\/4$ is known as the homogeneous order associated to $\\Lambda$. 
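The claim that $R^{\\#}$ is the Legendre-Fenchel transform of $R$, and the homogeneity of $R$ itself, can be verified numerically. The following Python sketch (ours, purely for illustration) compares $\\sup_{\\xi}\\{x\\cdot\\xi-R(\\xi)\\}$, computed by convex optimization, with the closed form above, and checks the scaling relation $R(t^{1\/2}\\xi_1,t^{1\/4}\\xi_2)=tR(\\xi)$.

```python
import numpy as np
from scipy.optimize import minimize

def R(xi):
    """Symbol of Lambda = -d^2/dx1^2 + d^4/dx2^4 on R^2."""
    return xi[0]**2 + xi[1]**4

def R_sharp(x):
    """Closed-form Legendre-Fenchel transform (x1/2)^2 + 3(|x2|/4)^(4/3)."""
    return (x[0] / 2)**2 + 3 * (abs(x[1]) / 4)**(4.0 / 3.0)

def legendre(x):
    """Numerical Legendre-Fenchel transform sup_xi { x.xi - R(xi) }.

    Since R is smooth and convex, minimizing R(xi) - x.xi gives the supremum."""
    res = minimize(lambda xi: R(xi) - x[0]*xi[0] - x[1]*xi[1], x0=[0.0, 0.0])
    return -res.fun
```

The optimization is well behaved here because $R(\\xi)-x\\cdot\\xi$ is smooth and convex, so a generic quasi-Newton routine finds the global minimum.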
As we shall see, the homogeneous order $\\omega_{\\Lambda}$ depends on the order of derivatives appearing in $\\Lambda$ and on the dimension of the underlying space; it generalizes the exponent $d\/2m$ appearing in the prefactor in \\eqref{eq:EllipEst} governing small-time on-diagonal decay.\n\nBy analogy to the theory of self-adjoint uniformly elliptic operators and their heat kernel estimates, we then ask: For a self-adjoint variable-coefficient operator $H$ which is comparable to a homogeneous operator $\\Lambda$ with symbol $R$, under what conditions will the heat kernel for $H$ exist and satisfy an estimate of the form\n\\begin{equation*}\n|K_{H}(t,x,y)|\\leq \\frac{C_1}{t^{\\omega_{\\Lambda}}}\\exp\\left(-tC_2R^{\\#}\\left(\\frac{x-y}{t}\\right)+Mt\\right)?\n\\end{equation*}\nIt was shown in \\cite{Randles2017}, using Levi's parametrix method adapted to our naturally anisotropic setting, that the above estimate is satisfied provided, in particular, that $H$ has H\\\"{o}lder continuous coefficients (see also \\cite{Eidelman1960}). In this article, we extend these results to the realm in which $H$ has bounded measurable coefficients. To this end, we employ the abstract method of E. B. Davies which we modify in two ways. First, we adapt Davies' single-variable optimization procedure, which produces the term in the exponent of \\eqref{eq:EllipEst}, to a multivariate optimization procedure suitably adapted to our anisotropic setting. In this way, we see the natural appearance of the Legendre-Fenchel transform. Our second modification to the theory allows for the dimension-order restriction $d\/2m<1$ ($\\omega_{\\Lambda}<1$ in our case) to be lifted provided that certain integer powers of $H$ also behave well in perturbation estimates. \n\n\n\n\n\\section{Preliminaries}\n\nAs discussed in \\cite{Randles2017}, to introduce the class of model operators considered in this article, it is useful to work in a framework which is coordinate-free.
In view of the anisotropic nature of the problem we want to study, it is important to be free to choose coordinate systems adapted to each particular operator $\\Lambda$ at hand. To this end, we consider a $d$-dimensional real vector space $\\mathbb{V}$ equipped with the standard smooth structure; we do not affix $\\mathbb{V}$ with a norm or basis. The dual space of $\\mathbb{V}$ is denoted by $\\mathbb{V}^*$ and the dual pairing is denoted by $\\xi(x)$ for $x\\in\\mathbb{V}$ and $\\xi\\in\\mathbb{V}^*$. Let $dx$ and $d\\xi$ be Haar (Lebesgue) measures on $\\mathbb{V}$ and $\\mathbb{V}^*$, respectively, which we take to be suitably normalized so that our conventions for the Fourier transform and inverse Fourier transform, given below, make each unitary. Throughout this article, all functions on $\\mathbb{V}$ and $\\mathbb{V}^*$ are understood to be complex-valued. Given a non-empty open set $\\Omega\\subseteq \\mathbb{V}$, the usual Lebesgue spaces are denoted by $L^p(\\Omega)=L^p(\\Omega,dx)$ and equipped with their usual norms $\\|\\cdot\\|_p$ for $1\\leq p\\leq \\infty$. In the case that $p=2$, the corresponding inner product on $L^2(\\Omega)$ is denoted by $\\langle\\cdot,\\cdot\\rangle$. Of course, we will also work with $L^2(\\mathbb{V}^*)\n:=L^2(\\mathbb{V}^*,d\\xi)$; here the $L^2$-norm and inner product will be denoted by $\\|\\cdot\\|_{2^*}$ and $\\langle\\cdot,\\cdot\\rangle_*$ respectively. 
The Fourier transform $\\mathcal{F}:L^2(\\mathbb{V})\\to L^2(\\mathbb{V}^*)$ and inverse Fourier transform $\\mathcal{F}^{-1}:L^2(\\mathbb{V}^*)\\to L^2(\\mathbb{V})$ are defined, initially, for Schwartz functions $f\\in \\mathcal{S}(\\mathbb{V})$ and $g\\in\\mathcal{S}(\\mathbb{V}^*)$ by the formulas\n\\begin{eqnarray*}\n\\mathcal{F}(f)(\\xi)=\\hat{f}(\\xi)=\\int_{\\mathbb{V}}e^{i\\xi(x)}f(x)\\,dx && (\\xi\\in\\mathbb{V}^*)\n\\end{eqnarray*}\nand\n\\begin{eqnarray*}\n\\mathcal{F}^{-1}(g)(x)=\\check{g}(x)=\\int_{\\mathbb{V}^*}e^{-i\\xi(x)}g(\\xi)\\,d\\xi && (x\\in\\mathbb{V}).\n\\end{eqnarray*}\n\n\n\\noindent The symbols $\\mathbb{R,C,Z}$ mean what they usually do; $\\mathbb{N}$ denotes the set of non-negative integers. The symbols $\\mathbb{R}_+$ and $\\mathbb{N}_+$ denote the set of strictly positive elements of $\\mathbb{R}$ and $\\mathbb{N}$, respectively, and $\\mathbb{C}_+$ denotes the set of complex numbers $z$ for which $\\Re(z)>0$. Also, $\\mathbb{R}_+^d$ and $\\mathbb{N}_+^d$ respectively denote the set of $d$-tuples of $\\mathbb{R}_+$ and $\\mathbb{N}_+$. Adopting the summation notation for semi-elliptic operators presented in L. H\\\"{o}rmander's treatise \\cite{Hormander1983}, for a fixed $\\mathbf{m}=(m_1,m_2,\\dots,m_d)\\in\\mathbb{N}_+^d$, we write\n\\begin{equation*}\n|\\beta:\\mathbf{m}|=\\sum_{k=1}^d\\frac{\\beta_k}{m_k}\n\\end{equation*} for all multi-indices $\\beta=(\\beta_1,\\beta_2,\\dots,\\beta_d)\\in\\mathbb{N}^d$.\\\\\n\n\\noindent For the rest of this section, $W$ will denote a $d$-dimensional real vector space (meaning $\\mathbb{V}$ or $\\mathbb{V}^*$) and $\\Omega$ will denote an open subset of $W$. The space of smooth functions on $\\Omega$ is denoted by $C^\\infty(\\Omega)$ and the space of smooth functions with compact support in $\\Omega$ is denoted by $C_0^\\infty(\\Omega)$. 
Taking $C_0^\\infty(\\Omega)$ to be equipped with its usual topology given by semi-norms, its dual space, the space of distributions, is denoted by $\\mathcal{D}'(\\Omega)$. Given $w\\in W$, the derivation $D_{w}:\\mathcal{D}'(\\Omega)\\to\\mathcal{D}'(\\Omega)$ is originally defined for $f\\in C_0^\\infty(\\Omega)$ by the formula\n\\begin{equation*}\n(D_wf)(x)=i\\partial_wf(x)=i\\left(\\lim_{t\\to 0}\\frac{f(x+tw)-f(x)}{t}\\right)\n\\end{equation*}\nfor $x\\in \\Omega$. Further, given a basis $\\mathbf{w}=\\{w_1,w_2,\\dots,w_d\\}$ of $W$, we introduce, for each multi-index $\\beta\\in \\mathbb{N}^d$, the differential operator $D_{\\mathbf{w}}^{\\beta}:\\mathcal{D}'(\\Omega)\\to\\mathcal{D}'(\\Omega)$ defined by\n\\begin{equation*}\nD_{\\mathbf{w}}^\\beta=(D_{w_1})^{\\beta_1}(D_{w_2})^{\\beta_2}\\cdots(D_{w_d})^{\\beta_d}.\n\\end{equation*}\nWe shall denote by $\\mbox{End}(W)$ and $\\mbox{Gl}(W)$ the set of endomorphisms and isomorphisms of $W$ respectively. Given $E\\in\\mbox{End}(W)$, we consider the one-parameter group $\\{t^E\\}_{t>0}\\subseteq \\mbox{Gl}(W)$ defined by \n\\begin{equation*}\nt^E=\\exp((\\log t)E)=\\sum_{k=0}^{\\infty}\\frac{(\\log t)^k}{k!}E^k\n\\end{equation*}\nfor $t>0$. These one-parameter subgroups of $\\mbox{Gl}(W)$ allow us to define continuous one-parameter groups of operators on the space of distributions as follows: Given $E\\in\\mbox{End}(W)$ and $t>0$, first define $\\delta_t^E(f)$ for $f\\in C_0^{\\infty}(W)$ by $\\delta_t^E(f)(x)=f(t^Ex)$ for $x\\in W$. Extending this to the space of distributions on $W$ in the usual way, the collection $\\{\\delta_t^E\\}_{t>0}$ is a continuous one-parameter group of operators on $\\mathcal{D}'(W)$. In the next section, we shall use these one-parameter groups to define a notion of homogeneity for partial differential operators. 
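For computations, the group elements $t^E=\\exp((\\log t)E)$ are available from any matrix-exponential routine. The short Python sketch below (illustrative only) checks the group law $t^E s^E=(ts)^E$, the normalization $1^E=I$, and the action of a diagonal exponent, which simply rescales each coordinate by a power of $t$.

```python
import numpy as np
from scipy.linalg import expm

def t_pow(E, t):
    """The one-parameter group element t^E = exp((log t) E), defined for t > 0."""
    return expm(np.log(t) * E)
```

The group law holds because $(\\log s)E$ and $(\\log t)E$ commute, so the matrix exponentials multiply as in the scalar case.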
Given $\\alpha=(\\alpha_1,\\alpha_2,\\dots,\\alpha_d)\\in\\mathbb{R}_+^d$ and a basis $\\mathbf{w}=\\{w_1,w_2,\\dots,w_d\\}$ of $W$, we denote by $E_{\\mathbf{w}}^\\alpha$ the isomorphism of $W$ defined by\n\\begin{equation}\\label{eq:DefofE}\nE_{\\mathbf{w}}^\\alpha w_k=\\frac{1}{\\alpha_k}w_k\n\\end{equation}\nfor $k=1,2,\\dots, d$.\\\\\n\n\\noindent Finally, given a basis $\\mathbf{w}=\\{w_1,w_2,\\dots,w_d\\}$ of $W$, we define the map $\\phi_{\\mathbf{w}}:W\\rightarrow\\mathbb{R}^d$ by setting $\\phi_{\\mathbf{w}}(w)=(x_1,x_2,\\dots,x_d)$ whenever $w=\\sum_{l=1}^d x_l w_l$. This map defines a global coordinate system on $W$; any such coordinate system is said to be a linear coordinate system on $W$. By definition, a polynomial on $W$ is a function $P:W\\rightarrow\\mathbb{C}$ that is a polynomial function in some (and hence any) linear coordinate system on $W$. Of course, in the linear coordinate system defined by $\\mathbf{w}$, each polynomial can be expressed as a linear combination of monomials of the form \n\\begin{equation}\\label{eq:Monomial}\nw_{\\mathbf{w}}^{\\beta}=(x_1)^{\\beta_1}(x_2)^{\\beta_2}\\cdots(x_d)^{\\beta_d}\n\\end{equation}\nwhere $\\beta=(\\beta_1,\\beta_2,\\dots,\\beta_d)\\in\\mathbb{N}^d$ and $\\phi_\\mathbf{w}(w)=(x_1,x_2,\\dots,x_d)$ as above. We say that a polynomial $P$ is positive-definite if its real part, $R=\\Re P$, is non-negative and has $R(w)=0$ only when $w=0$. \\\\\n\n\n\n\n\n\n\n\\section{Homogeneous operators}\\label{sec:HomogeneousOperators}\nIn this section we introduce a class of homogeneous constant-coefficient partial differential operators on $\\mathbb{V}$. These operators will serve as ``model'' operators in our theory in the way that integer powers of the Laplacian serve as model operators in the elliptic theory of partial differential equations. To this end, let $\\Lambda$ be a constant-coefficient partial differential operator on $\\mathbb{V}$ and let $P:\\mathbb{V}^*\\rightarrow\\mathbb{C}$ be its symbol.
Specifically, $P$ is the polynomial on $\\mathbb{V}^*$ defined by $P(\\xi)=e^{-i\\xi(x)}\\Lambda(e^{i\\xi(x)})$ for $\\xi\\in\\mathbb{V}^*$ (this is independent of $x\\in\\mathbb{V}$ precisely because $\\Lambda$ is a constant-coefficient operator). We first introduce the following notion of homogeneity of operators; it is mirrored by an analogous notion for symbols which we define shortly. \n\n\\begin{definition}\nGiven $E\\in\\mbox{End}(\\mathbb{V})$, we say that a constant-coefficient partial differential operator $\\Lambda$ is homogeneous with respect to the one-parameter group $\\{\\delta_t^E\\}$ if\n\\begin{equation*}\n\\delta_{1\/t}^E\\circ \\Lambda\\circ \\delta_t^E=t\\Lambda\n\\end{equation*}\nfor all $t>0$; in this case we say that $E$ is a member of the exponent set of $\\Lambda$ and write $E\\in\\Exp(\\Lambda)$. \n\\end{definition}\n\n\\noindent A constant-coefficient partial differential operator $\\Lambda$ need not be homogeneous with respect to a unique one-parameter group $\\{\\delta_t^E\\}$, i.e., $\\Exp(\\Lambda)$ is not necessarily a singleton. For instance, it is easily verified that, for the Laplacian $-\\Delta$ on $\\mathbb{R}^d$,\n\\begin{equation*}\n\\Exp(-\\Delta)=2^{-1}I+\\mathfrak{o}_d\n\\end{equation*}\nwhere $I$ is the identity and $\\mathfrak{o}_d$ is the Lie algebra of the orthogonal group, i.e., is given by the set of skew-symmetric matrices.\\\\\n\n\n\n\n\n\n\n\n\n\\noindent Given a constant coefficient operator $\\Lambda$ with symbol $P$, one can quickly verify that $E\\in\\Exp(\\Lambda)$ if and only if\n\\begin{equation}\\label{eq:homofsymbol}\ntP(\\xi)=P(t^F\\xi)\n\\end{equation}\nfor all $t>0$ and $\\xi\\in\\mathbb{V}^*$ where $F=E^*$ is the adjoint of $E$. More generally, if $P$ is any continuous function on $W$ and \\eqref{eq:homofsymbol} is satisfied for some $F\\in\\mbox{End}(W)$, we say that $P$ \\textit{is homogeneous with respect to} $\\{t^F \\}$ and write $F\\in\\Exp(P)$. 
This admittedly slight abuse of notation should not cause confusion. In this language, we see that $E\\in \\Exp(\\Lambda)$ if and only if $E^*\\in\\Exp(P)$.\\\\\n\n\\noindent We remark that the notion of homogeneity defined above is similar to that put forth for homogeneous operators on homogeneous (Lie) groups, e.g., Rockland operators \\cite{Folland1982}. The difference is mostly a matter of perspective: A homogeneous group $G$ is equipped with a fixed dilation structure, i.e., it comes with a one-parameter group $\\{\\delta_t\\}$, and homogeneity of operators is defined with respect to this fixed dilation structure. By contrast, we fix no dilation structure on $\\mathbb{V}$ and formulate homogeneity in terms of an operator $\\Lambda$ and the existence of a one-parameter group $\\{\\delta_t^E\\}$ that is compatible with $\\Lambda$ in the sense defined above. As seen in the study of convolution powers on the square lattice (see \\cite{Randles2015a}), it is useful to have this freedom.\n\n\n\n\n\n\n\n\n\\begin{definition}\\label{def:HomogeneousOperators}\nLet $\\Lambda$ be a constant-coefficient partial differential operator on $\\mathbb{V}$ with symbol $P$. We say that $\\Lambda$ is a positive-homogeneous operator if $P$ is a positive-definite polynomial and $\\Exp(\\Lambda)$ contains a diagonalizable endomorphism.\n\\end{definition}\n\n\\noindent As discussed above, for a positive-homogeneous operator $\\Lambda$, $\\Exp(\\Lambda)$ need not be a singleton. However, Lemma 2.10 of \\cite{Randles2017} guarantees that, for any $E_1,E_2\\in\\Exp(\\Lambda)$, \n\\begin{equation*}\n\\tr E_1=\\tr E_2.\n\\end{equation*}\nThus, for each positive-homogeneous operator $\\Lambda$ we may define the \\textit{homogeneous order} of $\\Lambda$ to be the number\n\\begin{equation*}\n\\mu_{\\Lambda}=\\tr E\n\\end{equation*}\nfor any $E\\in\\Exp(\\Lambda)$. We note that the term ``homogeneous order'' does not coincide with the usual ``order'' for partial differential operators. 
For instance, the Laplacian $-\\Delta$ on $\\mathbb{R}^d$ is a second-order operator; however, because $2^{-1}I\\in \\Exp(-\\Delta)$, its homogeneous order is $\\mu_{(-\\Delta)}=\\tr (2^{-1}I)=d\/2$.\\\\\n\n\n\n\n\\noindent The proposition below shows, in particular, that every positive-homogeneous operator on $\\mathbb{V}$ is semi-elliptic \\cite{Browder1957, Hormander1983} in some coordinate system. For a proof, see Section 2 of \\cite{Randles2017}.\n\n\\begin{proposition}\\label{prop:OperatorRepresentation}\nLet $\\Lambda$ be a positive-homogeneous operator on $\\mathbb{V}$. Then there exist a basis $\\mathbf{v}=\\{v_1,v_2,\\dots,v_d\\}$ of $\\mathbb{V}$ and $\\mathbf{m}=(m_1,m_2,\\dots,m_d)\\in\\mathbb{N}_+^d$ for which\n\\begin{equation}\\label{eq:OperatorRepresentation1}\n\\Lambda=\\sum_{|\\beta:\\mathbf{m}|=2}a_{\\beta}D_{\\mathbf{v}}^\\beta\n\\end{equation}\nwhere $\\{a_{\\beta}\\}\\subseteq\\mathbb{C}$. The isomorphism $E_{\\mathbf{v}}^{2\\mathbf{m}}\\in\\mbox{Gl}(\\mathbb{V})$, defined by \\eqref{eq:DefofE}, is a member of $\\Exp(\\Lambda)$ and therefore\n\\begin{equation*}\n\\mu_{\\Lambda}=|\\mathbf{1}:2\\mathbf{m}|=\\frac{1}{2m_1}+\\frac{1}{2m_2}+\\cdots+\\frac{1}{2m_d}\n\\end{equation*}\nwhere $\\mathbf{1}:=(1,1,\\dots,1)\\in \\mathbb{N}^d$. 
Furthermore, if $\\mathbf{v}^*$ denotes the dual basis on $\\mathbb{V}^*$ for the basis $\\mathbf{v}$, \n\\begin{equation*}\nP(\\xi)=\\sum_{|\\beta:\\mathbf{m}|=2}a_\\beta\\xi^{\\beta}\n\\end{equation*}\nwhere $\\xi^\\beta=\\xi_{\\mathbf{v}^*}^\\beta$ as in \\eqref{eq:Monomial} and the isomorphism $E_{\\mathbf{v}^*}^{2\\mathbf{m}}$ is a member of $\\Exp(P)$.\n\\end{proposition}\n\n\\noindent We remark that, if a given positive-homogeneous operator $\\Lambda$ is symmetric in the sense that $\\langle \\Lambda f,g\\rangle=\\langle f,\\Lambda g\\rangle$ for all $f,g\\in C_0^{\\infty}(\\mathbb{V})$, then its symbol $P$ is necessarily real-valued, i.e., $R=\\Re P=P$, and the coefficients $\\{a_{\\beta}\\}$ of Proposition \\ref{prop:OperatorRepresentation} are real numbers.\\\\\n\n\n\n\n\\section{Sobolev spaces, positive-homogeneous operators and their sesquilinear forms}\n\n\\noindent In the first part of this section, we define a family of Sobolev spaces on $\\mathbb{V}$. These spaces, which include those of the classical elliptic theory, were also discussed in the context of $\\mathbb{R}^d$ in \\cite{Kannai1969} using coordinates. Then, given a symmetric positive-homogeneous operator $\\Lambda$ on $\\mathbb{V}$ with symbol $R$, we study the symmetric sesquilinear form $Q_{\\Lambda}$ it defines. We then realize $\\Lambda$ as a self-adjoint operator on $L^2$ whose domain and form domain are characterized by the previously defined Sobolev spaces; everything here relies on the semi-elliptic representation of positive-homogeneous operators given in Proposition \\ref{prop:OperatorRepresentation}.\\\\\n\n\\noindent Let $1\\leq p< \\infty$, $\\mathbf{m}\\in \\mathbb{N}_+^d$ and let $\\mathbf{v}$ be a basis for $\\mathbb{V}$. 
For a non-empty open set $\\Omega\\subseteq \\mathbb{V}$, define\n\\begin{equation*}\nW^{\\mathbf{m},p}_{\\mathbf{v}}(\\Omega)=\\left\\{f\\in L^p(\\Omega):D_{\\mathbf{v}}^\\alpha f\\in L^p(\\Omega)\\hspace{.1cm}\\forall\\hspace{.1cm}\\alpha\\mbox{ with }|\\alpha:\\mathbf{m}|\\leq 1\\right\\}.\n\\end{equation*}\nFor any $f\\in W^{\\mathbf{m},p}_{\\mathbf{v}}(\\Omega)$ let\n\\begin{equation*}\n\\|f\\|_{W_{\\mathbf{v}}^{\\mathbf{m},p}(\\Omega)}=\\left[\\sum_{|\\alpha:\\mathbf{m}|\\leq 1}\\int_\\Omega|D_{\\mathbf{v}}^\\alpha f|^pdx\\right]^{1\/p}.\n\\end{equation*}\nClearly, $\\|\\cdot\\|_{W_{\\mathbf{v}}^{\\mathbf{m},p}(\\Omega)}$ is a norm on $W^{\\mathbf{m},p}_{\\mathbf{v}}(\\Omega)$ and the usual arguments show that $W^{\\mathbf{m},p}_{\\mathbf{v}}(\\Omega)$ is a Banach space in this norm. Naturally, we will call these spaces \\textit{Sobolev spaces}; in the context of $\\mathbb{R}^d$, these spaces were previously studied in \\cite{Demidenko1993} and \\cite{Kannai1969}. Notice that when $\\mathbb{V}=\\mathbb{R}^d$, $\\mathbf{v}=\\mathbf{e}$ and $\\mathbf{m}=(m,m,\\dots,m)$, our definition coincides with that of $W^{m,p}(\\Omega)$, the standard Sobolev spaces of $\\mathbb{R}^d$ where, in this case, the basis is immaterial. Let us also denote by $W_{\\mathbf{v},0}^{\\mathbf{m},p}(\\Omega)$ the closure of $C_0^{\\infty}(\\Omega)$ in the $\\|\\cdot\\|_{W_{\\mathbf{v}}^{\\mathbf{m},p}(\\Omega)}$ norm.\\\\\n\n\\noindent Temporarily, we restrict our attention to the case where $\\Omega=\\mathbb{V}$ and $p=2$. As one can check by the use of smooth cut-off functions and mollification, $C_0^{\\infty}(\\mathbb{V})$ is dense in $W_{\\mathbf{v}}^{\\mathbf{m},2}(\\mathbb{V})$. The following result follows by the standard method, c.f., \\cite{Lieb2001}; its proof is omitted.\n\n\\begin{lemma}\\label{charsobolevbyfourierlem}\nLet $\\mathbf{m}\\in\\mathbb{N}_+^d$, $\\mathbf{v}$ be a basis of $\\mathbb{V}$ and $\\mathbf{v}^*$ be the corresponding dual basis. 
Then\n\\begin{equation}\\label{charsobolevlemeq}\nW_{\\mathbf{v}}^{\\mathbf{m},2}(\\mathbb{V})=\\left\\{f\\in L^2(\\mathbb{V}): \\xi^{\\alpha}\\hat{f}(\\xi)\\in L^2(\\mathbb{V}^*)\\hspace{.1cm}\\forall\\hspace{.1cm}\\alpha\\mbox{ with }|\\alpha:\\mathbf{m}|\\leq 1\\right\\}\n\\end{equation}\nand\n\\begin{equation*}\n\\|f\\|^2_{W_{\\mathbf{v}}^{\\mathbf{m},2}(\\mathbb{V})}=\\sum_{|\\alpha:\\mathbf{m}|\\leq 1}\\|\\xi^{\\alpha}\\hat{f}(\\xi)\\|_{2^*}^2\n\\end{equation*}\nwhere $\\xi^{\\alpha}=\\xi_{\\mathbf{v}^*}^\\alpha$ as in \\eqref{eq:Monomial}.\n\\end{lemma}\n\n\n\n\\begin{lemma}\\label{charsobolevbyfourierlem2}\nLet $\\Lambda$ be a symmetric positive-homogeneous operator with symbol $R$ and, in view of Proposition \\ref{prop:OperatorRepresentation}, let $\\mathbf{m}\\in\\mathbb{N}_+^d$ and $\\mathbf{v}$ be a basis of $\\mathbb{V}$ as guaranteed by the proposition. \nThen\n\\begin{equation*}\nW_{\\mathbf{v}}^{\\mathbf{m},2}(\\mathbb{V})=\\left\\{f\\in L^2(\\mathbb{V}): \\int_{\\mathbb{V^*}}R(\\xi)|\\hat{f}(\\xi)|^2d\\xi<\\infty\\right\\}\n\\end{equation*}\nand moreover, the norms\n\\begin{equation*}\n\\|f\\|':=\\left(\\|f\\|_2^2+\\int_{\\mathbb{V^*}}R(\\xi)|\\hat{f}(\\xi)|^2d\\xi\\right)^{1\/2}\n\\end{equation*}\nand $\\|\\cdot\\|_{W_{\\mathbf{v}}^{\\mathbf{m},2}(\\mathbb{V})}$ are equivalent.\n\\end{lemma}\n\\begin{proof}\nBy an appeal to Proposition \\ref{prop:OperatorRepresentation} and Lemma \\ref{lem:Scaling}, we obtain positive constants $C$ and $C'$ for which\n\\begin{equation*}\nC(1+R(\\xi))\\leq \\sum_{|\\alpha:\\mathbf{m}|\\leq 1}\\xi^{2\\alpha}\\leq C'( 1+R(\\xi))\n\\end{equation*}\nfor all $\\xi\\in\\mathbb{V}^*$. With this estimate, the result follows directly from Lemma \\ref{charsobolevbyfourierlem} using the Fourier transform.\n\\end{proof}\n\n\\noindent Returning to the general situation, let $\\Omega\\subseteq \\mathbb{V}$ be a non-empty open set. 
For $f\\in L^2(\\Omega)$, define $f_*\\in L^2(\\mathbb{V})$ by\n\\begin{equation}\\label{eq:ExtensionDefinition}\nf_*(x)=\n\\begin{cases}\nf(x)&\\mbox{ if }x\\in\\Omega\\\\\n0&\\mbox{ otherwise.}\n\\end{cases}\n\\end{equation}\nOf course, $\\|f\\|_{L^2(\\Omega)}=\\|f_*\\|_{L^2(\\mathbb{V})}$. The following lemma shows that $W_{\\mathbf{v},0}^{\\mathbf{m},2}(\\Omega)$ is continuously embedded in $W_{\\mathbf{v}}^{\\mathbf{m},2}(\\mathbb{V})$:\n\n\\begin{lemma}\\label{lem:SobolevEmbedding}\nFor any $f\\in W_{\\mathbf{v},0}^{\\mathbf{m},2}(\\Omega)$, $f_*\\in W_{\\mathbf{v}}^{\\mathbf{m},2}(\\mathbb{V})$ and\n\\begin{equation*}\n\\|f\\|_{W_{\\mathbf{v}}^{\\mathbf{m},2}(\\Omega)}=\\|f_*\\|_{W_{\\mathbf{v}}^{\\mathbf{m},2}(\\mathbb{V})}.\n\\end{equation*}\n\\end{lemma}\n\\begin{proof}\nLet $f\\in W_{\\mathbf{v},0}^{\\mathbf{m},2}(\\Omega)$ and let $\\{f_n\\}\\subseteq C_0^{\\infty}(\\Omega)$ for which $\\|f_n-f\\|_{W_{\\mathbf{v}}^{\\mathbf{m},2}(\\Omega)}\\rightarrow 0$ as $n\\rightarrow \\infty$. Then for any $\\phi\\in C_0^{\\infty}(\\mathbb{V})$ and multi-index $\\alpha$ for which $|\\alpha:\\mathbf{m}|\\leq 1$,\n\\begin{multline*}\n\\int_{\\mathbb{V}}f_*(D_{\\mathbf{v}}^{\\alpha}\\phi) dx=\\int_{\\Omega}f (D_{\\mathbf{v}}^{\\alpha}\\phi) dx=\\lim_{n\\rightarrow \\infty}\\int_{\\Omega}f_n (D_{\\mathbf{v}}^{\\alpha}\\phi) dx\\\\\n=\\lim_{n\\rightarrow \\infty}(-1)^{|\\alpha|}\\int_{\\Omega}(D_{\\mathbf{v}}^{\\alpha} f_n)\\phi dx=(-1)^{|\\alpha|}\\int_{\\Omega} (D_{\\mathbf{v}}^\\alpha f )\\phi dx\\\\\n=(-1)^{|\\alpha|}\\int_{\\mathbb{V}}(D_{\\mathbf{v}}^{\\alpha}f)_*\\phi dx\n\\end{multline*}\nwhere we used the fact that each $f_n$ has compact support in $\\Omega$ and thus partial integration produces no boundary terms. 
Thus for each such $\\alpha$, $D_{\\mathbf{v}}^{\\alpha}f_*=(D_{\\mathbf{v}}^{\\alpha}f)_*\\in L^2(\\mathbb{V})$ and $\\|D_{\\mathbf{v}}^{\\alpha}f\\|_{L^2(\\Omega)}=\\|D_{\\mathbf{v}}^{\\alpha}f_*\\|_{L^2(\\mathbb{V})}$ from which the result follows.\n\\end{proof}\n\n\n\n\\noindent We now turn to positive-homogeneous operators, viewed in the $L^2$ setting, and their associated sesquilinear forms. Let $\\Omega\\subseteq \\mathbb{V}$ be a non-empty open set, let $\\Lambda$ be a positive-homogeneous operator on $\\mathbb{V}$ with symbol $R$, and let $\\mathbf{m}\\in\\mathbb{N}_+^d$ and $\\mathbf{v}$ be the basis of $\\mathbb{V}$ guaranteed by Proposition \\ref{prop:OperatorRepresentation}. Define\n\\begin{equation*}\n\\mbox{\\rm Dom}(Q_{\\Lambda_\\Omega})=W_{\\mathbf{v},0}^{\\mathbf{m},2}(\\Omega)\n\\end{equation*}\nand for each $f,g\\in \\mbox{\\rm Dom}(Q_{\\Lambda_\\Omega})$, put\n\\begin{equation*}\nQ_{\\Lambda_\\Omega}(f,g)=\\int_{\\mathbb{V}^*}R(\\xi)\\widehat{f_*}(\\xi)\\overline{\\widehat{g_*}(\\xi)}d\\xi.\n\\end{equation*}\n\n\\begin{proposition}\\label{prop:DirichletOperator}\nThe restriction $\\Lambda\\vert_{C_0^{\\infty}(\\Omega)}$ extends to a non-negative self-adjoint operator on $L^2(\\Omega)$, denoted by $\\Lambda_{\\Omega}$. Its associated symmetric sesquilinear form is $Q_{\\Lambda_\\Omega}$, with $\\mbox{\\rm Dom}(Q_{\\Lambda_\\Omega})=W_{\\mathbf{v},0}^{\\mathbf{m},2}(\\Omega)=\\mbox{\\rm Dom}(\\Lambda_{\\Omega}^{1\/2})$. Moreover, $C_0^{\\infty}(\\Omega)$ is a core for $Q_{\\Lambda_\\Omega}$.\n\\end{proposition}\n\\begin{remark}\nThe self-adjoint operator $\\Lambda_{\\Omega}$ is the Dirichlet operator on $\\Omega$, i.e., the operator associated with Dirichlet boundary conditions.\n\\end{remark}\n\n\\begin{comment}\n\\begin{remark}\nOne can show that $\\mbox{\\rm Dom}(\\Lambda_{\\Omega})=W_{\\mathbf{v},0}^{2\\mathbf{m},2}(\\Omega)$. 
This fact however isn't needed for our development.\n\\end{remark}\n\\end{comment}\n\n\\begin{proof}[Proof of Proposition \\ref{prop:DirichletOperator}]\nIn view of Lemma \\ref{charsobolevbyfourierlem2}, there are constants $C,C'>0$ for which\n\\begin{equation*}\nC\\|f\\|_{W_{\\mathbf{v}}^{\\mathbf{m},2}(\\mathbb{V})}\\leq\\left ( \\|f\\|_{L^2(\\mathbb{V})}^2+\\int_{\\mathbb{V}^*}R(\\xi)|\\hat{f}(\\xi)|^2d\\xi\\right)^{1\/2}\\leq C'\\|f\\|_{W_{\\mathbf{v}}^{\\mathbf{m},2}(\\mathbb{V})}\n\\end{equation*}\nfor all $f\\in W_{\\mathbf{v}}^{\\mathbf{m},2}(\\mathbb{V})$. Thus by Lemma \\ref{lem:SobolevEmbedding},\n\\begin{equation*}\nC\\|f\\|_{W_{\\mathbf{v}}^{\\mathbf{m},2}(\\Omega)}\\leq \\left(\\|f\\|_{L^2(\\Omega)}^2+Q_{\\Lambda_\\Omega}(f)\\right)^{1\/2}\\leq C'\\|f\\|_{W_{\\mathbf{v}}^{\\mathbf{m},2}(\\Omega)}\n\\end{equation*}\nfor all $f\\in W_{\\mathbf{v},0}^{\\mathbf{m},2}(\\Omega)$. It follows that\n\\begin{equation*}\n \\|f\\|'_{\\Omega}:=\\left (\\|f\\|_{L^2(\\Omega)}^2+Q_{\\Lambda_\\Omega}(f)\\right)^{1\/2}\n\\end{equation*}\ndefines a norm on $W_{\\mathbf{v},0}^{\\mathbf{m},2}(\\Omega)$, equivalent to the norm $\\|\\cdot\\|_{ W_{\\mathbf{v}}^{\\mathbf{m},2}(\\Omega)}$. From this we can also conclude that $Q_{\\Lambda_\\Omega}$ is a \\emph{bona fide} sesquilinear form with domain $\\mbox{\\rm Dom}(Q_{\\Lambda_\\Omega})=W_{\\mathbf{v},0}^{\\mathbf{m},2}(\\Omega)$. \n\nIn view of the positive-definiteness of $R$, it is easy to see that $Q_{\\Lambda_\\Omega}$ is symmetric, positive-definite (in the sense of forms) and densely defined. We claim that $Q_{\\Lambda_\\Omega}$ is closed. Indeed, let $\\{f_n\\}\\subseteq W_{\\mathbf{v},0}^{\\mathbf{m},2}(\\Omega)$ be a $Q_{\\Lambda_\\Omega}$-Cauchy sequence and such that $f_n\\rightarrow f$ in $L^2(\\Omega)$ for some $f\\in L^2(\\Omega)$. 
Because the norms $\\|\\cdot\\|'_{\\Omega}$ and $\\|\\cdot\\|_{ W_{\\mathbf{v}}^{\\mathbf{m},2}(\\Omega)}$ are equivalent, we know that $\\{f_n\\}$ is also a Cauchy sequence in $W_{\\mathbf{v},0}^{\\mathbf{m},2}(\\Omega)$ and so it converges. Moreover, as the topology on $ W_{\\mathbf{v},0}^{\\mathbf{m},2}(\\Omega)$ is finer than the topology induced by the $L^2(\\Omega)$ norm, we can conclude that $f\\in W_{\\mathbf{v},0}^{\\mathbf{m},2}(\\Omega)$ and $f_n\\rightarrow f$ in $ W_{\\mathbf{v},0}^{\\mathbf{m},2}(\\Omega)$. By again appealing to the equivalence \nof norms, it follows that $Q_{\\Lambda_\\Omega}$ is closed and, upon noting that $C_0^{\\infty}(\\Omega)$ is dense in $W_{\\mathbf{v},0}^{\\mathbf{m},2}(\\Omega)$, it is evident that $C_0^{\\infty}(\\Omega)$ is a core for $Q_{\\Lambda_\\Omega}$.\n\nIn view of the theory of symmetric sesquilinear forms, $Q_{\\Lambda_\\Omega}$ has a unique associated non-negative self-adjoint operator $\\Lambda_{\\Omega}$ with $\\mbox{\\rm Dom}(\\Lambda_{\\Omega}^{1\/2})=\\mbox{\\rm Dom}(Q_{\\Lambda_\\Omega})$. Also, because\n\\begin{equation*}\n\\langle \\Lambda f,g\\rangle_{\\Omega}=\\langle \\Lambda f_*,g_*\\rangle=\\int_{\\mathbb{V}^*}R(\\xi)\\hat{f}_*(\\xi)\\overline{\\hat{g}_*(\\xi)}d\\xi=Q_{\\Lambda_\\Omega}(f,g)=\\langle f,\\Lambda g\\rangle_{\\Omega}\n\\end{equation*}\nfor all $f,g\\in C_0^{\\infty}(\\Omega)$, $\\Lambda_{\\Omega}$ must be a self-adjoint extension of $\\Lambda\\vert_{C_0^{\\infty}(\\Omega)}$.\n\\end{proof}\n\n\\begin{remark}\nIt should be pointed out that $\\Lambda\\vert_{C_0^{\\infty}(\\Omega)}$ is not generally essentially self-adjoint; for instance one can consider the Dirichlet and Neumann operators when $\\Omega$ is, say, a bounded open non-empty subset of $\\mathbb{V}$. \n\\end{remark}\n\n\\noindent Our final proposition of this section addresses the essential self-adjointness of $\\Lambda$ in the case that $\\Omega=\\mathbb{V}$. 
The proof is included for the convenience of the reader.\n\\begin{proposition}\\label{prop:esa}\nThe operator $\\Lambda\\vert_{C_0^\\infty(\\mathbb{V})}$ is essentially self-adjoint and its closure $\\Lambda=\\Lambda_{\\mathbb{V}}$ has\n\\begin{equation*}\n\\mbox{\\rm Dom}(\\Lambda)=W_{\\mathbf{v}}^{2\\mathbf{m},2}(\\mathbb{V}).\n\\end{equation*}\n\\end{proposition}\n\\begin{proof}\nWe first show the essential self-adjointness of $\\Lambda\\vert_{C_0^\\infty(\\mathbb{V})}$. To this end, let $f\\in \\mbox{Ran}(\\Lambda\\vert_{C_0^{\\infty}(\\mathbb{V})}\\pm i)^\\perp$ and, in view of the unitarity of the Fourier transform, observe that\n\\begin{equation*}\n0=\\langle f,(\\Lambda\\pm i)g\\rangle=\\langle \\hat{f},(R\\pm i)\\hat{g}\\rangle_*=\\langle (R\\mp i)\\hat{f},\\hat{g}\\rangle_*\n\\end{equation*}\nfor all $g\\in C_0^{\\infty}(\\mathbb{V})$. We know that $\\mathcal{F}(C_0^{\\infty}(\\mathbb{V}))$ is dense in $L^2(\\mathbb{V}^*)$ and so it follows that $(R(\\xi)\\mp i)\\hat{f}(\\xi)=0$ almost everywhere. Using the fact that $R$ is real-valued, we conclude that $f=0$ and so $\\mbox{Ran}(\\Lambda\\vert_{C_0^{\\infty}(\\mathbb{V})}\\pm i)^\\perp=\\{0\\}$. This implies that $\\mbox{Ran}(\\Lambda\\vert_{C_0^{\\infty}(\\mathbb{V})}\\pm i)$ is dense in $L^2(\\mathbb{V})$ and thus $\\Lambda\\vert_{C_0^{\\infty}(\\mathbb{V})}$ is essentially self-adjoint in view of von Neumann's criterion. We denote this unique self-adjoint extension by $\\Lambda$.\n\nWe now characterize the domain of $\\Lambda$. Let $f\\in \\mbox{\\rm Dom}(\\Lambda)$ and take a sequence $\\{f_n\\}\\subseteq C^{\\infty}_0(\\mathbb{V})$ for which $f_n\\to f$ and $\\Lambda f_n\\to \\Lambda f$ in the sense of $L^2(\\mathbb{V})$. 
For any multi-index $\\alpha$ for which $|\\alpha:2\\mathbf{m}|\\leq 1$, an appeal to Lemma \\ref{lem:Scaling} gives a positive constant $C_{\\alpha}$ for which \n\\begin{equation*}\n|\\xi^{\\alpha}|\\leq C_{\\alpha}(R(\\xi)+1)\n\\end{equation*}\nfor all $\\xi\\in\\mathbb{V}^*$ where $\\xi^{\\alpha}=\\xi_{\\mathbf{v}^*}^\\alpha$ as in \\eqref{eq:Monomial}. Consequently, for each pair of natural numbers $n$ and $m$,\n\\begin{eqnarray*}\n\\|D_{\\mathbf{v}}^{\\alpha}f_n-D_{\\mathbf{v}}^{\\alpha}f_m\\|_2^2&=&\\int_{\\mathbb{V}}|D_{\\mathbf{v}}^{\\alpha}(f_n-f_m)(x)|^2\\,dx\\\\\n&=&\\int_{\\mathbb{V}^*}|\\xi^{\\alpha}(f_n-f_m)\\hat{\\,\\,}(\\xi)|^2\\,d\\xi\\\\\n&\\leq &C_{\\alpha}^2\\int_{\\mathbb{V}^*}|(R(\\xi)+1)(f_n-f_m)\\hat{\\,\\,}(\\xi)|^2\\,d\\xi\\\\\n&\\leq &C_\\alpha^2\\|(\\Lambda+1)(f_n-f_m)\\|_2^2\n\\end{eqnarray*}\nwhere we have used the fact that $\\{f_n\\}\\subseteq C_0^\\infty(\\mathbb{V})$. It now follows from the way the sequence $\\{f_n\\}$ was chosen that $\\{D_{\\mathbf{v}}^\\alpha f_n\\}$ is a Cauchy sequence in $L^2(\\mathbb{V})$ and so it converges to some limit $g_\\alpha$. Notice that, for each $\\phi\\in C_0^\\infty(\\mathbb{V})$, \n\\begin{eqnarray*}\n\\lefteqn{\\int_{\\mathbb{V}}g_\\alpha(x)\\phi(x)\\,dx=\\lim_{n\\to\\infty}\\int_{\\mathbb{V}}D_{\\mathbf{v}}^{\\alpha}f_n(x)\\phi(x)\\,dx}\\\\\n&=&\\lim_{n\\to\\infty}(-1)^{|\\alpha|}\\int_{\\mathbb{V}}f_n(x) D_{\\mathbf{v}}^{\\alpha}\\phi(x)\\,dx=(-1)^{|\\alpha|}\\int_{\\mathbb{V}}f(x)D_{\\mathbf{v}}^{\\alpha}\\phi(x)\\,dx\n\\end{eqnarray*}\nand thus $D_{\\mathbf{v}}^{\\alpha}f=g_\\alpha\\in L^2(\\mathbb{V})$. 
Since this is true for each $\\alpha$ such that $|\\alpha:2\\mathbf{m}|\\leq 1$, we have $f\\in W_{\\mathbf{v}}^{2\\mathbf{m},2}(\\mathbb{V})$.\n\nConversely, let $f\\in W_{\\mathbf{v}}^{2\\mathbf{m},2}(\\mathbb{V})$ and, given the density of $C_0^\\infty(\\mathbb{V})$ in $W_{\\mathbf{v}}^{2\\mathbf{m},2}(\\mathbb{V})$, let $\\{f_n\\}$ be a sequence of $C^\\infty_0$ functions for which $f_n \\to f$ in $W_{\\mathbf{v}}^{2\\mathbf{m},2}(\\mathbb{V})$. Consequently, we have $D_{\\mathbf{v}}^{\\alpha}f_n \\to D_{\\mathbf{v}}^{\\alpha}f$ in $L^2(\\mathbb{V})$ for each multi-index $\\alpha$ for which $|\\alpha:2\\mathbf{m}|\\leq 1$. In particular, $f_n\\to f$ and \n\\begin{equation*}\n\\lim_{n\\to\\infty}\\Lambda f_n=\\lim_{n\\to\\infty}\\sum_{|\\alpha:\\mathbf{m}|=2}a_{\\alpha}D_{\\mathbf{v}}^{\\alpha}f_n=\\sum_{|\\alpha:\\mathbf{m}|=2}a_{\\alpha}D_{\\mathbf{v}}^{\\alpha}f\n\\end{equation*}\nin $L^2(\\mathbb{V})$. As $\\Lambda$ is self-adjoint, it is closed and so necessarily $f\\in \\mbox{\\rm Dom}(\\Lambda)$. \n\\end{proof}\n\n\\section{Ultracontractivity and Sobolev-type inequalities}\n\nIn this section we show that (self-adjoint) positive-homogeneous operators have many desirable properties shared by elliptic operators. In particular, for a self-adjoint positive-homogeneous operator $\\Lambda$, we will prove corresponding Nash and Gagliardo-Nirenberg inequalities. \\\\\n\n\\noindent Let $\\Lambda$ be a self-adjoint positive-homogeneous operator on $\\mathbb{V}$ with symbol $R$ and homogeneous order $\\mu_{\\Lambda}$. In view of Proposition \\ref{prop:DirichletOperator}, $\\Lambda$ determines a self-adjoint positive-homogeneous operator on $L^2(\\mathbb{V})$, $\\Lambda_{\\mathbb{V}}$. By an abuse of notation we shall write $\\Lambda=\\Lambda_{\\mathbb{V}}$ and $Q_{\\Lambda_{\\mathbb{V}}}=Q_{\\Lambda}$. Using the spectral calculus, define the semigroup $\\{e^{-t\\Lambda}\\}$; this is a $C_0$-contraction semigroup of self-adjoint operators on $L^2(\\mathbb{V})$. 
It should be no surprise that the semigroup $e^{-t\\Lambda}$, defined here by the spectral calculus, coincides with that given by the Fourier transform; this, in particular, is verified by the following lemma.\n\n\\begin{lemma}\\label{lem:Ultracontractivity}\nFor $f\\in L^2(\\mathbb{V})$ and $t>0$,\n\\begin{equation}\\label{convolutionsemigroupeq}\n\\left(e^{-t\\Lambda}f\\right)(x)=\\int_{\\mathbb{V}}K_{\\Lambda}(t,x-y)f(y)dy\n\\end{equation}\nalmost everywhere, where $K_{\\Lambda}(t,x)=(e^{-tR})^{\\vee}(x)\\in\\mathcal{S}(\\mathbb{V})$. For each $t>0$, this formula extends $e^{-t\\Lambda}$ to a bounded operator from $L^p(\\mathbb{V})$ to $L^q(\\mathbb{V})$ for any $1\\leq p\\leq q\\leq \\infty$. Furthermore, for each $1\\leq p\\leq q\\leq\\infty$, there exists $C_{p,q}>0$ such that\n\\begin{equation*}\n\\|e^{-t\\Lambda}\\|_{p\\rightarrow q}\\leq \\frac{C_{p,q}}{t^{\\mu_{\\Lambda}(1\/p-1\/q)}}\n\\end{equation*}\nfor all $t>0$. In particular, the semigroup is ultracontractive with\n\\begin{equation*}\n\\|e^{-t\\Lambda}\\|_{2\\rightarrow\\infty}\\leq \\frac{C_{2,\\infty}}{t^{\\mu_{\\Lambda}\/2}}\n\\end{equation*}\nfor all $t>0$. \n\\end{lemma}\n\n\\begin{remark}\\label{rmk:Ultracontractivity}\nA $C_0$-semigroup $\\{T_t\\}$ of self-adjoint operators on $L^2$ is said to be ultracontractive if, for each $t>0$, $T_t$ is a bounded operator from $L^2$ to $L^\\infty$. We note that this condition immediately implies (by duality) that, for each $t>0$, $T_t$ is a bounded operator from $L^1$ to $L^\\infty$ and this is often (though not exclusively, e.g., \\cite{Gross1993}) taken to be the definition of ultracontractivity, see \\cite{Coulhon1996}. Our terminology is not meant to imply (as it does in the case of Markovian semigroups) that the semigroup is contractive on $L^p$ for any $p$; it usually isn't.\n\\end{remark}\n\n\\begin{proof}[Proof of Lemma \\ref{lem:Ultracontractivity}]\nWe first verify the representation formula \\eqref{convolutionsemigroupeq}. 
Using the Fourier transform, one sees easily that convolution with $K_{\\Lambda}(t,\\cdot)$ defines a $C_0$-contraction semigroup on $L^2(\\mathbb{V})$ of self-adjoint operators. Denote this semigroup and its corresponding generator by $T_t$ and $A$ respectively, and note that $A$ is necessarily self-adjoint. For each $f\\in C_0^{\\infty}(\\mathbb{V})$, observe that\n\\begin{equation*}\n\\lim_{t\\rightarrow 0}\\left\\|t^{-1}\\left(T_tf-f\\right)+\\Lambda f\\right\\|_2=\\lim_{t\\rightarrow 0}\\left\\|\\left(t^{-1}(e^{-tR(\\xi)}-1)+R(\\xi)\\right )\\hat{f}(\\xi)\\right\\|_{2^*}=0\n\\end{equation*}\nwhere we have appealed to the dominated convergence theorem and the fact that $\\mathcal{F}(\\Lambda f)=R\\hat{f}$. Consequently, $C_0^{\\infty}(\\mathbb{V})\\subseteq\\mbox{\\rm Dom}(A)$ and $Af=-\\Lambda f$ for all $f\\in C_{0}^{\\infty}(\\mathbb{V})$. In view of Proposition \\ref{prop:esa}, $\\Lambda\\vert_{C_0^{\\infty}(\\mathbb{V})}$ is essentially self-adjoint and so it must be the case that $A=-\\Lambda$ and hence $T_t=e^{-\\Lambda t}$ as claimed.\n\n\nFinally, we establish the $L^p\\rightarrow L^q$ estimates for $\\{e^{-t\\Lambda}\\}$. In view of the representation \\eqref{convolutionsemigroupeq} and Young's inequality for convolution,\n\\begin{equation*}\n\\|e^{-t\\Lambda}\\|_{p\\to q}\\leq \\|K_{\\Lambda}(t,\\cdot)\\|_s\n\\end{equation*}\nwhere $1-\\frac{1}{s}=\\frac{1}{p}-\\frac{1}{q}$. For $t>0$ and $E\\in \\Exp(\\Lambda)$, we have\n\\begin{eqnarray*}\nK_\\Lambda(t,x)&=&\\int_{\\mathbb{V}^*}e^{-tR(\\xi)}e^{-i\\xi(x)}\\,d\\xi=\\int_{\\mathbb{V}^*}e^{-R(t^{E^*}\\xi)}e^{-i\\xi(x)}\\,d\\xi\\\\\n&=&t^{-\\tr E^*}\\int_{\\mathbb{V}^*}e^{-R(\\xi)}e^{-i (t^{-E^*}\\xi)(x)}\\,d\\xi\\\\\n&=&t^{-\\mu_{\\Lambda}}K_{\\Lambda}(1,t^{-E}x)\n\\end{eqnarray*}\nfor $x\\in\\mathbb{V}$ where we made a change of variables $\\xi\\mapsto t^{-E^*}\\xi$. 
By making the analogous change of variables $x\\mapsto t^Ex$, we obtain\n\\begin{eqnarray*}\n\\lefteqn{\\|K_\\Lambda(t,\\cdot)\\|_s=t^{-\\mu_\\Lambda}\\|K_{\\Lambda}(1,t^{-E}(\\cdot))\\|_s}\\\\\n&=&t^{-\\mu_{\\Lambda}+\\mu_{\\Lambda}\/s}\\|K_{\\Lambda}(1,\\cdot)\\|_s=t^{-\\mu_{\\Lambda}(1\/p-1\/q)}\\|K_{\\Lambda}(1,\\cdot)\\|_s\n\\end{eqnarray*}\nfor $t>0$. The desired result follows by taking $C_{p,q}=\\|K_{\\Lambda}(1,\\cdot)\\|_s$ where $s=(1+1\/q-1\/p)^{-1}$.\n\\end{proof}\n\n\n\n\n\\begin{proposition}[Nash's inequality]\nLet $\\Omega$ be a non-empty open subset of $\\mathbb{V}$ and let $\\Lambda$ be a symmetric positive-homogeneous operator with homogeneous order $\\mu_{\\Lambda}$. We consider the self-adjoint operator $\\Lambda_{\\Omega}$ and its form $Q_{\\Lambda_\\Omega}$ given by Proposition \\ref{prop:DirichletOperator}. There exists $C>0$ such that\n\\begin{equation*}\n\\|f\\|_{L^2(\\Omega)}^{1+1\/\\mu_{\\Lambda}}\\leq C Q_{\\Lambda_{\\Omega}}(f)^{1\/2}\\|f\\|_{L^1(\\Omega)}^{1\/\\mu_{\\Lambda}}\n\\end{equation*}\nfor all $f\\in \\mbox{\\rm Dom}(Q_{\\Lambda_{\\Omega}})\\cap L^1(\\Omega)$.\n\\end{proposition}\n\\begin{proof}\nIt suffices to prove the estimate when $\\Omega=\\mathbb{V}$, for the general result follows from the isometric embedding of $W_{\\mathbf{v},0}^{\\mathbf{m},2}(\\Omega)$ into $W_{\\mathbf{v}}^{\\mathbf{m},2}(\\mathbb{V})$, c.f., Lemma \\ref{lem:SobolevEmbedding}, and that of $L^1(\\Omega)$ into $L^1(\\mathbb{V})$. Again, we will denote $\\Lambda_{\\mathbb{V}}$ and $Q_{\\Lambda_\\mathbb{V}}$ by $\\Lambda$ and $Q_\\Lambda$ respectively. In view of Lemma \\ref{lem:Ultracontractivity}, the self-adjointness of $\\Lambda$ and duality give $C'>0$ such that\n\\begin{equation*}\n\\|e^{-t\\Lambda}\\|_{1\\rightarrow 2}\\leq \\frac{C'}{t^{\\mu_{\\Lambda}\/2}}\n\\end{equation*}\nfor all $t>0$. 
Thus for any $f\\in \\mbox{\\rm Dom}(Q_{\\Lambda})\\cap L^1(\\mathbb{V})$,\n\\begin{eqnarray}\\label{ultraimplynasheq}\\nonumber\n\\|f\\|_2&\\leq&\\|e^{-t\\Lambda}f-f\\|_2+\\|e^{-t\\Lambda}f\\|_2\\\\\\nonumber\n&\\leq&\\left\\|\\int_0^{t}\\frac{d}{ds}e^{-s\\Lambda}f\\,ds\\right\\|_2+\\frac{C'}{t^{\\mu_{\\Lambda}\/2}}\\|f\\|_1\\\\\\nonumber\n&\\leq&\\int_{0}^{t}\\|\\Lambda^{1\/2}e^{-s\\Lambda}\\Lambda^{1\/2}f\\|_2\\,ds+\\frac{C'}{t^{\\mu_{\\Lambda}\/2}}\\|f\\|_1\\\\\n&\\leq& \\left(\\int_0^t\\|\\Lambda^{1\/2}e^{-s\\Lambda}\\|_{2\\rightarrow 2}\\,ds\\right)Q_{\\Lambda}(f)^{1\/2}+\\frac{C'}{t^{\\mu_{\\Lambda}\/2}}\\|f\\|_1\\\\\\nonumber\n\\end{eqnarray}\nfor all $t>0$. By virtue of the spectral theorem, we have\n\\begin{equation*}\n\\|\\Lambda^{1\/2}e^{-s\\Lambda}\\|_{2\\rightarrow 2}\\leq \\sup_{\\lambda>0}|\\lambda^{1\/2}e^{-s\\lambda}|\\leq \\frac{C''}{s^{1\/2}}\n\\end{equation*}\nfor all $s>0$ and therefore\n\\begin{equation*}\n\\|f\\|_2\\leq 2C''t^{1\/2}Q_{\\Lambda}(f)^{1\/2}+\\frac{C'}{t^{\\mu_{\\Lambda}\/2}}\\|f\\|_1\n\\end{equation*}\nfor all $t>0$. The result follows by optimizing the above inequality and noting that $\\mu_{\\Lambda}>0$. \n\\end{proof}\n\\noindent Suppose additionally that $\\mu_{\\Lambda}<1$. Using ultracontractivity directly, a calculation analogous to \\eqref{ultraimplynasheq} yields\n\\begin{eqnarray*}\n\\|f\\|_{\\infty}&\\leq&\\int_0^t\\|e^{-s\\Lambda\/2}\\|_{2\\rightarrow\\infty}\\|\\Lambda^{1\/2}e^{-s\\Lambda\/2}\\|_{2\\rightarrow 2}Q_{\\Lambda}(f)^{1\/2}\\,ds+\\frac{C}{t^{\\mu_{\\Lambda}\/2}}\\|f\\|_2\\\\\n&\\leq&C't^{(1-\\mu_{\\Lambda})\/2}Q_{\\Lambda}(f)^{1\/2}+\\frac{C}{t^{\\mu_{\\Lambda}\/2}}\\|f\\|_2\\\\\n\\end{eqnarray*}\nfor $f\\in C_0^{\\infty}(\\Omega)$ and $t>0$. 
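\\noindent To make the optimization in these arguments explicit (a standard balancing step; the constants are not tracked), write $a=Q_{\\Lambda}(f)^{1\/2}$ and $b=\\|f\\|_2$ and minimize $at^{(1-\\mu_{\\Lambda})\/2}+bt^{-\\mu_{\\Lambda}\/2}$ over $t>0$. The two terms balance when $t^{1\/2}=b\/a$, and this choice gives\n\\begin{equation*}\na\\Big(\\frac{b}{a}\\Big)^{1-\\mu_{\\Lambda}}+b\\Big(\\frac{b}{a}\\Big)^{-\\mu_{\\Lambda}}=2a^{\\mu_{\\Lambda}}b^{1-\\mu_{\\Lambda}}=2\\,Q_{\\Lambda}(f)^{\\mu_{\\Lambda}\/2}\\|f\\|_2^{1-\\mu_{\\Lambda}}.\n\\end{equation*} 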
Upon optimizing with respect to $t$ and using the density of $C_0^{\\infty}(\\Omega)$ in $W_{\\mathbf{v},0}^{\\mathbf{m},2}(\\Omega)$, we obtain the following lemma:\n \n\\begin{lemma}\\label{nashlikelem}\nIf $\\mu_{\\Lambda}<1$, then there is $C>0$ such that every $f\\in W_{\\mathbf{v},0}^{\\mathbf{m},2}(\\Omega)$ belongs to $L^\\infty(\\Omega)$ and\n\\begin{equation*}\n\\|f\\|_{L^\\infty(\\Omega)}\\leq CQ_{\\Lambda_{\\Omega}}(f)^{\\mu_{\\Lambda}\/2}\\|f\\|_{L^2(\\Omega)}^{1-\\mu_{\\Lambda}}.\n\\end{equation*}\n\\end{lemma}\n\n\n\\noindent Lemma \\ref{nashlikelem} is the analog of the Gagliardo-Nirenberg inequality in our setting.\n\n\n\\section{Fundamental Hypotheses}\\label{sec:FundamentalHypotheses}\n\nLet $\\Omega$ be a non-empty open subset of $\\mathbb{V}$. In this section, we will introduce three hypotheses concerning a symmetric sesquilinear form $Q$ (also called a Hermitian form) defined on $C_0^\\infty(\\Omega)$ viewed as a subspace of the Hilbert space $L^2(\\Omega)$. The first hypothesis will guarantee that the form is closable and that its closure is associated to a self-adjoint operator $H$ on $L^2(\\Omega)$. It is under these hypotheses that we will be able to establish the existence of the heat kernel for $H$ and prove corresponding off-diagonal estimates. Our construction is based on E. B. Davies' article \\cite{Davies1995}, wherein a general class of higher order self-adjoint uniformly elliptic operators on $\\mathbb{R}^d$ is studied. In what follows (and for the next three sections) $\\|\\cdot\\|_2$ denotes the $L^2(\\Omega)$ norm and $\\langle\\cdot,\\cdot\\rangle$ denotes its inner product. All mentions of a positive-homogeneous operator $\\Lambda$ refer to the self-adjoint operator $\\Lambda_{\\Omega}$ of Proposition \\ref{prop:DirichletOperator}. Correspondingly, $Q_{\\Lambda_{\\Omega}}$ is denoted by $Q_{\\Lambda}$.\\\\ \n\n\n\n\n\n\\begin{hypothesis}\\label{hyp:Garding}\nLet $Q$ be as above. 
There exists a self-adjoint positive-homogeneous operator $\\Lambda$ with corresponding symmetric sesquilinear form $Q_{\\Lambda}$ such that\n\\begin{equation}\\label{eq:Garding}\n\\frac{1}{2}Q_{\\Lambda}(f)\\leq Q(f)\\leq C(Q_{\\Lambda}(f)+\\|f\\|_2^2)\n\\end{equation}\nfor all $f\\in C_0^{\\infty}(\\Omega)$, where $C\\geq 1$.\n\\end{hypothesis}\n\n\\noindent As noted above, Hypothesis \\ref{hyp:Garding} guarantees that $Q$ is bounded below and therefore closable. Its closure, which we still denote by $Q$, uniquely defines a self-adjoint operator $H$; we refer to $H$ as the operator associated to $Q$. Hypothesis \\ref{hyp:Garding} is a comparability statement between $H$ and the positive-homogeneous operator $\\Lambda$; for this reason, we say that $\\Lambda$ is a \\emph{reference operator} for $H$ (and for $Q$). In this way, \\eqref{eq:Garding} is analogous to G\\r{a}rding's inequality, which compares second-order elliptic operators to the Laplacian.\n\n\\begin{remark} Necessarily, $C_0^\\infty(\\Omega)$ is a core for $Q$ and we have \n\\begin{equation*}\\mbox{\\rm Dom}(H)\\cup C_0^\\infty(\\Omega)\\subseteq \\mbox{\\rm Dom}(Q)\\subseteq L^2(\\Omega).\n\\end{equation*}\nIt may however be the case that $\\mbox{\\rm Dom}(H)\\cap C_0^\\infty(\\Omega)=\\{0\\}$, cf.\\ \\cite{Davies1997}.\n\\end{remark}\n\n\\noindent The inequality \\eqref{eq:Garding} further ensures that $\\mbox{\\rm Dom}(Q)=\\mbox{\\rm Dom}(Q_{\\Lambda})$ and that $H\\geq 0$. In view of Proposition \\ref{prop:DirichletOperator}, there exist $\\mathbf{m}\\in\\mathbb{N}^d$ and a basis $\\mathbf{v}$ of $\\mathbb{V}$ such that \n\\begin{equation*}\n\\mbox{\\rm Dom}(Q)=\\mbox{\\rm Dom}(Q_{\\Lambda})=W_{\\mathbf{v},0}^{\\mathbf{m},2}(\\Omega) \n\\end{equation*}\nand, because $C_0^\\infty(\\Omega)$ is dense in $W_{\\mathbf{v},0}^{\\mathbf{m},2}(\\Omega)$, \\eqref{eq:Garding} holds for all $f$ in this common domain. 
These remarks are summarized in the following lemma:\n\n\\begin{lemma}\\label{gardingsobolevlem}\nLet $Q$ satisfy Hypothesis \\ref{hyp:Garding} with reference operator $\\Lambda$. The associated operator $H$ is non-negative and\n\\begin{equation*}\n\\mbox{\\rm Dom}(Q)=W_{\\mathbf{v},0}^{\\mathbf{m},2}(\\Omega)\n\\end{equation*}\nwhere $\\mathbf{m}$ and $\\mathbf{v}$ are those associated with $\\Lambda$ via Proposition \\ref{prop:DirichletOperator}. Moreover, \\eqref{eq:Garding} holds for all $f$ in this common domain.\n\\end{lemma}\n\n\\noindent In view of the preceding lemma, any future reference to a sesquilinear form $Q$ which satisfies Hypothesis \\ref{hyp:Garding} with reference operator $\\Lambda$ is a reference to the closed form $Q$ whose domain is characterized by Lemma \\ref{gardingsobolevlem} and has associated self-adjoint operator $H$. For the most part, as is done in \\cite{Davies1995}, we will avoid identifying $\\mbox{\\rm Dom}(H)$, as this will generally not be necessary. By virtue of Lemma \\ref{gardingsobolevlem} and Theorem 1.53 of \\cite{Ouhabaz2009}, $-H$ generates a strongly continuous semigroup $T_t=e^{-tH}$ on $L^2(\\Omega)$ which is a bounded holomorphic semigroup on a non-trivial sector of $\\mathbb{C}$. The main goal of this article is to show that the semigroup $T_t$ has an integral kernel $K_H$ satisfying off-diagonal estimates in terms of the Legendre-Fenchel transform of $R$; we refer the reader to Section 3 of \\cite{Randles2017} and Appendix \\ref{Appendix:LF} of this article for its definition and useful properties. Under the hypotheses given in this section, we obtain these off-diagonal estimates by means of Davies' perturbation method, suitably adapted to our naturally anisotropic setting. Specifically, we study perturbations of the semigroup $T_t$ formed by conjugating $T_t$ by ``nice\" operators. 
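\\noindent To orient the reader, it is instructive to carry this scheme out in the simplest model case (used here purely for illustration): $\\Omega=\\mathbb{V}=\\mathbb{R}^d$, $H=\\Lambda=-\\Delta$ and $\\phi=\\mathrm{id}$. Identifying $\\lambda\\in\\mathbb{V}^*$ with a vector in $\\mathbb{R}^d$, one computes\n\\begin{equation*}\ne^{\\lambda\\cdot x}(-\\Delta)e^{-\\lambda\\cdot x}=-\\Delta+2\\lambda\\cdot\\nabla-|\\lambda|^2\n\\end{equation*}\nand, since the drift term is skew-adjoint, $\\|e^{\\lambda\\cdot x}e^{t\\Delta}e^{-\\lambda\\cdot x}\\|_{2\\rightarrow 2}\\leq e^{t|\\lambda|^2}$. Combining this with ultracontractivity and minimizing the resulting bound $e^{\\lambda\\cdot(y-x)+t|\\lambda|^2}$ over $\\lambda$ (the minimum occurs at $\\lambda=(x-y)\/(2t)$) recovers the Gaussian factor $e^{-|x-y|^2\/4t}$; here $|v|^2\/4=R^{\\#}(v)$ is precisely the Legendre-Fenchel transform of the symbol $R(\\lambda)=|\\lambda|^2$. The construction below implements this scheme for general $H$.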
Denoting by $C^\\infty(\\Omega,\\Omega)$ the set of smooth functions mapping $\\Omega$ into itself, we set\n\\begin{equation*}\nC_{\\infty}^{\\infty}(\\Omega,\\Omega)=\\{\\phi\\in C^{\\infty}(\\Omega,\\Omega): \\partial_v^k(\\lambda(\\phi))\\in L^{\\infty}(\\Omega)\\,\\,\\forall\\, v\\in\\mathbb{V},\\,\\lambda\\in\\mathbb{V}^*\\mbox{ and }k\\geq 0\\}.\n\\end{equation*}\nGiven $\\phi\\in C_{\\infty}^{\\infty}(\\Omega,\\Omega)$ and $\\lambda\\in\\mathbb{V}^*$, we consider the smooth functions $e^{\\lambda(\\phi)}$ and $e^{-\\lambda(\\phi)}$; these will act as bounded and real-valued multiplication operators on $L^2(\\Omega)$. For each such $\\lambda$ and $\\phi$, we define the \\textit{twisted} semigroup $T^{\\lambda,\\phi}_t$ on $L^2(\\Omega)$ by\n\\begin{equation*}\nT_t^{\\lambda,\\phi}=e^{\\lambda(\\phi)}T_te^{-\\lambda(\\phi)}\n\\end{equation*}\nfor $t>0$. For any $f\\in L^2(\\Omega)$ such that $e^{-\\lambda(\\phi)}f\\in \\mbox{\\rm Dom}(H)$, observe that\n\\begin{eqnarray*}\ne^{\\lambda(\\phi)}(-H)e^{-\\lambda(\\phi)}f&=&e^{\\lambda(\\phi)}\\lim_{t\\rightarrow 0}\\frac{T_t(e^{-\\lambda(\\phi)}f)-(e^{-\\lambda(\\phi)}f)}{t}\\\\\n&=&\\lim_{t\\rightarrow 0}\\frac{T^{\\lambda,\\phi}_tf-f}{t}\n\\end{eqnarray*}\nwhere we have used the fact that $e^{\\lambda(\\phi)}$ acts as a bounded multiplication operator on $L^2(\\Omega)$. 
Upon pushing this argument a little further, one sees that $T_t^{\\lambda,\\phi}$ has infinitesimal generator $-H_{\\lambda,\\phi}=-e^{\\lambda(\\phi)}He^{-\\lambda(\\phi)}=e^{\\lambda(\\phi)}(-H)e^{-\\lambda(\\phi)}$ and \n\\begin{equation*}\n\\mbox{\\rm Dom}(H_{\\lambda,\\phi})=\\left\\{f\\in L^2(\\Omega): e^{-\\lambda(\\phi)}f\\in\\mbox{\\rm Dom}(H)\\right\\}.\n\\end{equation*}\nWe also note that, in view of the resolvent characterization of bounded holomorphic semigroups, e.g., Theorem 1.45 of \\cite{Ouhabaz2009}, it is straightforward to verify that $\\{T_t^{\\lambda,\\phi}\\}$ is a bounded holomorphic semigroup on $L^2(\\Omega)$.\n\\begin{remark}\nThis construction for $T^{\\lambda,\\phi}_t$ is similar to that done in \\cite{Davies1995}; the difference is that $\\lambda$ for us is a ``multi-parameter'' whereas in \\cite{Davies1995} it is a scalar. This construction is the basis behind the suitable adaptation of Davies' method for positive-homogeneous operators, discussed in the introductory section of this article. \n\\end{remark}\n\n\\noindent In the same spirit, define the \\textit{twisted} form $Q_{\\lambda,\\phi}$ by\n\\begin{equation*}\nQ_{\\lambda,\\phi}(f,g)=Q(e^{-\\lambda(\\phi)}f,e^{\\lambda(\\phi)}g)\n\\end{equation*}\nfor all $f,g\\in \\mbox{\\rm Dom}(Q_{\\lambda,\\phi}):=\\mbox{\\rm Dom}(Q)$. This definition is meaningful because multiplication by $e^{\\pm\\lambda(\\phi)}$ is continuous on $\\mbox{\\rm Dom}(Q)=W_{\\mathbf{v},0}^{\\mathbf{m},2}(\\Omega)$. As usual, we write $Q_{\\lambda,\\phi}(f)=Q_{\\lambda,\\phi}(f,f)$ for $f\\in\\mbox{\\rm Dom}(Q_{\\lambda,\\phi})$ and we note that $Q_{\\lambda,\\phi}$ is, in general, neither symmetric nor real-valued. 
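\\noindent As a minimal illustration of the preceding definitions (again in the model case $\\Omega=\\mathbb{R}^d$, $Q(f)=\\|\\nabla f\\|_2^2$ and $\\phi=\\mathrm{id}$, which is not assumed elsewhere), a direct computation gives\n\\begin{equation*}\nQ_{\\lambda,\\phi}(f)=\\int_{\\mathbb{R}^d}(\\nabla f-\\lambda f)\\cdot(\\overline{\\nabla f}+\\lambda\\bar{f})\\,dx=Q(f)-|\\lambda|^2\\|f\\|_2^2+2i\\,\\Im\\int_{\\mathbb{R}^d}\\bar{f}\\,\\lambda\\cdot\\nabla f\\,dx,\n\\end{equation*}\nwhich exhibits the failure of symmetry and real-valuedness. Moreover, the elementary bound $2|\\lambda|\\|f\\|_2\\|\\nabla f\\|_2\\leq \\frac{1}{4}\\|\\nabla f\\|_2^2+4|\\lambda|^2\\|f\\|_2^2$ shows that\n\\begin{equation*}\n|Q_{\\lambda,\\phi}(f)-Q(f)|\\leq \\frac{1}{4}\\left(Q(f)+20(1+|\\lambda|^2)\\|f\\|_2^2\\right),\n\\end{equation*}\nan estimate of exactly the type postulated in Hypothesis \\ref{hyp:FormCompare} below, with $R(\\lambda)=|\\lambda|^2$.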
As the next lemma shows, $H_{\\lambda,\\phi}$ corresponds to $Q_{\\lambda,\\phi}$ in the usual sense.\n\n\n\\begin{lemma}\\label{formgeneratorlambdaphilem}\nFor any $\\lambda\\in\\mathbb{V}^*$ and $\\phi\\in C_{\\infty}^{\\infty}(\\Omega,\\Omega)$,\n\\begin{equation*}\n\\mbox{\\rm Dom}(H_{\\lambda,\\phi})\\subseteq \\mbox{\\rm Dom}(Q_{\\lambda,\\phi})=\\mbox{\\rm Dom}(Q)\n\\end{equation*}\nand\n\\begin{equation*}\nQ_{\\lambda,\\phi}(f)=\\langle H_{\\lambda,\\phi}f,f\\rangle\n\\end{equation*}\nfor all $f\\in\\mbox{\\rm Dom}(H_{\\lambda,\\phi})$.\n\\end{lemma}\n\\begin{proof}\nFor $f\\in\\mbox{\\rm Dom}(H_{\\lambda,\\phi})$,\n\\begin{equation*}\n e^{-\\lambda(\\phi)}f\\in \\mbox{\\rm Dom}(H)\\subseteq\\mbox{\\rm Dom}(Q)=W_{\\mathbf{v},0}^{\\mathbf{m},2}(\\Omega).\n\\end{equation*}\nBecause $\\phi\\in C_{\\infty}^{\\infty}(\\Omega,\\Omega)$, $\\partial_{v_i}^ke^{\\lambda(\\phi)}\\in L^{\\infty}(\\Omega)$ for all $i=1,2,\\dots, d$ and $k\\geq 0$. Using the Leibniz rule, it follows that\n\\begin{equation*}\nf=e^{\\lambda(\\phi)}(e^{-\\lambda(\\phi)}f)\\in W_{\\mathbf{v},0}^{\\mathbf{m},2}(\\Omega)=\\mbox{\\rm Dom}(Q_{\\lambda,\\phi}).\n\\end{equation*}\nWe then see that\n\\begin{equation*}\n\\langle H_{\\lambda,\\phi}f,f\\rangle=\\langle H(e^{-\\lambda(\\phi)}f),e^{\\lambda(\\phi)}f\\rangle=Q(e^{-\\lambda(\\phi)}f,e^{\\lambda(\\phi)}f)=Q_{\\lambda,\\phi}(f)\n\\end{equation*}\nas desired.\n\\end{proof}\n\n\\noindent Our second fundamental hypothesis is as follows:\n\n\\begin{hypothesis}\\label{hyp:FormCompare}\n\nLet $Q$ satisfy Hypothesis \\ref{hyp:Garding} with reference operator $\\Lambda$. 
There exist $\\mathcal{E}\\subseteq C_{\\infty}^{\\infty}(\\Omega,\\Omega)$ and $M>0$ such that:\n\\begin{enumerate}[label=\\roman*]\n\\item For each pair $x,y\\in\\Omega$, there is $\\phi\\in \\mathcal{E}$ for which $\\phi(x)-\\phi(y)=x-y$.\n\\item For all $\\phi\\in\\mathcal{E}$, $\\lambda\\in\\mathbb{V}^*$ and $f\\in\\mbox{\\rm Dom}(Q)$,\n\\begin{equation}\\label{eq:FormCompare1}\n|Q_{\\lambda,\\phi}(f)-Q(f)|\\leq \\frac{1}{4}(Q(f)+M(1+R(\\lambda))\\|f\\|_2^2)\n\\end{equation}\n\\end{enumerate}\nwhere $R$ is the symbol of $\\Lambda$. We will call \\eqref{eq:FormCompare1} the form comparison inequality.\n\\end{hypothesis}\n\n\\noindent Our next lemma follows immediately from Lemma \\ref{formgeneratorlambdaphilem} and Hypothesis \\ref{hyp:FormCompare}. Its proof is omitted.\n\\begin{lemma}\\label{lowboundfortwistedformlem}\nLet $\\phi\\in\\mathcal{E}$ and $\\lambda\\in\\mathbb{V}^*$. If Hypothesis \\ref{hyp:FormCompare} holds,\n\\begin{equation}\\label{Htwistedlemmaeq}\n2\\Re[Q_{\\lambda,\\phi}(f)]=2\\Re[(H_{\\lambda,\\phi}f,f)]\\geq -\\frac{M}{2}(1+R(\\lambda))\\|f\\|_2^2\n\\end{equation}\nfor all $f\\in\\mbox{\\rm Dom}(H_{\\lambda,\\phi})$.\n\\end{lemma}\n\n\\noindent Our final hypothesis is more technical and involves a perturbation estimate for sufficiently high powers of $H$, the self-adjoint operator associated to $Q$. Whereas Hypotheses \\ref{hyp:Garding} and \\ref{hyp:FormCompare} are easily satisfied, the third hypothesis is much more subtle: it is difficult to verify and considerably more restrictive.\n\n\\begin{hypothesis}\\label{hyp:kappa}\nLet $Q$ satisfy Hypotheses \\ref{hyp:Garding} and \\ref{hyp:FormCompare} with reference operator $\\Lambda$ and associated self-adjoint operator $H$. Further, let $R$ and $\\mu_{\\Lambda}$ be the symbol and homogeneous order of $\\Lambda$, respectively. Set $\\kappa=\\min\\{n\\in\\mathbb{N}:\\mu_{\\Lambda}\/n<1\\}$ and denote by $Q_{\\Lambda^{\\kappa}}$ the sesquilinear form corresponding to $\\Lambda^{\\kappa}$. 
There is $C>0$ such that, for any $\\phi\\in\\mathcal{E}$ and $\\lambda\\in\\mathbb{V}^*$,\n\\begin{equation*}\n\\mbox{\\rm Dom}(H^{\\kappa}_{\\lambda,\\phi})\\subseteq \\mbox{\\rm Dom}(Q_{\\Lambda^{\\kappa}})\n\\end{equation*}\nand \n\\begin{equation*}\nQ_{\\Lambda^{\\kappa}}(f)\\leq C(|\\langle H_{\\lambda,\\phi}^{\\kappa}f,f\\rangle|+(1+R(\\lambda))^{\\kappa}\\|f\\|_2^2)\n\\end{equation*}\nfor all $f\\in \\mbox{\\rm Dom}(H_{\\lambda,\\phi}^{\\kappa})$.\n\\end{hypothesis}\n\n\n\n\\noindent In \\cite{Davies1995}, the self-adjoint operators considered are required to satisfy Hypothesis \\ref{hyp:Garding} in the special case that $\\Lambda=(-\\Delta)^m$ on $\\mathbb{R}^d$ for some $m\\in\\mathbb{N}$. The theory in \\cite{Davies1995} proceeds under only two hypotheses, which are paralleled by Hypotheses \\ref{hyp:Garding} and \\ref{hyp:FormCompare} above, respectively. Incidentally, off-diagonal estimates are only shown there in the case that $2m>d$. In what follows, the constant $M>0$ refers to that which is specified in Hypothesis \\ref{hyp:FormCompare}. Positive constants denoted by $C$ will change from line to line.\n\n\\begin{lemma}\\label{twistedsgboundlemma}\nFor any $\\lambda\\in\\mathbb{V}^*$ and $\\phi\\in\\mathcal{E}$,\n\\begin{equation*}\n\\|T_t^{\\lambda,\\phi}\\|_{2\\rightarrow 2}\\leq \\exp(M(1+R(\\lambda))t\/4)\n\\end{equation*}\nfor all $t>0$.\n\\end{lemma}\n\\begin{proof}\nFor $f\\in L^2(\\Omega)$, put $f_t=T_t^{\\lambda,\\phi}f$. 
By Lemma \\ref{lowboundfortwistedformlem},\n\\begin{equation*}\n\\frac{d}{dt}\\|f_t\\|_2^2=-2\\Re[(H_{\\lambda,\\phi}f_t,f_t)]\\leq\\frac{M}{2}(1+R(\\lambda))\\|f_t\\|_2^2.\n\\end{equation*}\nThe result now follows from Gr\\\"{o}nwall's lemma.\n\\end{proof}\n\n\\begin{lemma}\\label{twistedgenandsgboundlemma}\nThere exists $C>0$ such that\n\\begin{equation*}\n\\|H_{\\lambda,\\phi}T_t^{\\lambda,\\phi}\\|_{2\\rightarrow 2}\\leq\\frac{C}{t}\\exp\\left(\\frac{M}{2}(1+R(\\lambda))t\\right)\n\\end{equation*}\nfor all $t>0$, $\\lambda\\in\\mathbb{V}^*$ and $\\phi\\in\\mathcal{E}$. \n\\end{lemma}\n\\begin{proof}\nOur argument uses the theory of bounded holomorphic semigroups, cf.\\ \\cite{Davies1980}. For $f\\in L^2(\\Omega)$, $r>0$ and $|\\theta|\\leq \\pi\/3$ put\n\\begin{equation*}\nf_r=\\exp[-re^{i\\theta}H_{\\lambda,\\phi}]f.\n\\end{equation*}\nIt follows that $f_r\\in\\mbox{\\rm Dom}(H_{\\lambda,\\phi})$ and\n\\begin{eqnarray*}\n\\frac{d}{dr}\\|f_r\\|_2^2&=&-e^{i\\theta}(H_{\\lambda,\\phi}f_r,f_r)-e^{-i\\theta}(f_r,H_{\\lambda,\\phi}f_r)\\\\\n&=&-e^{i\\theta}Q_{\\lambda,\\phi}(f_r)-e^{-i\\theta}\\overline{Q_{\\lambda,\\phi}(f_r)}\\\\\n&=&-(e^{i\\theta}+e^{-i\\theta})Q(f_r)+D_r\n\\end{eqnarray*}\nwhere\n\\begin{equation*}\nD_r=-e^{i\\theta}[Q_{\\lambda,\\phi}(f_r)-Q(f_r)]-e^{-i\\theta}[\\overline{Q_{\\lambda,\\phi}(f_r)}-Q(f_r)].\n\\end{equation*}\nBy Hypothesis \\ref{hyp:FormCompare},\n\\begin{equation*}\n|D_r|\\leq (Q(f_r)+M(1+R(\\lambda))\\|f_r\\|_2^2)\/2\n\\end{equation*}\nand so with the observation that $e^{i\\theta}+e^{-i\\theta}\\geq 1$ for all $|\\theta|\\leq \\pi\/3$,\n\\begin{equation*}\n\\frac{d}{dr}\\|f_r\\|_2^2\\leq \\frac{M}{2}(1+R(\\lambda))\\|f_r\\|_2^2.\n\\end{equation*}\nHence,\n\\begin{equation*}\n\\|f_r\\|_2\\leq \\exp(M(1+R(\\lambda))r\/4)\\|f\\|_2\n\\end{equation*}\nin view of Gr\\\"{o}nwall's lemma. 
From the above estimate we have\n\\begin{eqnarray*}\n\\lefteqn{\\hspace{-1cm}\\|\\exp[-zH_{\\lambda,\\phi}-M(1+R(\\lambda))z\/2]\\|_{2\\rightarrow 2}}\\\\\n&\\leq&\\exp(M(1+R(\\lambda))r\/4)\\exp(-M(1+R(\\lambda))\\Re(z)\/2)\\leq 1\n\\end{eqnarray*}\nfor all $z=re^{i\\theta}$ with $r>0$ and $|\\theta|\\leq \\pi\/3$, because $2\\Re(z)\\geq r$. Theorem 8.4.6 of \\cite{Davies1980} yields\n\\begin{equation*}\n\\|(H_{\\lambda,\\phi}+M(1+R(\\lambda))\/2)\\exp[-tH_{\\lambda,\\phi}-M(1+R(\\lambda))t\/2]\\|_{2\\rightarrow 2}\\leq \\frac{C'}{t}\n\\end{equation*}\nfor all $t>0$. It now follows that\n\\begin{equation*}\n\\|H_{\\lambda,\\phi}T_t^{\\lambda,\\phi}\\|_{2\\rightarrow 2}\\leq\\frac{C}{t}\\exp(M(1+R(\\lambda))t\/2)\n\\end{equation*}\nfor all $t>0$ where we have put $C=C'+2$.\n\\end{proof}\n\n\\begin{lemma}\\label{Hkappalemma}\nFor any $k\\in\\mathbb{N}$, there is $C> 0$ such that\n\\begin{equation*}\n\\|H_{\\lambda,\\phi}^ke^{-tH_{\\lambda,\\phi}}\\|_{2\\rightarrow 2}\\leq \\frac{C}{t^k}\\exp(M(1+R(\\lambda))t\/2)\n\\end{equation*}\nfor all $t>0$, $\\phi\\in\\mathcal{E}$ and $\\lambda\\in \\mathbb{V}^*$.\n\\end{lemma}\n\\begin{proof}\nAs $-H_{\\lambda,\\phi}$ is the generator of the semigroup $e^{-tH_{\\lambda,\\phi}}$, for any $t>0$ and $f\\in L^2(\\Omega)$, $e^{-tH_{\\lambda,\\phi}}f\\in \\mbox{\\rm Dom}(H_{\\lambda,\\phi}^k)$. We have\n\\begin{equation*}\nH_{\\lambda,\\phi}^ke^{-tH_{\\lambda,\\phi}}=\\left(H_{\\lambda,\\phi}e^{-(t\/k)H_{\\lambda,\\phi}}\\right)^k\n\\end{equation*}\nand so by the previous lemma\n\\begin{equation*}\n\\|H_{\\lambda,\\phi}^ke^{-tH_{\\lambda,\\phi}}\\|_{2\\rightarrow 2}\\leq \\left(\\frac{Ck}{t}\\exp(M(1+R(\\lambda))t\/(2k))\\right)^k\n\\end{equation*}\nfrom which the result follows.\n\\end{proof}\n\n\\section{Off-diagonal estimates}\n\n\\noindent In this section, we prove that the semigroup $T_t=e^{-tH}$ has an integral kernel $K_H$ and we deduce off-diagonal estimates for $K_H$. 
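\\noindent For orientation, we record a model computation (not needed in the proofs): when $\\Lambda=(-\\Delta)^m$ on $\\mathbb{R}^d$, the symbol is $R(\\xi)=|\\xi|^{2m}$ and a direct evaluation of the supremum defining the Legendre-Fenchel transform gives\n\\begin{equation*}\nR^{\\#}(v)=\\sup_{\\xi}\\left(\\xi\\cdot v-|\\xi|^{2m}\\right)=\\left(1-\\frac{1}{2m}\\right)(2m)^{-1\/(2m-1)}|v|^{2m\/(2m-1)},\n\\end{equation*}\nso that an off-diagonal bound in terms of $tR^{\\#}((x-y)\/t)$ takes the familiar form $\\exp(-c(|x-y|^{2m}\/t)^{1\/(2m-1)})$ known for higher order elliptic operators; for $m=1$ one recovers $R^{\\#}(v)=|v|^2\/4$ and the Gaussian factor.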
Here we shall assume the notation of the last section and, like before, all statements are to include Hypotheses \\ref{hyp:Garding} and \\ref{hyp:FormCompare} without explicit mention. \n\n\\begin{lemma}\\label{offdiagonalmachinelem}\nIf the twisted semigroup $T^{\\lambda,\\phi}_t$ satisfies the ultracontractive estimate\n \\begin{equation}\\label{twistedultracontraceq}\n \\|T^{\\lambda,\\phi}_t\\|_{2\\rightarrow \\infty}\\leq \\frac{C}{t^{\\mu_{\\Lambda}\/2}}\\exp[M(R(\\lambda)+1)t\/2]\n \\end{equation}\nfor all $\\lambda\\in \\mathbb{V}^*$, $\\phi\\in \\mathcal{E}$ and $t>0$, where $C,M>0$, then $T_t$ has an integral kernel $K_H(t,x,y)$, with $K_H(t,x,\\cdot)\\in L^1(\\Omega)$ for each $t>0$ and $x\\in\\Omega$, satisfying the off-diagonal bound\n\\begin{equation*}\n|K_H(t,x,y)|\\leq \\frac{C}{t^{\\mu_{\\Lambda}}}\\exp\\left(-tMR^{\\#}\\left(\\frac{x-y}{t}\\right)+Mt\\right)\n\\end{equation*}\nfor all $x,y\\in\\Omega$ and $t>0$, where $R^{\\#}$ is the Legendre-Fenchel transform of $R$ and $M$ and $C$ are positive constants.\n\\end{lemma}\n\n\\begin{proof}\nIt is clear that the adjoint of $T_t^{\\lambda,\\phi}$ is $T_t^{-\\lambda,\\phi}$ and so by duality and \\eqref{twistedultracontraceq},\n\\begin{equation*}\n \\|T^{\\lambda,\\phi}_t\\|_{1\\rightarrow 2}\\leq \\frac{C}{t^{\\mu_{\\Lambda}\/2}}\\exp[M(R(\\lambda)+1)t\/2]\n\\end{equation*}\nfor $t>0$ where we have replaced $MR(-\\lambda)$ by $MR(\\lambda)$ in view of Proposition \\ref{prop:ComparePoly}. 
Thus for all $t>0$, $\\lambda\\in\\mathbb{V}^*$ and $\\phi\\in\\mathcal{E}$, the semigroup property gives\n\\begin{eqnarray*}\n\\|T^{\\lambda,\\phi}_t\\|_{1\\rightarrow \\infty}&\\leq&\\|T^{\\lambda,\\phi}_{t\/2}\\|_{1\\rightarrow 2}\\|T^{\\lambda,\\phi}_{t\/2}\\|_{2\\rightarrow \\infty}\\\\\n&\\leq&\\frac{C}{(t\/2)^{\\mu_{\\Lambda}\/2}}\\exp[M(R(\\lambda)+1)t\/4]\\frac{C}{(t\/2)^{\\mu_{\\Lambda}\/2}}\\exp[M(R(\\lambda)+1)t\/4]\\\\\n&\\leq&\\frac{C}{t^{\\mu_{\\Lambda}}}\\exp[Mt(R(\\lambda)+1)].\n\\end{eqnarray*}\nThe above estimate guarantees that $T_t^{\\lambda,\\phi}$ has integral kernel $K_H^{\\lambda,\\phi}(t,x,y)$ satisfying the same bound (see Theorem 2.27 of \\cite{Davies1980}). By construction, we also have \n\\begin{equation*}\nK_H^{\\lambda,\\phi}(t,x,y)=e^{-\\lambda(\\phi(x))}K_H(t,x,y)e^{\\lambda(\\phi(y))} \n\\end{equation*}\n where $K_H=K_H^{0,\\phi}$ is the integral kernel of $T_t=T_t^{0,\\phi}$. Therefore\n\\begin{equation*}\n|e^{-\\lambda(\\phi(x))}K_H(t,x,y)e^{\\lambda(\\phi(y))}|\\leq\\frac{C}{t^{\\mu_{\\Lambda}}}\\exp(Mt(R(\\lambda)+1))\n\\end{equation*}\nor equivalently\n\\begin{equation*}\n|K_H(t,x,y)|\\leq\\frac{C}{t^{\\mu_{\\Lambda}}}\\exp\\left(\\lambda(\\phi(y)-\\phi(x))+Mt(R(\\lambda)+1)\\right)\n\\end{equation*}\nfor all $t>0$, $x,y\\in\\Omega$, $\\lambda\\in\\mathbb{V}^*$ and $\\phi\\in\\mathcal{E}$. In view of Hypothesis \\ref{hyp:FormCompare}, for any $x,y\\in\\Omega$ there is $\\phi\\in\\mathcal{E}$ for which $\\phi(x)-\\phi(y)=x-y$. Consequently, we have that for all $x,y\\in\\Omega$, $\\lambda\\in\\mathbb{V}^*$ and $t>0$,\n\\begin{equation*}\n|K_H(t,x,y)|\\leq\\frac{C}{t^{\\mu_{\\Lambda}}}\\exp\\left(\\lambda(y-x)+Mt(R(\\lambda)+1)\\right).\n\\end{equation*}\nThe proof of the lemma will be complete upon minimizing the above bound with respect to $\\lambda\\in\\mathbb{V}^*$. In this process, we shall see how the Legendre-Fenchel transform appears naturally. 
For any $x,y\\in\\Omega$ and $t>0$, we have\n\\begin{eqnarray*}\n |K_H(t,x,y)|&\\leq& \\frac{C}{t^{\\mu_{\\Lambda}}}\\inf_{\\lambda\\in\\mathbb{V}^*}\\exp\\left(\\lambda(y-x)+Mt(R(\\lambda)+1)\\right)\\\\\n &\\leq&\\frac{C}{t^{\\mu_{\\Lambda}}}\\exp\\left(-t\\sup_{\\lambda}\\left\\{\\lambda\\left(\\frac{x-y}{t}\\right)-MR(\\lambda)\\right\\}\\right)\\exp(Mt)\\\\\n &\\leq&\\frac{C}{t^{\\mu_{\\Lambda}}}\\exp\\left(-t(MR)^{\\#}\\left(\\frac{x-y}{t}\\right)+Mt\\right)\\\\\n &\\leq&\\frac{C}{t^{\\mu_{\\Lambda}}}\\exp\\left(-tMR^{\\#}\\left(\\frac{x-y}{t}\\right)+Mt\\right)\n\\end{eqnarray*}\nwhere we replaced $(MR)^{\\#}$ by $MR^{\\#}$ in view of Corollary \\ref{cor:MovingConstants}.\n\\end{proof}\n\n\n\n\\begin{theorem}\\label{thm:Main}\nLet $Q$ satisfy Hypotheses \\ref{hyp:Garding}, \\ref{hyp:FormCompare} and \\ref{hyp:kappa} with reference operator $\\Lambda$ and associated self-adjoint operator $H$. Let $R$ be the symbol of $\\Lambda$ and $\\mu_{\\Lambda}$ be its homogeneous order. Then the semigroup $T_t=e^{-tH}$ has integral kernel $K_H:(0,\\infty)\\times \\Omega\\times \\Omega\\rightarrow \\mathbb{C}$ satisfying\n\\begin{equation}\\label{eq:Main}\n|K_H(t,x,y)|\\leq \\frac{C}{t^{\\mu_{\\Lambda}}}\\exp\\left(-tMR^{\\#}\\left(\\frac{x-y}{t}\\right)+Mt\\right)\n\\end{equation}\nfor all $x,y\\in\\Omega$ and $t>0$, where $R^{\\#}$ is the Legendre-Fenchel transform of $R$ and $C$ and $M$ are positive constants. \n\\end{theorem}\n\n\\begin{proof}\nTake $\\kappa$ as in Hypothesis \\ref{hyp:kappa}. We note that for all $f\\in \\mbox{\\rm Dom}(\\Lambda^{\\kappa})$,\n\\begin{equation*}\n\\|f\\|_{\\infty}\\leq CQ_{\\Lambda^{\\kappa}}(f)^{\\mu_{\\Lambda}\/2\\kappa}\\|f\\|_2^{1-\\mu_{\\Lambda}\/\\kappa}\n\\end{equation*}\nin view of Lemma \\ref{nashlikelem}. The application of the lemma is justified because $\\Lambda^{\\kappa}$ is positive-homogeneous with $\\kappa^{-1}\\Exp(\\Lambda^{\\kappa})=\\Exp(\\Lambda)$ and, as required, $\\mu_{\\Lambda^{\\kappa}}=\\mu_{\\Lambda}\/\\kappa<1$. 
For $f\\in L^2(\\Omega)$, set $f_t=T_t^{\\lambda,\\phi}f$. In view of Hypothesis \\ref{hyp:kappa} and Lemmas \\ref{twistedsgboundlemma} and \\ref{twistedgenandsgboundlemma}, we have\n\\begin{eqnarray*}\n\\|f_t\\|_{\\infty}&\\leq& CQ_{\\Lambda^{\\kappa}}(f_t)^{\\mu_{\\Lambda}\/2\\kappa}\\|f_t\\|_2^{1-\\mu_{\\Lambda}\/\\kappa}\\\\\n&\\leq&C\\left(|\\langle H_{\\lambda,\\phi}^{\\kappa}f_t,f_t\\rangle|+(1+R(\\lambda))^{\\kappa}\\|f_t\\|_2^2\\right)^{\\mu_{\\Lambda}\/2\\kappa}\\|f_t\\|_2^{1-\\mu_{\\Lambda}\/\\kappa}\\\\\n&\\leq&C\\left(\\|H_{\\lambda,\\phi}^{\\kappa}f_t\\|_2\\|f_t\\|_2+(1+R(\\lambda))^{\\kappa}\\|f_t\\|_2^2\\right)^{\\mu_{\\Lambda}\/2\\kappa}\\|f_t\\|_2^{1-\\mu_{\\Lambda}\/\\kappa}\\\\\n&\\leq&C\\left(\\frac{\\exp(M(1+R(\\lambda))t\/4)}{t^\\kappa}+(1+R(\\lambda))^{\\kappa}\\right)^{\\mu_{\\Lambda}\/2\\kappa}\\\\\n& &\\hspace{2cm}\\times\\exp(M(1+R(\\lambda))t\/4)\\|f\\|_2\\\\\n&\\leq&\\frac{C}{t^{\\mu_{\\Lambda}\/2}}\\exp(M(1+R(\\lambda))t\/2)\\|f\\|_2\n\\end{eqnarray*}\nfor all $\\phi\\in\\mathcal{E}$ and $\\lambda\\in\\mathbb{V}^*$. In view of Lemma \\ref{offdiagonalmachinelem}, the theorem is proved.\n\\end{proof}\n\n\n\\section{Homogeneous Operators}\\label{sec:HHomogeneous}\n\nIn this short section, we show that the term $Mt$ in the heat kernel estimate of Theorem \\ref{thm:Main} can be removed when $H$, a generally variable-coefficient operator, is ``homogeneous\" in the sense given by Definition \\ref{def:HHomogeneousOperator} below. Our setting is that in which $\\Omega=\\mathbb{V}$ and we shall assume throughout this section that $\\mu_{\\Lambda}<1$. Our arguments closely follow the work of G. Barbatis and E. B. Davies \\cite{Barbatis1996}. \\\\\n\n\\noindent Let $Q$ be a sesquilinear form on $L^2(\\mathbb{V})$ satisfying Hypotheses \\ref{hyp:Garding} and \\ref{hyp:FormCompare} with reference operator $\\Lambda$ and associated self-adjoint operator $H$. 
For any $E\\in\\Exp(\\Lambda)$ (which we keep fixed throughout this section), observe that\n\\begin{equation*}\n(U_sf)(x)=s^{\\mu_{\\Lambda}\/2}f(s^Ex)\n\\end{equation*}\ndefines a unitary operator $U_s$ on $L^2(\\mathbb{V})$ for each $s>0$ with $U_s^{*}=U_{1\/s}$. For each $s>0$, set\n\\begin{equation*}\nH_s=s^{-1}U_{s}^{*}H U_{s}\n\\end{equation*}\nand note that $H_s$ is a self-adjoint operator on $L^2(\\mathbb{V})$. It is easily verified that the sesquilinear form $Q^s$ associated to $H_s$ satisfies\n\\begin{equation*}\nQ^s(f,g)=s^{-1} Q(U_s f,U_s g)\n\\end{equation*}\nfor all $f,g$ in the common domain $\\mbox{\\rm Dom}(Q^s)=\\mbox{\\rm Dom}(Q)=\\mbox{\\rm Dom}(\\Lambda^{1\/2})$. As $Q^s$ is produced by rescaling $Q$, it is clear that $Q^s$ satisfies Hypotheses \\ref{hyp:Garding} and \\ref{hyp:FormCompare}. Let us isolate the following special situation:\n\\begin{definition}\\label{def:HHomogeneousOperator}\nAssuming the notation above, we say that $H$ is homogeneous provided that $Q^s$ satisfies Hypotheses \\ref{hyp:Garding} and \\ref{hyp:FormCompare} with the same constants as $Q$ for all $s>0$. In other words, $Q^s$ satisfies the estimates \\eqref{eq:Garding} and \\eqref{eq:FormCompare1} uniformly for $s>0$.\n\\end{definition}\n\n\\noindent We note that a positive-homogeneous operator $\\Lambda$ is homogeneous in the above sense, for our defining property of homogeneous constant-coefficient operators can be written equivalently as $\\Lambda_s=\\Lambda$ for all $s>0$. In the example section below, we will see that when $H$ is a variable-coefficient partial differential operator consisting only of ``principal terms\", the replacement of $H_s$ by $H$ amounts to a rescaling of the arguments of $H$'s coefficients.\n\n\\begin{theorem}\\label{thm:HHomogeneous}\nLet $Q$ be a sesquilinear form on $L^2(\\mathbb{V})$ satisfying Hypotheses \\ref{hyp:Garding} and \\ref{hyp:FormCompare} with reference operator $\\Lambda$ and associated self-adjoint operator $H$. 
Let $R$ and $\\mu_{\\Lambda}$ be the symbol and homogeneous order of $\\Lambda$, respectively. Assume further that $\\mu_{\\Lambda}<1$, so that Hypothesis \\ref{hyp:kappa} is automatically satisfied (in view of Proposition \\ref{prop:kappa}) and the conclusion of Theorem \\ref{thm:Main} is valid. If $H$ is homogeneous, then its heat kernel $K_{H}$ satisfies the estimate\n\\begin{equation*}\n|K_H(t,x,y)|\\leq \\frac{C}{t^{\\mu_{\\Lambda}}}\\exp\\left(-tMR^{\\#}\\left(\\frac{x-y}{t}\\right)\\right)\n\\end{equation*}\nfor all $x,y\\in\\mathbb{V}$ and $t>0$, where $C$ and $M$ are positive constants.\n\\end{theorem}\n\n\\begin{proof}\nUsing the fact that $U_s$ is unitary for each $s>0$, it follows that\n\\begin{equation*}\ne^{-tH_s}=e^{-ts^{-1}U_{1\/s}HU_s}=U_{1\/s}e^{-(t\/s)H}U_s\n\\end{equation*}\nfor $s,t>0$. Consequently, for $f\\in L^2(\\mathbb{V})$,\n\\begin{eqnarray*}\n\\left(e^{-tH_s}f\\right)(x)&=&\\int_{\\mathbb{V}}s^{-\\mu_{\\Lambda}\/2}K_H(t\/s,s^{-E}x,y)s^{\\mu_{\\Lambda}\/2}f(s^{E}y)\\,dy\\\\\n&=&s^{-\\mu_{\\Lambda}}\\int_{\\mathbb{V}}K_H(t\/s,s^{-E}x,s^{-E}y)f(y)\\,dy\n\\end{eqnarray*}\nfor $s,t>0$ and almost every $x\\in\\mathbb{V}$. Thus, $e^{-tH_s}$ has an integral kernel $K_H^s:(0,\\infty)\\times\\mathbb{V}\\times\\mathbb{V}\\rightarrow\\mathbb{C}$ satisfying\n\\begin{equation*}\nK_H^s(t,x,y)=s^{-\\mu_{\\Lambda}}K_H(t\/s,s^{-E}x,s^{-E}y)\n\\end{equation*}\nfor $x,y\\in\\mathbb{V}$. Equivalently,\n\\begin{equation*}\nK_H(t,x,y)=s^{\\mu_{\\Lambda}}K_H^s(st,s^{E}x,s^{E}y)\n\\end{equation*}\nfor $t,s>0$ and $x,y\\in\\mathbb{V}$. We now apply the same sequence of arguments to the self-adjoint operators $H_s$ and the semigroups $e^{-tH_s}$. Under the hypothesis that $H$ is homogeneous, a careful study reveals that each estimate in the sequence of lemmas preceding Theorem \\ref{thm:Main} and the estimates in the proof of Theorem \\ref{thm:Main} are independent of $s$. 
From this, we obtain positive constants $C$ and $M$ for which\n\\begin{equation*}\n|K_H^s(t,x,y)|\\leq \\frac{C}{t^{\\mu_{\\Lambda}}}\\exp\\left(-tMR^{\\#}\\left(\\frac{x-y}{t}\\right)+Mt\\right)\n\\end{equation*}\nfor all $t>0$ and $x,y\\in\\mathbb{V}$, and this holds uniformly for $s>0$. Consequently,\n\\begin{eqnarray*}\n|K_H(t,x,y)|&\\leq &s^{\\mu_{\\Lambda}}\\frac{C}{(st)^{\\mu_{\\Lambda}}}\\exp\\left(-(st)MR^{\\#}\\left(\\frac{s^{E}(x-y)}{st}\\right)+Mst\\right)\\\\\n&\\leq & \\frac{C}{t^{\\mu_{\\Lambda}}}\\exp\\left(-tMR^{\\#}\\left(\\frac{x-y}{t}\\right)+Mst\\right)\n\\end{eqnarray*}\nfor all $s,t>0$ and $x,y\\in\\mathbb{V}$, where we have used the fact that $I-E\\in\\Exp(R^{\\#})$. The desired estimate follows by letting $s\\rightarrow 0$.\n\\end{proof}\n\n\\section{Regularity of $K_H$}\\label{sec:KernelRegularity}\n\nIn this section, we discuss the regularity of the heat kernel $K_H$. Given a non-empty open subset $\\Omega$ of $\\mathbb{V}$, we assume that $Q$ is a sesquilinear form on $L^2(\\Omega)$ which satisfies Hypotheses \\ref{hyp:Garding} and \\ref{hyp:FormCompare} with reference operator $\\Lambda$ and associated self-adjoint operator $H$. Further, we shall assume that $\\mu_\\Lambda<1$ (and so Hypothesis \\ref{hyp:kappa} is satisfied automatically) and it is with this assumption that we show $K_H$ is H\\\"{o}lder continuous.\n\n\n\n\\begin{lemma}\\label{oneoverRintegrablelemma}\nLet $\\Lambda$ be a self-adjoint positive-homogeneous operator with real symbol $R$ and homogeneous order $\\mu_{\\Lambda}$. If $\\mu_{\\Lambda}<1$, then\n\\begin{equation*}\n\\int_{\\mathbb{V}^*}\\frac{1}{(1+R(\\xi))^{1-\\epsilon}}d\\xi<\\infty\n\\end{equation*}\nwhere $\\epsilon=(1-\\mu_{\\Lambda})\/2$. In particular, $(1+R)^{-1}\\in L^1(\\mathbb{V}^*)$.\n\\end{lemma}\n\\begin{proof}\nFor any Borel set $B$, write $m(B)=\\int_{B}d\\xi$. 
It suffices to prove that\n\\begin{equation*}\n\\sum_{l=0}^{\\infty}\\frac{m(F_l)}{2^l}<\\infty\n\\end{equation*}\nwhere $F_l:=\\{\\xi\\in\\mathbb{V}^*:2^l\\leq R(\\xi)^{1-\\epsilon}\\leq 2^{l+1}\\}$; indeed, the set on which $R(\\xi)^{1-\\epsilon}\\leq 1$ is bounded and so contributes a finite amount to the integral. To this end, fix $E\\in\\Exp(R)$ and observe that, for any $l\\geq 1$, \n\\begin{eqnarray*}\nF_l&=&\\left\\{\\xi:2^{l-1}\\leq(t^{-1} R(\\xi))^{1-\\epsilon}\\leq 2^l\\right\\}\\\\\n&=&\\left\\{\\xi:2^{l-1}\\leq R(t^{-E}\\xi)^{1-\\epsilon}\\leq 2^l\\right\\}\\\\\n&=&\\{t^E\\xi:2^{l-1}\\leq R(\\xi)^{1-\\epsilon}\\leq 2^l\\}=t^E F_{l-1}\n\\end{eqnarray*}\nwhere we have set $t=2^{1\/(1-\\epsilon)}$. Continuing inductively, we see that $F_l=t^{lE}F_0$ for all $l\\in\\mathbb{N}$ and so it follows that\n\\begin{equation*}\nm(F_l)=\\int_{t^{lE}F_0}d\\xi=\\int_{F_0}\\det(t^{lE})d\\xi=(t^{l\\tr E})m(F_0)=t^{l\\mu_{\\Lambda}}m(F_0)\n\\end{equation*}\nwhere we have used the fact that $\\mu_{\\Lambda}=\\tr E^*=\\tr E$ because $E^*\\in\\Exp(\\Lambda)$. Consequently,\n\\begin{equation*}\n\\sum_{l=0}^{\\infty}2^{-l}m(F_l)=m(F_0)\\sum_{l=0}^{\\infty}2^{-l}(t^{l\\mu_{\\Lambda}})=m(F_0)\\sum_{l=0}^{\\infty}\\left(2^{-1}t^{\\mu_{\\Lambda}}\\right)^{l}<\\infty\n\\end{equation*}\nbecause $2^{-1}t^{\\mu_{\\Lambda}}=2^{(\\mu_{\\Lambda}\/(1-\\epsilon)-1)}<1$.\n\\end{proof}\n\n\\begin{lemma}\\label{holdercontlemma}\nLet $|\\cdot|$ be a norm on $\\mathbb{V}$ and suppose that $\\mu_{\\Lambda}<1$. There exists $C>0$ such that\n\\begin{equation*}\n\\int_{\\mathbb{V}^*}\\frac{|e^{i\\xi(x)}-e^{i\\xi(y)}|^2}{1+R(\\xi)}d\\xi\\leq C|x-y|^{(1-\\mu_{\\Lambda})}\n\\end{equation*}\nfor all $x,y\\in\\mathbb{V}$.\n\\end{lemma}\n\\begin{proof}\nLet $\\mathbf{m}\\in \\mathbb{N}_+^d$ and $\\mathbf{v}$ be those guaranteed by Proposition \\ref{prop:OperatorRepresentation} and set $E=E_{\\mathbf{v}}^{2\\mathbf{m}}\\in \\Exp(\\Lambda)$. We note that it suffices to prove the desired estimate where $|\\cdot|$ is the Euclidean norm associated to the coordinate system defined by $\\mathbf{v}$. 
In view of the preceding lemma, \n\\begin{equation*}\n\\frac{|e^{i\\xi(x)}-e^{i\\xi(y)}|^2}{(1+R(\\xi))}\\leq 4(1+R(\\xi))^{-1}\\in L^1(\\mathbb{V}^*)\n\\end{equation*} for all $x,y\\in\\mathbb{V}$. Consequently, it suffices to treat only the case in which $0<|x-y|\\leq 1$. In this case, set $t=|x-y|^{-1}$ and observe that\n\\begin{eqnarray*}\n\\int_{\\mathbb{V}^*}\\frac{|e^{i\\xi(x)}-e^{i\\xi(y)}|^2}{(1+R(\\xi))}d\\xi&=&\\int_{t\\leq R(\\xi)}\\frac{|e^{i\\xi(x)}-e^{i\\xi(y)}|^2}{(1+R(\\xi))}d\\xi+\\int_{t>R(\\xi)}\\frac{|e^{i\\xi(x)}-e^{i\\xi(y)}|^2}{(1+R(\\xi))}d\\xi\\\\\n&\\leq&\\int_{t\\leq R(\\xi)}\\frac{4}{R(\\xi)}d\\xi+\\int_{t>R(\\xi)}|e^{i\\xi(x)}-e^{i\\xi(y)}|^2d\\xi\\\\\n&\\leq&\\int_{1\\leq R(\\xi)}\\frac{4}{R(t^{E^*}\\xi)}t^{\\mu_{\\Lambda}}d\\xi+\\int_{1>R(\\xi)}|e^{i\\xi(t^{E}x)}-e^{i\\xi(t^{E}y)}|^2t^{\\mu_{\\Lambda}}d\\xi\\\\\n&\\leq&t^{\\mu_{\\Lambda}-1}\\int_{1\\leq R(\\xi)}\\frac{4}{R(\\xi)}d\\xi+t^{\\mu_{\\Lambda}}|t^{E}(x-y)|^2\\int_{1>R(\\xi)}4|\\xi|_{*}^2d\\xi\n\\end{eqnarray*}\nwhere $|\\cdot|_{*}$ is the corresponding dual norm on $\\mathbb{V}^*$. Using Lemma \\ref{oneoverRintegrablelemma} and the fact that $|\\xi|_{*}^2$ is bounded on the bounded set $\\{1>R(\\xi)\\}$, it follows that\n\\begin{equation*}\n\\int_{\\mathbb{V}^*}\\frac{|e^{i\\xi(x)}-e^{i\\xi(y)}|^2}{(1+R(\\xi))}d\\xi\\leq C\\left(t^{\\mu_{\\Lambda}-1}+t^{\\mu_{\\Lambda}}|t^{E}(x-y)|^2\\right)\n\\end{equation*}\nfor some $C>0$. Given that $\\max(\\Spec(E))\\leq 1\/2$ in view of Proposition \\ref{prop:OperatorRepresentation}, we have $|t^E(x-y)|\\leq t^{1\/2}|x-y| $ because $t\\geq 1$ and $|\\cdot|$ is the Euclidean norm associated to $\\mathbf{v}$. 
Consequently,\n\\begin{equation*}\n\\int_{\\mathbb{V}^*}\\frac{|e^{i\\xi(x)}-e^{i\\xi(y)}|^2}{(1+R(\\xi))}d\\xi\\leq C\\left(t^{\\mu_{\\Lambda}-1}+t^{\\mu_{\\Lambda}+1}|x-y|^2\\right)= 2C|x-y|^{(1-\\mu_{\\Lambda})}.\n\\end{equation*}\n\\end{proof}\n\n\\noindent The following lemma is analogous to Lemma 14 of \\cite{Davies1995}.\n\n\n\\begin{lemma}\\label{phiexistslem}\nLet $Q$ satisfy Hypotheses \\ref{hyp:Garding} and \\ref{hyp:FormCompare} on $L^2(\\Omega)$ with associated self-adjoint operator $H$ and reference operator $\\Lambda$ and assume that $\\mu_{\\Lambda}<1$. There exists a uniformly bounded function $\\phi:\\Omega\\rightarrow L^2(\\Omega)$ such that for every $f\\in L^2(\\Omega)$,\n \\begin{equation}\\label{phiexistseq}\n \\{(H+1)^{-1\/2}f\\}(x)=\\langle f,\\phi(x)\\rangle\n \\end{equation}\nfor almost every $x\\in\\Omega$. Moreover, $\\phi$ is H\\\"{o}lder continuous of order $\\alpha=(1-\\mu_{\\Lambda})\/2$. In particular, $(H+1)^{-1\/2}$ is a bounded operator from $L^2(\\Omega)$ into $L^{\\infty}(\\Omega)$ and for each $f\\in L^2(\\Omega)$, there is a version of $(H+1)^{-1\/2}f$ which is bounded and H\\\"{o}lder continuous of order $\\alpha$.\n\\end{lemma}\n\\begin{proof}\nIn view of \\eqref{eq:Garding},\n\\begin{equation*}\n\\int_{\\mathbb{V}^*}(1+R(\\xi))|\\widehat {g_*}(\\xi)|^2d\\xi\\leq 2\\|(1+H)^{1\/2}g\\|_2^2\n\\end{equation*}\nfor all $g\\in W_{\\mathbf{v},0}^{\\mathbf{m},2}(\\Omega)$ where $R$ is the symbol of $\\Lambda$ and $g_*$ denotes the extension of $g$ to $\\mathbb{V}$ defined by \\eqref{eq:ExtensionDefinition}. Also by the Cauchy-Schwarz inequality\n\\begin{equation*}\n\\int_{\\mathbb{V}^*}(1+R(\\xi))^{\\epsilon\/2}|\\widehat{g_*}(\\xi)|d\\xi\\leq C\\left(\\int_{\\mathbb{V}^*}(1+R(\\xi))|\\widehat{g_*}(\\xi)|^2d\\xi\\right)^{1\/2}\n\\end{equation*}\nwhere\n\\begin{equation*}\nC^2=\\int_{\\mathbb{V}^*}\\frac{(1+R(\\xi))^{\\epsilon}}{(1+R(\\xi))}d\\xi<\\infty\n\\end{equation*}\nin view of Lemma \\ref{oneoverRintegrablelemma}. 
Consequently, for all $g\\in W_{\\mathbf{v},0}^{\\mathbf{m},2}(\\Omega)$, $\\widehat{g_*}\\in L^1(\\mathbb{V}^*)$ and\n\\begin{equation}\\label{phiexistseq1}\n \\|g\\|_{\\infty}=\\|g_*\\|_{L^\\infty(\\mathbb{V})}\\leq\\int_{\\mathbb{V}^*}(1+R(\\xi))^{\\epsilon\/2}|\\widehat{g_*}(\\xi)|d\\xi\\leq C\\|(1+H)^{1\/2}g\\|_2.\n\\end{equation}\nSo $(H+1)^{1\/2}$ is an injective self-adjoint operator and therefore has dense range in $L^2(\\Omega)$. We can therefore consider $(H+1)^{-1\/2}$, which by \\eqref{phiexistseq1} is a bounded operator from $L^2(\\Omega)$ into $L^{\\infty}(\\Omega)$.\n\nLet $|\\cdot|$ be a norm on $\\mathbb{V}$ and for $f\\in L^2(\\Omega)$ set $g=(H+1)^{-1\/2}f$. For almost every $x,y\\in \\Omega$ we have\n\\begin{eqnarray}\\label{holderconteqg}\\nonumber\n|g(x)-g(y)|&\\leq&\\int_{\\mathbb{V}^*}|e^{i\\xi(x)}-e^{i\\xi(y)}||\\widehat{g_*}(\\xi)|d\\xi\\\\\\nonumber\n&\\leq&\\left(\\int_{\\mathbb{V}^*}(1+R(\\xi))|\\widehat{g_*}(\\xi)|^2d\\xi\\right)^{1\/2}\\left(\\int_{\\mathbb{V}^*}\\frac{|e^{i\\xi(x)}-e^{i\\xi(y)}|^2}{(1+R(\\xi))}d\\xi\\right)^{1\/2}\\\\\n&\\leq& c\\|f\\|_2\\left(\\int_{\\mathbb{V}^*}\\frac{|e^{i\\xi(x)}-e^{i\\xi(y)}|^2}{(1+R(\\xi))}d\\xi\\right)^{1\/2}\\leq C\\|f\\|_2|x-y|^{\\alpha}\n\\end{eqnarray}\nin view of the previous lemma. It follows from \\eqref{phiexistseq1} that for almost every $x\\in\\Omega$, there exists $\\phi(x)\\in L^2(\\Omega)$ such that\n\\begin{equation*}\n\\{(H+1)^{-1\/2}f\\}(x)=\\langle f,\\phi(x)\\rangle.\n\\end{equation*}\nBy putting $f=\\phi(x)$, another application of \\eqref{phiexistseq1} shows that $\\|\\phi(x)\\|_2\\leq C$. Moreover, \\eqref{holderconteqg} guarantees that\n\\begin{equation*}\n|\\langle f,\\phi(x)-\\phi(y)\\rangle|\\leq C\\|f\\|_2|x-y|^{\\alpha}\n\\end{equation*}\nfrom which it follows that $\\|\\phi(x)-\\phi(y)\\|_2\\leq C|x-y|^{\\alpha}$ for almost every $x,y\\in\\Omega$. 
Finally, using the estimate $\\|\\phi(x)-\\phi(y)\\|_2\\leq C|x-y|^{\\alpha}$, redefine $\\phi$ on a null set so that all of the above statements hold on all of $\\Omega$.\n\\end{proof}\n\n\\noindent Our final result of this section shows that the heat kernel $K_H$ can be analytically continued in its time variable to the open half-plane $\\mathbb{C}_+$ provided $\\mu_{\\Lambda}<1$.\n\n\\begin{theorem}\\label{thm:MainMeasurable}\nLet $Q$ satisfy Hypotheses \\ref{hyp:Garding} and \\ref{hyp:FormCompare} on $L^2(\\Omega)$ with associated self-adjoint operator $H$ and reference operator $\\Lambda$. Let $R$ be the symbol of $\\Lambda$ and $\\mu_{\\Lambda}$ be its homogeneous order. If $\\mu_{\\Lambda}<1$, there exists $K_H:\\mathbb{C}_+\\times\\Omega\\times\\Omega\\rightarrow\\mathbb{C}$ such that\n\\begin{equation*}\n\\left(e^{-zH}f\\right)(x)=\\int_{\\Omega}K_H(z,x,y)f(y)dy\n\\end{equation*}\nfor all $f\\in L^1(\\Omega)\\cap L^2(\\Omega)$. For fixed $z\\in\\mathbb{C}_+$, $K_H(z,\\cdot,\\cdot):\\Omega\\times\\Omega\\rightarrow \\mathbb{C}$ is H\\\"{o}lder continuous of order $\\alpha=(1-\\mu_{\\Lambda})\/2$. Moreover, for each $x,y\\in\\Omega$, $\\mathbb{C}_+\\ni z\\mapsto K_H(z,x,y)$ is analytic. Finally, there exist constants $C>0$ and $M\\geq 0$ such that\n\\begin{equation*}\n|K_H(t,x,y)|\\leq \\frac{C}{t^{\\mu_{\\Lambda}}}\\exp\\left(-tMR^{\\#}\\left(\\frac{x-y}{t}\\right)+Mt\\right)\n\\end{equation*}\nfor all $x,y\\in\\Omega$ and $t>0$ where $R^{\\#}$ is the Legendre-Fenchel transform of $R$.\n\\end{theorem}\n\\begin{proof}\nThe fact that $e^{-zH}$ is a bounded holomorphic semigroup ensures that $B(z)=(1+H)e^{-zH}$ is a holomorphic family of bounded operators on $L^2(\\Omega)$ for $z\\in\\mathbb{C}_+$. For $x,y\\in\\Omega$, $z\\in \\mathbb{C}_+$ define\n\\begin{equation*}\nK(z,x,y):=\\langle B(z)\\phi(y),\\phi(x)\\rangle \n\\end{equation*}\nwhere $\\phi$ is that given by the preceding lemma. It follows that $\\mathbb{C}_+\\ni z\\mapsto K(z,x,y)$ is analytic for any $x,y\\in\\Omega$. 
Now for fixed $z\\in\\mathbb{C}_+$, $K(z,\\cdot,\\cdot)$ is H\\\"{o}lder continuous of order $\\alpha$. To see this, let $|\\cdot|$ be a norm on $\\mathbb{V}$ and, with the help of Lemma \\ref{phiexistslem}, observe that for $z\\in \\mathbb{C}_+$,\n\\begin{eqnarray*}\n|K(z,x,y)-K(z,x',y')|&\\leq&|K(z,x,y)-K(z,x',y)|+|K(z,x',y)-K(z,x',y')|\\\\\n&\\leq& C\\|B(z)\\|_{2\\to 2}\\left(\\|\\phi(x)-\\phi(x')\\|_2+\\|\\phi(y)-\\phi(y')\\|_2\\right)\\\\\n&\\leq&C \\|B(z)\\|_{2\\to 2}\\left (|x-x'|^{2(\\alpha\/2)}+|y-y'|^{2(\\alpha\/2)}\\right)\\\\\n&\\leq&C\\|B(z)\\|_{2\\to 2}\\left(|x-x'|^2+|y-y'|^2\\right)^{\\alpha\/2}\n\\end{eqnarray*}\nfor all $(x,y),(x',y')\\in\\Omega\\times\\Omega$ as claimed. \n\nIt remains to show that $K(z,x,y)$ is the integral kernel of $e^{-zH}$, for then $K_H(t,\\cdot,\\cdot)=K(t,\\cdot,\\cdot)$ for $t>0$ and so the final estimate follows from Theorem \\ref{thm:Main} in view of Proposition \\ref{prop:kappa}. To this end, an appeal to Lemma \\ref{phiexistslem} shows that $(H+1)^{-1\/2}:L^2(\\Omega)\\rightarrow L^{\\infty}(\\Omega)$ is bounded and so $(H+1)^{-1\/2}:L^1(\\Omega)\\rightarrow L^2(\\Omega)$ is also bounded by duality. More is true: Using the self-adjointness of $H$ one can check that\n\\begin{equation*}\n\\phi_x(y)=\\overline{\\phi_y(x)}\n\\end{equation*}\nfor almost every $x,y\\in\\Omega$. Here and in what follows, $\\phi_w(x)$ denotes the value of the function $\\phi(x)\\in L^2(\\Omega)$ at the point $w$; in particular, in the integrals below, the variable of integration is the one which appears in the subscript. 
So, for $f\\in L^1(\\Omega)\\cap L^2(\\Omega)$,\n\\begin{eqnarray*}\n\\left(e^{-zH}f\\right)(x)&=&((H+1)^{-1\/2}B(z)(H+1)^{-1\/2}f)(x)\\\\\n&=&\\int_{\\Omega}(B(z)(H+1)^{-1\/2}f)(w)\\overline{\\phi_w(x)}dw\\\\\n&=&\\int_{\\Omega}\\langle f,\\phi(w)\\rangle\\overline{(B(z)\\phi(x))(w)}dw\\\\\n&=&\\int_{\\Omega}\\int_{\\Omega}f(y)\\overline{\\phi_y(w)}\\overline{(B(z)\\phi(x))(w)}dwdy\\\\\n&=&\\int_{\\Omega}\\int_{\\Omega}f(y)\\phi_w(y)\\overline{(B(z)\\phi(x))(w)}dwdy\\\\\n&=&\\int_{\\Omega}\\int_{\\Omega}(B(z)\\phi(y))(w)\\overline{\\phi_w(x)}dwf(y)dy\\\\\n&=&\\int_{\\Omega}K(z,x,y)f(y)dy\n\\end{eqnarray*}\nas desired.\n\\end{proof}\n\n\\section{Super-semi-elliptic operators}\n\nIn this section, we consider a class of partial differential operators to which we apply the theory of the preceding sections. We call this class of operators super-semi-elliptic operators, a term motivated by the super-elliptic operators of E. B. Davies \\cite{Davies1995} (see also \\cite{Barbatis1996,terElst1997}). Naturally, the class of super-semi-elliptic operators defined below includes the class of super-elliptic operators and our results recapture those of \\cite{Davies1995}. \\\\\n\n\\noindent Let $\\mathbf{m}=(m_1,m_2,\\dots,m_d)\\in\\mathbb{N}_+^d$, $\\mathbf{v}=\\{v_1,v_2,\\dots,v_d\\}$ be a basis of $\\mathbb{V}$ and take $E=E_{\\mathbf{v}}^{2\\mathbf{m}}\\in\\mbox{Gl}(\\mathbb{V})$ in the notation of \\eqref{eq:DefofE}. Given a non-empty open subset $\\Omega$ of $\\mathbb{V}$, consider the sesquilinear form on $L^2(\\Omega)$ given by\n\\begin{equation*}\nQ(f,g)=\\sum_{\\substack{|\\alpha:\\mathbf{m}|\\leq 1\\\\|\\beta:\\mathbf{m}|\\leq 1}}\\int_{\\Omega}a_{\\alpha,\\beta}(x)D_{\\mathbf{v}}^\\alpha f(x)\\overline{D_{\\mathbf{v}}^\\beta g(x)}\\,dx\n\\end{equation*}\nand defined initially for $f,g\\in C_0^\\infty(\\Omega)$. 
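\\noindent Before imposing conditions on the coefficients, we pause to record a simple instance of this construction, included purely for orientation. Take $\\mathbb{V}=\\mathbb{R}^2$ with the standard basis $\\mathbf{v}$, $\\mathbf{m}=(1,2)$ and $a_{\\alpha,\\beta}=a\\,\\delta_{\\alpha,\\beta}$ for a single real-valued measurable function $a$ with $3\/4\\leq a(x)\\leq C$. The multi-indices $\\alpha$ with $|\\alpha:\\mathbf{m}|=1$ are $(1,0)$ and $(0,2)$, so the principal part of $Q$ is\n\\begin{equation*}\n\\int_{\\Omega}a(x)\\left(D_{\\mathbf{v}}^{(1,0)}f\\,\\overline{D_{\\mathbf{v}}^{(1,0)}g}+D_{\\mathbf{v}}^{(0,2)}f\\,\\overline{D_{\\mathbf{v}}^{(0,2)}g}\\right)dx,\n\\end{equation*}\nwhich is comparable to the form of the semi-elliptic operator\n\\begin{equation*}\n\\Lambda=-\\frac{\\partial^2}{\\partial x_1^2}+\\frac{\\partial^4}{\\partial x_2^4}\n\\end{equation*}\nwith positive definite symbol $R(\\xi)=\\xi_1^2+\\xi_2^4$ and homogeneous order $\\mu_{\\Lambda}=|\\mathbf{1}:2\\mathbf{m}|=1\/2+1\/4=3\/4$.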
We shall (minimally) require the following conditions for the functions $a_{\\alpha,\\beta}$:\n\\begin{enumerate}[label=(C.\\arabic*)]\n\\item\\label{cond:meas} The coefficients satisfy\n\\begin{equation*}\n\\{a_{\\alpha,\\beta}(\\cdot)\\}_{\\substack{|\\alpha:\\mathbf{m}|\\leq 1\\\\|\\beta:\\mathbf{m}|\\leq 1}}\\subseteq L^{\\infty}(\\Omega)\n\\end{equation*}\nand we shall put\n\\begin{equation*}\n\\Gamma=\\max_{\\substack{|\\alpha:\\mathbf{m}|\\leq 1\\\\|\\beta:\\mathbf{m}|\\leq 1}}\\|a_{\\alpha,\\beta}\\|_{\\infty}.\n\\end{equation*}\n\\item\\label{cond:hermitian} For each $x\\in\\Omega $, the matrix\n\\begin{equation*}\n\\left\\{a_{\\alpha,\\beta}(x)\\right\\}_{\\substack{|\\alpha:\\mathbf{m}|\\leq 1\\\\|\\beta:\\mathbf{m}|\\leq 1}}\n\\end{equation*}\nis Hermitian.\n\\item\\label{cond:compare} There exists $\\{A_{\\alpha,\\beta}:|\\alpha:\\mathbf{m}|=1,|\\beta:\\mathbf{m}|=1\\}\\subseteq \\mathbb{R}$ such that\n\\begin{equation*}\n\\Lambda:=\\sum_{\\substack{|\\alpha:\\mathbf{m}|= 1\\\\|\\beta:\\mathbf{m}|= 1}}A_{\\alpha,\\beta}D_{\\mathbf{v}}^{\\alpha+\\beta}\n\\end{equation*}\nhas positive definite symbol $R$ (and so is a positive-homogeneous operator with $E\\in \\Exp(\\Lambda)$ and $\\mu_{\\Lambda}=|\\mathbf{1}:2\\mathbf{m}|$) and for some $C\\geq 1$,\n\\begin{equation*}\n\\frac{3}{4}\\sum_{\\substack{|\\alpha:\\mathbf{m}|= 1\\\\|\\beta:\\mathbf{m}|= 1}}A_{\\alpha,\\beta}\\eta_{\\alpha}\\overline{\\eta}_\\beta\\leq \\sum_{\\substack{|\\alpha:\\mathbf{m}|= 1\\\\|\\beta:\\mathbf{m}|= 1}}a_{\\alpha,\\beta}(x)\\eta_{\\alpha}\\overline{\\eta}_{\\beta}\\leq C \\sum_{\\substack{|\\alpha:\\mathbf{m}|= 1\\\\|\\beta:\\mathbf{m}|= 1}}A_{\\alpha,\\beta}\\eta_{\\alpha}\\overline{\\eta}_\\beta\n\\end{equation*}\nfor all $\\eta\\in \\oplus_{|\\alpha:\\mathbf{m}|=1}\\mathbb{C}$ and almost every $x\\in\\Omega$.\n\\end{enumerate}\nUnder the above conditions, we shall prove that the sesquilinear form $Q$ is symmetric, bounded below and therefore closable. 
Its closure is then associated to a self-adjoint operator $H$ on $L^2(\\Omega)$ formally given by\n\\begin{equation}\\label{eq:Hindivergenceform}\nH=\\sum_{\\substack{|\\alpha:\\mathbf{m}|\\leq 1\\\\|\\beta:\\mathbf{m}|\\leq 1}}D_{\\mathbf{v}}^\\beta\\left\\{a_{\\alpha,\\beta}(x)D_\\mathbf{v}^\\alpha\\right\\}.\n\\end{equation}\nWhen Conditions \\ref{cond:meas}, \\ref{cond:hermitian} and \\ref{cond:compare} are satisfied, the sesquilinear form $Q$ is said to be \\emph{$\\{2\\mathbf{m},\\mathbf{v}\\}$-super-semi-elliptic} or simply \\emph{super-semi-elliptic}. Correspondingly, we say that the associated self-adjoint operator $H$ is \\emph{$\\{2\\mathbf{m},\\mathbf{v}\\}$-super-semi-elliptic} or simply \\emph{super-semi-elliptic}. For such a sesquilinear form $Q$, we call $\\Lambda$ its associated semi-elliptic reference operator and \n\\begin{equation*}\n\\mu_{\\Lambda}=\\tr E=|\\mathbf{1}:2\\mathbf{m}|\n\\end{equation*}\nits homogeneous order. As the following proposition shows, there is a constant $C\\geq 0$ for which the sesquilinear form $Q+C$, defined by\n\\begin{equation*}\n(Q+C)(f,g)=Q(f,g)+C\\langle f,g\\rangle\n\\end{equation*}\nfor $f,g\\in \\mbox{\\rm Dom}(Q)$, satisfies Hypothesis \\ref{hyp:Garding} with positive-homogeneous reference operator $\\Lambda$.\n\n\\begin{proposition}\\label{prop:SuperSatisfiesHypothesis1}\nLet $Q$ be a $\\{2\\mathbf{m},\\mathbf{v}\\}$-super-semi-elliptic form on $L^2(\\Omega)$. Then $Q$ extends to a closed and symmetric sesquilinear form on $L^2(\\Omega)$ (also denoted by $Q$) with domain\n\\begin{equation*}\n\\mbox{\\rm Dom}(Q)=\\mbox{\\rm Dom}(Q_{\\Lambda})=W_{\\mathbf{v},0}^{\\mathbf{m},2}(\\Omega).\n\\end{equation*}\nFurther, $Q$ is bounded below by $-C$ for some constant $C\\geq 0$, and the form $Q+C$ satisfies Hypothesis \\ref{hyp:Garding} with reference operator $\\Lambda$. We denote by $H$ the self-adjoint operator associated to $Q$ (and corresponding formally with \\eqref{eq:Hindivergenceform}). 
If $H$ (and $Q$) consists only of principal terms, i.e.,\n\\begin{equation}\\label{eq:HDivergenceFormHomogeneous}\nH=\\sum_{\\substack{|\\alpha:\\mathbf{m}|=1\\\\ |\\beta:\\mathbf{m}|=1}}D_{\\mathbf{v}}^{\\beta}\\left\\{a_{\\alpha,\\beta}(x)D_{\\mathbf{v}}^{\\alpha}\\right\\},\n\\end{equation}\nthen $C$ can be taken to be $0$, and so $Q$ satisfies Hypothesis \\ref{hyp:Garding} with reference operator $\\Lambda$.\n\\end{proposition}\n\n\\begin{proof}\nFor $f\\in C_0^{\\infty}(\\Omega)$, observe that\n\\begin{multline*}\n\\frac{3}{4}Q_{\\Lambda}(f)+\\sum_{|\\alpha+\\beta:\\mathbf{m}|<2}\\int_{\\Omega}a_{\\alpha,\\beta}D_{\\mathbf{v}}^{\\alpha}f\\overline{D_{\\mathbf{v}}^{\\beta}f}dx\\\\\n=\\frac{3}{4}\\sum_{\\substack{|\\alpha:\\mathbf{m}|= 1\\\\|\\beta:\\mathbf{m}|= 1}}\\int_{\\Omega}A_{\\alpha,\\beta}D_{\\mathbf{v}}^{\\alpha}f\\overline{D_{\\mathbf{v}}^{\\beta}f}dx+\\sum_{|\\alpha+\\beta:\\mathbf{m}|<2}\\int_{\\Omega}a_{\\alpha,\\beta}(x)D_{\\mathbf{v}}^{\\alpha}f\\overline{D_{\\mathbf{v}}^{\\beta}f}dx\\\\\n\\leq \\sum_{\\substack{|\\alpha:\\mathbf{m}|\\leq 1\\\\|\\beta:\\mathbf{m}|\\leq 1}}\\int_{\\Omega}a_{\\alpha,\\beta}D_{\\mathbf{v}}^{\\alpha}f\\overline{D_{\\mathbf{v}}^{\\beta}f}dx=Q(f)\\\\\n\\leq C\\sum_{\\substack{|\\alpha:\\mathbf{m}|= 1\\\\|\\beta:\\mathbf{m}|= 1}}\\int_{\\Omega}A_{\\alpha,\\beta}D_{\\mathbf{v}}^{\\alpha}f\\overline{D_{\\mathbf{v}}^{\\beta}f}dx+\\sum_{|\\alpha+\\beta:\\mathbf{m}|<2}\\int_{\\Omega}a_{\\alpha,\\beta}D_{\\mathbf{v}}^{\\alpha}f\\overline{D_{\\mathbf{v}}^{\\beta}f}dx\\\\\n\\leq C Q_{\\Lambda}(f)+\\sum_{|\\alpha+\\beta:\\mathbf{m}|<2}\\int_{\\Omega}a_{\\alpha,\\beta}D_{\\mathbf{v}}^{\\alpha}f\\overline{D_{\\mathbf{v}}^{\\beta}f}dx.\n\\end{multline*}\nThus\n\\begin{equation}\\label{Hsatassump1propeq1}\n\\frac{3}{4}Q_{\\Lambda}(f)+L(f)\\leq Q(f)\\leq C Q_{\\Lambda}(f)+L(f)\n\\end{equation}\nwhere we have put 
\n\\begin{equation*}\nL(f)=\\sum_{|\\alpha+\\beta:\\mathbf{m}|<2}\\int_{\\Omega}a_{\\alpha,\\beta}D_{\\mathbf{v}}^{\\alpha}f\\overline{D_{\\mathbf{v}}^{\\beta}f}dx.\n\\end{equation*}\nUsing the uniform bound on the coefficients $a_{\\alpha,\\beta}$ and the Cauchy-Schwarz inequality, we see that\n\\begin{equation*}\n|L(f)|\\leq C\\sum_{|\\alpha+\\beta:\\mathbf{m}|<2}\\int_{\\Omega}|D_{\\mathbf{v}}^{\\alpha}f||D_{\\mathbf{v}}^{\\beta}f|dx\\leq C\\sum_{|\\alpha+\\beta:\\mathbf{m}|<2}\\|D_{\\mathbf{v}}^{\\alpha}f\\|_2\\|D_{\\mathbf{v}}^{\\beta}f\\|_2\n\\end{equation*}\nfor some $C>0$. For each multi-index $\\gamma$ such that $|\\gamma:\\mathbf{m}|<1$, it follows from Item \\ref{item:Scaling1} of Lemma \\ref{lem:Scaling} that\n\\begin{multline*}\n\\|D_{\\mathbf{v}}^{\\gamma}f\\|_2^2=\\int_{\\mathbb{V}^*}|\\xi^{2\\gamma}||\\widehat{f_*}(\\xi)|^2d\\xi\\leq \\int_{\\mathbb{V}^*}(\\epsilon R(\\xi)+M_{\\epsilon})|\\widehat{f_*}(\\xi)|^2d\\xi=\\epsilon Q_{\\Lambda}(f)+M_{\\epsilon}\\|f\\|_2^2\n\\end{multline*}\nwhere $\\epsilon$ can be taken arbitrarily small. Taking into account all possible multi-indices appearing in $L$, we can produce a positive constant $M$ for which\n\\begin{equation}\\label{Hsatassump1propeq2}\n|L(f)|\\leq \\frac{1}{4} Q_{\\Lambda}(f)+M\\|f\\|_2^2.\n\\end{equation}\nBy combining \\eqref{Hsatassump1propeq1} and \\eqref{Hsatassump1propeq2}, we obtain\n\\begin{eqnarray*}\n\\frac{1}{2}Q_{\\Lambda}(f)&=&\\frac{3}{4}Q_{\\Lambda}(f)-\\frac{1}{4}Q_{\\Lambda}(f)\\\\\n&\\leq & Q(f)-L(f)-\\frac{1}{4}Q_{\\Lambda}(f)\\\\\n&\\leq & Q(f)+M\\|f\\|_2^2\\\\\n&\\leq& C_1Q_{\\Lambda}(f)+C_2\\|f\\|_2^2\n\\end{eqnarray*}\nfrom which the first assertion follows immediately. In the case that $H$ consists only of its principal terms, $L$ is identically $0$ and so the remaining assertion follows from \\eqref{Hsatassump1propeq1} at once.\n\\end{proof}\n\n\\noindent To address Hypothesis \\ref{hyp:FormCompare} we need to first introduce an appropriate class $\\mathcal{E}$. 
For any integer $l\\geq \\max 2\\kappa\\mathbf{m}=\\max\\{2\\kappa m_j:j=1,2,\\dots,d\\}$, put\n\\begin{equation*}\n\\mathcal{F}_l=\\left\\{\\psi\\in C_0^{\\infty}(\\mathbb{R}): \\sup_{x\\in\\mathbb{R}}\\left|\\frac{d^j\\psi}{dx^j}(x)\\right|\\leq 1\\mbox{ for all }j=1,2,\\dots,l\\right\\}\n\\end{equation*}\nwhere $\\kappa$ is that which appears in Hypothesis \\ref{hyp:kappa}. We will take $\\mathcal{E}$ to be the set of $\\phi\\in C_{\\infty}^{\\infty}(\\mathbb{V},\\mathbb{V})$ for which there are $\\psi_1,\\psi_2,\\dots,\\psi_d\\in\\mathcal{F}_l$ such that\n\\begin{equation}\\label{definingphieq}\n(\\theta_{\\mathbf{v}}\\circ\\phi\\circ\\theta_{\\mathbf{v}}^{-1})(x_1,x_2,\\dots,x_d)=(\\psi_1(x_1),\\psi_2(x_2),\\dots,\\psi_d(x_d))\n\\end{equation}\nfor all $(x_1,x_2,\\dots,x_d)\\in\\mathbb{R}^d$.\n\n\\begin{remark}\nWhat is important for us is that the $j^{th}$-coordinate function of $\\theta_{\\mathbf{v}}\\circ\\phi\\circ\\theta_{\\mathbf{v}}^{-1}$ only depends on $x_j$ for each $j=1,2,\\dots,d$. \n\\end{remark}\n\n\\begin{remark}\nThe requirement that $l\\geq \\max 2\\kappa\\mathbf{m}$ is enough to ensure that Hypothesis \\ref{hyp:FormCompare} (and later Hypothesis \\ref{hyp:kappa}) holds uniformly for $\\phi\\in\\mathcal{E}$. This, essentially, relies on the uniform boundedness of the derivatives of $\\phi$ to sufficiently high order. 
In all statements to follow, we will assume without explicit mention that $l$ is sufficiently large to handle all derivatives under consideration.\n\\end{remark}\n\n\\begin{lemma}\\label{lem:twistedderivative}\nFor each multi-index $\\alpha>0$, there exists $C_{\\alpha}>0$ such that for all $f\\in \\mbox{\\rm Dom}(Q)$, $\\phi\\in\\mathcal{E}$ and $\\lambda\\in\\mathbb{V}^*$,\n\\begin{equation}\n|e^{-\\lambda(\\phi(x))}D_{\\mathbf{v}}^{\\alpha}(e^{\\lambda(\\phi)}f)(x)-D_{\\mathbf{v}}^{\\alpha}f(x)|\\leq C_{\\alpha}\\sum_{0<\\beta\\leq\\alpha}\\sum_{0<\\gamma\\leq\\beta}|\\lambda^{\\gamma}||D_{\\mathbf{v}}^{\\alpha-\\beta}f(x)|\n\\end{equation}\nfor almost every $x\\in\\mathbb{V}$.\n\\end{lemma}\n\\begin{proof}\nIn view of the coordinate charts $(\\mathbb{V},\\theta_{\\mathbf{v}})$ and $(\\mathbb{V}^*,\\theta_{\\mathbf{v}^*})$, we have\n\\begin{equation*}\n\\lambda(\\phi(x))=(\\lambda_1,\\lambda_2,\\dots,\\lambda_d)\\cdot(\\psi_1(x_1),\\psi_2(x_2),\\dots,\\psi_d(x_d))\n\\end{equation*}\nfor $x\\in\\mathbb{V}$ and $\\lambda\\in\\mathbb{V}^*$ where $\\theta_{\\mathbf{v}}(x)=(x_1,x_2,\\dots,x_d)$ and $\\theta_{\\mathbf{v}^*}(\\lambda)=(\\lambda_1,\\lambda_2,\\dots,\\lambda_d)$. 
So for any multi-index $\\beta>0$,\n\\begin{eqnarray*}\nD_{\\mathbf{v}}^{\\beta}(e^{\\lambda(\\phi)})&=&\\left(i\\frac{\\partial}{\\partial x_1}\\right)^{\\beta_1}\\left(i\\frac{\\partial}{\\partial x_2}\\right)^{\\beta_2}\\cdots\\left(i\\frac{\\partial}{\\partial x_d}\\right)^{\\beta_d}\\left(e^{(\\lambda_1,\\lambda_2,\\dots,\\lambda_d)\\cdot(\\psi_1,\\psi_2,\\dots,\\psi_d)}\\right)\\\\\n&=&\\left(i^{\\beta_1}\\frac{\\partial^{\\beta_1}}{\\partial x_1^{\\beta_1}}e^{\\lambda_1\\psi_1}\\right)\\left(i^{\\beta_2}\\frac{\\partial^{\\beta_2}}{\\partial x_2^{\\beta_2}}e^{\\lambda_2\\psi_2}\\right)\\cdots\\left(i^{\\beta_d}\\frac{\\partial^{\\beta_d}}{\\partial x_d^{\\beta_d}}e^{\\lambda_d\\psi_d}\\right).\n\\end{eqnarray*}\nUsing the properties we have required for each $\\psi_j$, it follows that\n\\begin{equation*}\n|e^{-\\lambda(\\phi)}D_{\\mathbf{v}}^{\\beta}(e^{\\lambda(\\phi)})|\\leq C_{\\beta}\\prod_{\\beta_j\\neq 0} \\left(\\sum_{l=1}^{\\beta_j}|\\lambda_j^{l}|\\right)\\leq C_{\\beta}\\sum_{0<\\gamma\\leq\\beta}|\\lambda^{\\gamma}|\n\\end{equation*}\nwhere $C_{\\beta}>0$ is independent of $\\phi$ and $\\lambda$. In view of the Leibniz rule,\n\\begin{multline*}\n\\left|e^{-\\lambda(\\phi(x))}D_{\\mathbf{v}}^{\\alpha}\\left(e^{\\lambda(\\phi)}f\\right)(x)-D_{\\mathbf{v}}^{\\alpha}f(x)\\right|\\\\\n=\\left|\\sum_{0<\\beta\\leq\\alpha}C_{\\alpha,\\beta}e^{-\\lambda(\\phi(x))}D_{\\mathbf{v}}^{\\beta}\\left(e^{\\lambda(\\phi)}\\right)(x)D_{\\mathbf{v}}^{\\alpha-\\beta}f(x)\\right|\\\\\n\\leq C_{\\alpha}\\sum_{0<\\beta\\leq\\alpha}\\sum_{0<\\gamma\\leq\\beta}|\\lambda^{\\gamma}||D_{\\mathbf{v}}^{\\alpha-\\beta}f(x)|\n\\end{multline*}\nfor almost every $x\\in\\mathbb{V}$ where $C_{\\alpha}$ is independent of $\\lambda$ and $\\phi$. 
The constants $C_{\\alpha,\\beta}$ appearing in the penultimate line are the binomial coefficients furnished by the Leibniz rule.\n\\end{proof}\n\n\n\\begin{proposition}\\label{prop:SuperSatisfiesHypothesis2}\nWith respect to the class $\\mathcal{E}$ above, $Q$ (and so $Q+C$) satisfies Hypothesis \\ref{hyp:FormCompare}. \n\\end{proposition}\n\\begin{proof}\nLet $x,y\\in\\mathbb{V}$ and set $(x_1,x_2,\\dots,x_d)=\\theta_{\\mathbf{v}}(x)$ and $(y_1,y_2,\\dots,y_d)=\\theta_{\\mathbf{v}}(y)$. For each pair $x_i,y_i\\in\\mathbb{R}$ there is $\\psi_i\\in\\mathcal{F}_l$ for which $\\psi_i(x_i)=x_i$ and $\\psi_i(y_i)=y_i$; such functions can be found by smoothly cutting off the identity while keeping derivatives bounded appropriately. Using this collection of $\\psi_i$'s, we define $\\phi$ as in \\eqref{definingphieq} and note that\n\\begin{eqnarray*}\n\\lefteqn{\\hspace{-0.75cm}\\phi(x)-\\phi(y)}\\\\\n\\hspace{.75cm}&=&\\theta_{\\mathbf{v}}^{-1}(\\psi_1(x_1),\\psi_2(x_2),\\dots,\\psi_d(x_d))-\\theta_{\\mathbf{v}}^{-1}(\\psi_1(y_1),\\psi_2(y_2),\\dots,\\psi_d(y_d))\\\\\n&=&\\theta_{\\mathbf{v}}^{-1}(x_1,x_2,\\dots,x_d)-\\theta_{\\mathbf{v}}^{-1}(y_1,y_2,\\dots,y_d)\\\\\n&=&x-y\n\\end{eqnarray*}\nas required.\n\nFor any $\\lambda\\in\\mathbb{V}^*$, $\\phi\\in\\mathcal{E}$ and $f\\in\\mbox{\\rm Dom}(Q)$,\n\\begin{equation*}\nQ_{\\lambda,\\phi}(f)=\\sum_{\\substack{|\\alpha:\\mathbf{m}|\\leq 1\\\\|\\beta:\\mathbf{m}|\\leq 1}}\\int_{\\Omega}a_{\\alpha,\\beta}(x)D_{\\mathbf{v}}^{\\alpha}(e^{-\\lambda(\\phi)}f)(x)\\overline{D_{\\mathbf{v}}^{\\beta}(e^{\\lambda(\\phi)}f)(x)}dx.\n\\end{equation*}\nUsing the uniform boundedness of the collection $\\{a_{\\alpha,\\beta}\\}$, we have\n\\begin{multline*}\n|Q_{\\lambda,\\phi}(f)-Q(f)|\\\\\n=\\Big|\\sum_{\\substack{0<|\\alpha:\\mathbf{m}|\\leq 1\\\\0<|\\beta:\\mathbf{m}|\\leq 
1}}\\int_{\\Omega}a_{\\alpha,\\beta}\\Big[e^{\\lambda(\\phi)}D_{\\mathbf{v}}^{\\alpha}(e^{-\\lambda(\\phi)}f)\\overline{e^{-\\lambda(\\phi)}D_{\\mathbf{v}}^{\\beta}(e^{\\lambda(\\phi)}f)}-D_{\\mathbf{v}}^{\\alpha}f\\overline{D_{\\mathbf{v}}^{\\beta}f}\\Big]dx\\Big|\\\\\n=\\Big|\\sum_{\\substack{0<|\\alpha:\\mathbf{m}|\\leq 1\\\\0<|\\beta:\\mathbf{m}|\\leq 1}}\\int_{\\Omega}a_{\\alpha,\\beta}\\Big[\\left(e^{\\lambda(\\phi)}D_{\\mathbf{v}}^{\\alpha}(e^{-\\lambda(\\phi)}f)-D_{\\mathbf{v}}^{\\alpha}f\\right)\\overline{e^{-\\lambda(\\phi)}D_{\\mathbf{v}}^{\\beta}(e^{\\lambda(\\phi)}f)}\\\\\n+D_{\\mathbf{v}}^{\\alpha}f\\left(\\overline{e^{-\\lambda(\\phi)}D_{\\mathbf{v}}^{\\beta}(e^{\\lambda(\\phi)}f)-D_{\\mathbf{v}}^{\\beta}f}\\right)\\Big]dx\\Big|\\\\\n\\leq C\\sum_{\\substack{0<|\\alpha:\\mathbf{m}|\\leq 1\\\\0<|\\beta:\\mathbf{m}|\\leq 1}}\\int_{\\Omega}|e^{\\lambda(\\phi)}D_{\\mathbf{v}}^{\\alpha}(e^{-\\lambda(\\phi)}f)-D_{\\mathbf{v}}^{\\alpha}f||e^{-\\lambda(\\phi)}D_{\\mathbf{v}}^{\\beta}(e^{\\lambda(\\phi)}f)|\\\\\n+|D_{\\mathbf{v}}^{\\alpha}f||e^{-\\lambda(\\phi)}D_{\\mathbf{v}}^{\\beta}(e^{\\lambda(\\phi)}f)-D_{\\mathbf{v}}^{\\beta}f|dx\\\\\n\\leq C\\sum_{\\substack{0<|\\alpha:\\mathbf{m}|\\leq 1\\\\0<|\\beta:\\mathbf{m}|\\leq 1}}\\int_{\\Omega}|e^{\\lambda(\\phi)}D_{\\mathbf{v}}^{\\alpha}(e^{-\\lambda(\\phi)}f)-D_{\\mathbf{v}}^{\\alpha}f||e^{-\\lambda(\\phi)}D_{\\mathbf{v}}^{\\beta}(e^{\\lambda(\\phi)}f)-D_{\\mathbf{v}}^{\\beta}f|\\\\\n+|D_{\\mathbf{v}}^{\\alpha}f||e^{-\\lambda(\\phi)}D_{\\mathbf{v}}^{\\beta}(e^{\\lambda(\\phi)}f)-D_{\\mathbf{v}}^{\\beta}f|dx.\n\\end{multline*}\nWith the help of Lemma \\ref{lem:twistedderivative},\n\\begin{multline*}\n|Q_{\\lambda,\\phi}(f)-Q(f)|\\\\\n\\leq C\\sum_{\\substack{0<|\\alpha:\\mathbf{m}|\n\\leq 1\\\\0<|\\beta:\\mathbf{m}| \\leq 
1}}\\sum_{\\substack{0<\\gamma_{\\alpha}\\leq\\alpha\\\\0<\\gamma_{\\beta}\\leq\\beta}}\\sum_{\\substack{0<\\eta_{\\alpha}\\leq\\gamma_{\\alpha}\\\\0<\\eta_{\\beta}\\leq\\gamma_{\\beta}}}\\int_{\\Omega}|\\lambda^{\\eta_{\\alpha}}||D_{\\mathbf{v}}^{\\alpha-\\gamma_{\\alpha}}f||\\lambda^{\\eta_{\\beta}}||D_{\\mathbf{v}}^{\\beta-\\gamma_{\\beta}}f|dx\\\\\n+C\\sum_{\\substack{0<|\\alpha:\\mathbf{m}|\\leq 1\\\\0<|\\beta:\\mathbf{m}|\\leq 1}}\\sum_{0<\\gamma_{\\beta}\\leq\\beta}\\sum_{0<\\eta_{\\beta}\\leq\\gamma_{\\beta}}\\int_{\\Omega}|D_{\\mathbf{v}}^{\\alpha}f||\\lambda^{\\eta_{\\beta}}||D_{\\mathbf{v}}^{\\beta-\\gamma_{\\beta}}f|dx\\\\\n\\leq C\\sum_{\\substack{0<|\\alpha:\\mathbf{m}|\\leq 1\\\\0<|\\beta:\\mathbf{m}|\\leq 1}}\\sum_{\\substack{0\\leq\\gamma_{\\alpha}\\leq\\alpha\\\\0<\\gamma_{\\beta}\\leq\\beta}}\\sum_{\\substack{0\\leq\\eta_{\\alpha}\\leq\\gamma_{\\alpha}\\\\0<\\eta_{\\beta}\\leq\\gamma_{\\beta}}}\\int_{\\Omega}|\\lambda^{\\eta_{\\alpha}}D_{\\mathbf{v}}^{\\alpha-\\gamma_{\\alpha}}f||\\lambda^{\\eta_{\\beta}}D_{\\mathbf{v}}^{\\beta-\\gamma_{\\beta}}f| dx\n\\end{multline*}\nwhere $C>0$ is independent of $\\phi,\\lambda$ and $f$. Thus by the Cauchy-Schwarz inequality,\n\\begin{equation}\\label{Hsatassumpprop2eq}\n|Q_{\\lambda,\\phi}(f)-Q(f)|\\leq C\\sum_{\\substack{0<|\\alpha:\\mathbf{m}|\\leq 1\\\\0<|\\beta:\\mathbf{m}|\\leq 1}}\\sum_{\\substack{0\\leq\\gamma_{\\alpha}\\leq\\alpha\\\\0<\\gamma_{\\beta}\\leq\\beta}}\\sum_{\\substack{0\\leq\\eta_{\\alpha}\\leq\\gamma_{\\alpha}\\\\0<\\eta_{\\beta}\\leq\\gamma_{\\beta}}}\\|\\lambda^{\\eta_{\\alpha}}D_{\\mathbf{v}}^{\\alpha-\\gamma_{\\alpha}}f\\|_2\\|\\lambda^{\\eta_{\\beta}}D_{\\mathbf{v}}^{\\beta-\\gamma_{\\beta}}f\\|_2.\n\\end{equation}\nIt is important to note that, since $\\gamma_{\\beta}>0$ in every summand, no summand has $|\\beta-\\gamma_{\\beta}:\\mathbf{m}|=1$. 
In view of Lemma \\ref{lem:Scaling} and Proposition \\ref{prop:SuperSatisfiesHypothesis1} it follows that for all such $\\beta$, $\\gamma_{\\beta}$ and $\\eta_{\\beta}$,\n\\begin{eqnarray*}\n\\|\\lambda^{\\eta_{\\beta}}D_{\\mathbf{v}}^{\\beta-\\gamma_{\\beta}}f\\|_2^2&=&\\int_{\\mathbb{V}^*}|\\lambda^{2\\eta_{\\beta}}\\xi^{2(\\beta-\\gamma_{\\beta})}||\\widehat{f_*}(\\xi)|^2d\\xi\\\\\n&\\leq& \\epsilon \\int_{\\mathbb{V}^*}R(\\xi)|\\widehat{f_*}(\\xi)|^2d\\xi+M_{\\epsilon}(1+R(\\lambda))\\|f\\|_2^2\\\\\n&\\leq& \\epsilon Q_{\\Lambda}(f)+M_{\\epsilon}(1+R(\\lambda))\\|f\\|_2^2\\\\\n&\\leq& \\epsilon Q(f)+M(1+R(\\lambda))\\|f\\|_2^2\n\\end{eqnarray*}\nwhere $\\epsilon$ can be taken arbitrarily small. For all admissible $\\alpha$, $\\gamma_{\\alpha}$ and $\\eta_{\\alpha}$, a similar calculation (making use of Lemma \\ref{lem:Scaling} and Proposition \\ref{prop:SuperSatisfiesHypothesis1}) shows that\n\\begin{equation*}\n\\|\\lambda^{\\eta_{\\alpha}}D_{\\mathbf{v}}^{\\alpha-\\gamma_{\\alpha}}f\\|_2^2\\leq M(Q(f)+(1+R(\\lambda))\\|f\\|_2^2)\n\\end{equation*}\nfor some $M>0$. Thus for any $\\epsilon>0$, each summand in \\eqref{Hsatassumpprop2eq} satisfies\n\\begin{eqnarray*}\n\\lefteqn{\\|\\lambda^{\\eta_{\\alpha}}D_{\\mathbf{v}}^{\\alpha-\\gamma_{\\alpha}}f\\|_2\\|\\lambda^{\\eta_{\\beta}}D_{\\mathbf{v}}^{\\beta-\\gamma_{\\beta}}f\\|_2}\\\\\n&\\leq& (M(Q(f)+(1+R(\\lambda))\\|f\\|_2^2))^{1\/2}(\\epsilon Q(f)+M(1+R(\\lambda))\\|f\\|_2^2)^{1\/2}\\\\ \n&\\leq& (\\epsilon M)^{1\/2} Q(f)+\\frac{M^{3\/2}}{\\epsilon^{1\/2}}(1+R(\\lambda))\\|f\\|_2^2.\n\\end{eqnarray*}\nThe result now follows by choosing $\\epsilon$ appropriately and combining these estimates. \n\\end{proof}\n\n\\subsection{When $\\mu_{\\Lambda}=|\\mathbf{1}:2\\mathbf{m}|<1$}\n\nLet $Q$ be a $\\{2\\mathbf{m},\\mathbf{v}\\}$-super-semi-elliptic form on $L^2(\\Omega)$ with reference operator $\\Lambda$ (with symbol $R$ and homogeneous order $\\mu_{\\Lambda}$) and associated super-semi-elliptic operator $H$. 
Throughout this subsection we investigate the case in which \n\\begin{equation*}\n\\mu_{\\Lambda}=|\\mathbf{1}:2\\mathbf{m}|=\\sum_{j=1}^d \\frac{1}{2m_j}<1.\n\\end{equation*}\nIn view of Propositions \\ref{prop:kappa}, \\ref{prop:SuperSatisfiesHypothesis1} and \\ref{prop:SuperSatisfiesHypothesis2}, the sesquilinear form $Q+C$ satisfies Hypotheses \\ref{hyp:Garding}, \\ref{hyp:FormCompare} and \\ref{hyp:kappa}. Upon noting that the semigroup generated by $-H$ and that generated by $-(H+C)$ are related by $e^{-t(H+C)}=e^{-tC}e^{-tH}$, the results of Section \\ref{sec:KernelRegularity} immediately give us the following proposition.\n\n\\begin{proposition}\\label{prop:Super}\nLet $Q$ be a $\\{2\\mathbf{m},\\mathbf{v}\\}$-super-semi-elliptic form on $L^2(\\Omega)$ with reference operator $\\Lambda$ and associated self-adjoint super-semi-elliptic operator $H$. Let $R$ be the symbol of $\\Lambda$ and let $\\mu_{\\Lambda}=|\\mathbf{1}:2\\mathbf{m}|$ be its homogeneous order. If $\\mu_{\\Lambda}<1$, then the semigroup $T_z=e^{-zH}$ has integral kernel $K_H:\\mathbb{C}_+\\times\\Omega\\times\\Omega\\rightarrow\\mathbb{C}$ for which\n\\begin{equation*}\n\\left(e^{-zH}f\\right)(x)=\\int_\\Omega K_H(z,x,y)f(y)\\,dy\n\\end{equation*}\nfor all $f\\in L^1(\\Omega)\\cap L^2(\\Omega)$. For fixed $z$, $K_H(z,\\cdot,\\cdot)$ is jointly H\\\"{o}lder continuous of order $\\alpha=(1-\\mu_{\\Lambda})\/2$. For fixed $x,y\\in\\Omega$, $z\\mapsto K_H(z,x,y)$ is analytic on $\\mathbb{C}_+$. 
Finally, there are constants $C>0$ and $M\\geq 0$ for which\n\\begin{equation*}\n|K_H(t,x,y)|\\leq \\frac{C}{t^{\\mu_{\\Lambda}}}\\exp\\left(-tMR^{\\#}\\left(\\frac{x-y}{t}\\right)+Mt\\right)\n\\end{equation*}\nfor all $x,y\\in\\Omega$ and $t>0$ where $R^{\\#}$ is the Legendre-Fenchel transform of $R$.\n\\end{proposition}\n\n\\noindent Let us now focus on the special case in which $\\Omega=\\mathbb{V}$ and the super-semi-elliptic form $Q$ (and $H$) consist only of principal terms, i.e.,\n\\begin{equation*}\nQ(f,g)=\\sum_{\\substack{|\\alpha:\\mathbf{m}|=1\\\\|\\beta:\\mathbf{m}|=1}}\\int_{\\mathbb{V}}a_{\\alpha,\\beta}(x)D_{\\mathbf{v}}^\\alpha f(x)\\overline{D_{\\mathbf{v}}^{\\beta}g(x)}\\,dx,\n\\end{equation*}\nor, equivalently, $H$ is the form \\eqref{eq:HDivergenceFormHomogeneous}. We will continue to assume that $\\mu_{\\Lambda}=|\\mathbf{1}:2\\mathbf{m}|<1$. In the notation of Section \\ref{sec:HHomogeneous}, we observe that\n\\begin{eqnarray*}\nQ^s(f,g)&=&s^{-1}Q(U_s f,U_s g)\\\\\n&=&s^{-1}\\sum_{\\substack{|\\alpha:\\mathbf{m}|=1\\\\|\\beta:\\mathbf{m}|=1}}\\int_{\\mathbb{V}}a_{\\alpha,\\beta}(x)D_{\\mathbf{v}}^{\\alpha}(s^{\\mu_{\\Lambda}\/2}f_s)(x)\\overline{D_{\\mathbf{v}}^{\\beta}(s^{\\mu_{\\Lambda}\/2}g_s)(x)}\\,dx\\\\\n&=&s^{-1}s^{\\mu_{\\Lambda}}\\sum_{\\substack{|\\alpha:\\mathbf{m}|=1\\\\|\\beta:\\mathbf{m}|=1}}\\int_{\\mathbb{V}}a_{\\alpha,\\beta}(x)D_{\\mathbf{v}}^{\\alpha}(f_s)(x)\\overline{D_{\\mathbf{v}}^{\\beta}(g_s)(x)}\\,dx\n\\end{eqnarray*} \nfor $f,g\\in \\mbox{\\rm Dom}(Q)$ where $f_s$ (and likewise $g_s$) is defined by $f_s(x)=f(s^Ex)$ for $s>0$ and $x\\in\\mathbb{V}$. 
Noting the definition of $E$ at the beginning of the section, for each multi-index $\\gamma$ such that $|\\gamma:\\mathbf{m}|=1$,\n\\begin{equation*}\nD_{\\mathbf{v}}^{\\gamma}f_s(x)=s^{|\\gamma:2\\mathbf{m}|}(D_{\\mathbf{v}}^{\\gamma}f)(s^Ex)=s^{1\/2}(D_{\\mathbf{v}}^{\\gamma}f)(s^Ex).\n\\end{equation*}\nTherefore, by a change of variables, we obtain\n\\begin{eqnarray*}\nQ^s(f,g)&=&s^{\\mu_{\\Lambda}}\\sum_{\\substack{|\\alpha:\\mathbf{m}|=1\\\\|\\beta:\\mathbf{m}|=1}}\\int_{\\mathbb{V}}a_{\\alpha,\\beta}(x)D_{\\mathbf{v}}^{\\alpha}f(s^Ex)\\overline{D_{\\mathbf{v}}^{\\beta}g(s^Ex)}\\,dx\\\\\n&=&\\sum_{\\substack{|\\alpha:\\mathbf{m}|=1\\\\|\\beta:\\mathbf{m}|=1}}\\int_{\\mathbb{V}}a_{\\alpha,\\beta}(s^{-E}x)D_{\\mathbf{v}}^{\\alpha}f(x)\\overline{D_{\\mathbf{v}}^{\\beta}g(x)}\\,dx\n\\end{eqnarray*}\nfor all $f,g\\in \\mbox{\\rm Dom}(Q)$. Under our assumption that $Q$ is super-semi-elliptic, all estimates concerning $a_{\\alpha,\\beta}$ hold uniformly for $x\\in\\mathbb{V}$, and we may therefore conclude that the associated self-adjoint operator $H$ is homogeneous in the sense of Section \\ref{sec:HHomogeneous}. Consequently, an appeal to Theorem \\ref{thm:HHomogeneous} guarantees that the heat kernel $K_H$ satisfies\n\\begin{equation*}\n|K_H(t,x,y)|\\leq \\frac{C}{t^{\\mu_{\\Lambda}}}\\exp\\left(-tMR^{\\#}\\left(\\frac{x-y}{t}\\right)\\right)\n\\end{equation*}\nfor all $t>0$ and $x,y\\in \\mathbb{V}$ where $C$ and $M$ are positive constants. \\\\\n\n\\subsection{When $\\mu_{\\Lambda}=|\\mathbf{1}:2\\mathbf{m}|\\geq 1$}\n\n\\noindent In the last subsection, we deduced heat kernel estimates for $\\{2\\mathbf{m},\\mathbf{v}\\}$-super-semi-elliptic operators in the case that $\\mu_{\\Lambda}=|\\mathbf{1}:2\\mathbf{m}|<1$; in this setting Hypothesis \\ref{hyp:kappa} was met trivially by virtue of Proposition \\ref{prop:kappa}. 
In general, we expect these results to also be valid in the case that $\\mu_{\\Lambda}=|\\mathbf{1}:2\\mathbf{m}|=1$ (by the methods of \\cite{Auscher1998} and \\cite{terElst1997}); however, we do not pursue this here. As discussed in the introduction, without additional assumptions on the regularity of the coefficients, these results cannot be pushed into the realm in which $\\mu_{\\Lambda}=|\\mathbf{1}:2\\mathbf{m}|>1$. For an account of the relevant counterexamples which pertain to elliptic operators with measurable coefficients, we encourage the reader to see \\cite{Davies1997a,deGiorgi1968,Mazya1968}; further discussion can be found in Section 4.1 of \\cite{Davies1997}.\\\\\n\n\\noindent We here investigate the situation in which a $\\{2\\mathbf{m},\\mathbf{v}\\}$-super-semi-elliptic form $Q$ has $\\mu_{\\Lambda}=|\\mathbf{1}:2\\mathbf{m}|$ unrestricted (allowing for $\\mu_{\\Lambda}\\geq 1$). In this situation, it is possible that $\\kappa>1$ and so Hypothesis \\ref{hyp:kappa} does not, in general, follow from Proposition \\ref{prop:kappa}. We must therefore verify the hypothesis directly. In line with the remarks of the previous paragraph, we shall make some additional (strong) assumptions concerning the regularity of the coefficients $\\{a_{\\alpha,\\beta}\\}$ under which the verification of Hypothesis \\ref{hyp:kappa} is relatively straightforward. \n\nTo this end, let $Q$ be a $\\{2\\mathbf{m},\\mathbf{v}\\}$-super-semi-elliptic form on $L^2(\\mathbb{V})$ with coefficients $\\{a_{\\alpha,\\beta}\\}$. 
In addition to Conditions \\ref{cond:meas}, \\ref{cond:hermitian} and \\ref{cond:compare}, we ask that the following two conditions are satisfied:\n\\begin{enumerate}[label=(C.\\arabic*)]\n\\setcounter{enumi}{3}\n\\item\\label{cond:smooth} \\begin{equation*}\\{a_{\\alpha,\\beta}(\\cdot)\\}_{\\substack{|\\alpha:\\mathbf{m}|\\leq 1\\\\ |\\beta:\\mathbf{m}|\\leq 1}}\\subseteq C^{\\infty}(\\mathbb{V})\n\\end{equation*}\n\\item\\label{cond:constanttopsymbol} For each pair of multi-indices $\\alpha$ and $\\beta$ for which $|\\alpha:\\mathbf{m}|=|\\beta:\\mathbf{m}|=1$, the function $a_{\\alpha,\\beta}(\\cdot)$ is identically constant.\n\\end{enumerate}\n\n\\noindent In view of Conditions \\ref{cond:compare} and \\ref{cond:constanttopsymbol}, we may assume without loss of generality that the principal part of $Q$ is given by $\\Lambda$. In other words, we assume that $a_{\\alpha,\\beta}=A_{\\alpha,\\beta}\\in\\mathbb{R}$ for each $\\alpha$ and $\\beta$ for which $|\\alpha:\\mathbf{m}|=|\\beta:\\mathbf{m}|=1$. This allows us to write\n\\begin{equation*}\nQ(f,g)=\\langle \\Lambda f,g\\rangle+L(f,g)\n\\end{equation*}\nfor all $f,g\\in C_0^{\\infty}(\\mathbb{V})$ where\n\\begin{equation*}\n\\Lambda=\\sum_{\\substack{|\\alpha:\\mathbf{m}|=1\\\\|\\beta:\\mathbf{m}|=1}}A_{\\alpha,\\beta}D_{\\mathbf{v}}^{\\alpha+\\beta}\n\\end{equation*}\nand\n\\begin{equation*}\nL(f,g)=\\sum_{|\\alpha+\\beta:\\mathbf{m}|<2}\\int_{\\mathbb{V}}a_{\\alpha,\\beta}(x)D_{\\mathbf{v}}^{\\alpha}f(x)\\overline{D_{\\mathbf{v}}^{\\beta}g(x)}\\,dx.\n\\end{equation*}\nFurthermore, it is easy to see that Condition \\ref{cond:smooth} ensures that the formal expression \\eqref{eq:Hindivergenceform} makes sense. 
More precisely, if we define the differential operator $H_0$ by\n\\begin{equation*}\nH_0f(x)=\\Lambda f(x)+\\sum_{|\\alpha+\\beta:\\mathbf{m}|<2}D_{\\mathbf{v}}^{\\beta}\\left\\{a_{\\alpha,\\beta}(x)D_{\\mathbf{v}}^{\\alpha} f(x)\\right\\}\n\\end{equation*}\nfor $f\\in \\mbox{\\rm Dom}(H_0):=C_0^{\\infty}(\\mathbb{V})$, then $H_0f=Hf$ whenever $f\\in C_0^{\\infty}(\\mathbb{V})$. More is true.\n\\begin{proposition}\\label{prop:ESA}\nAssume Conditions \\ref{cond:meas}-\\ref{cond:constanttopsymbol} hold. For each integer $\\kappa\\geq 1$, define the linear differential operator $H_0^{\\kappa}$ by\n\\begin{equation*}\nH_0^{\\kappa}f=(H_0)^{\\kappa} f\n\\end{equation*}\nwith domain $\\mbox{\\rm Dom}(H_0^{\\kappa})=C_0^\\infty(\\mathbb{V})$. Then the following properties hold:\n\\begin{enumerate}\n\\item There are smooth functions $b_{\\alpha,\\beta}=\\overline{b_{\\beta,\\alpha}}$ for $|\\alpha+\\beta:\\mathbf{m}|<2\\kappa$ and real constants $B_{\\alpha,\\beta}=B_{\\beta,\\alpha}$ for $|\\alpha:\\mathbf{m}|=|\\beta:\\mathbf{m}|=\\kappa$ for which\n\\begin{eqnarray}\\label{eq:HkappaFormal}\\nonumber\nH_0^{\\kappa}f&=&\\Lambda^{\\kappa}f+\\sum_{|\\alpha+\\beta:\\mathbf{m}|<2\\kappa}D_{\\mathbf{v}}^{\\beta}\\left\\{b_{\\alpha,\\beta}D_{\\mathbf{v}}^{\\alpha}f\\right\\}\\\\\n&=&\\sum_{\\substack{|\\alpha:\\mathbf{m}|=\\kappa\\\\|\\beta:\\mathbf{m}|=\\kappa}}D_{\\mathbf{v}}^{\\beta}\\left\\{B_{\\alpha,\\beta}D_{\\mathbf{v}}^{\\alpha}f\\right\\}+\\sum_{|\\alpha+\\beta:\\mathbf{m}|<2\\kappa}D_{\\mathbf{v}}^{\\beta}\\left\\{b_{\\alpha,\\beta}D_{\\mathbf{v}}^{\\alpha}f\\right\\}\n\\end{eqnarray}\nfor $f\\in C_0^{\\infty}(\\mathbb{V})$.\n\\item $H_0^{\\kappa}$ with initial domain $\\mbox{\\rm Dom}(H_0^{\\kappa})=C_0^{\\infty}(\\mathbb{V})$ is essentially self-adjoint; its closure is precisely the self-adjoint operator $H^{\\kappa}$ (defined as the $\\kappa$th power of $H$) and\n\\begin{equation*}\n\\mbox{\\rm Dom}(H^{\\kappa})=W_{\\mathbf{v}}^{\\mathbf{2\\kappa 
\\mathbf{m}},2}(\\mathbb{V}).\n\\end{equation*}\n\\end{enumerate}\n\\end{proposition}\n\\begin{proof}\nThe first statement follows by direct calculation (Leibniz's rule) keeping in mind that $a_{\\alpha,\\beta}(x)$ are bounded smooth functions forming a Hermitian matrix at each $x\\in\\mathbb{V}$. Using integration by parts and the definition of $Q$, for $g\\in \\mbox{\\rm Dom}(H_0^{\\kappa})$, we find that\n\\begin{equation*}\n\\langle H^{\\kappa} f,g\\rangle=Q(H^{\\kappa-1}f,g)=\\langle H^{\\kappa-1}f,H_0 g\\rangle=\\cdots =\\langle f,H_0^{\\kappa} g\\rangle\n\\end{equation*}\nfor all $f\\in \\mbox{\\rm Dom}(H^{\\kappa})$. In view of the self-adjointness of $H^{\\kappa}$, this calculation guarantees that $g\\in \\mbox{\\rm Dom}((H^{\\kappa})^*)=\\mbox{\\rm Dom}(H^{\\kappa})$ and $H^{\\kappa}g=H_0^{\\kappa}g$ and therefore $H^{\\kappa}$ is a self-adjoint extension of the symmetric operator $H_0^{\\kappa}$. It remains to show that this operator is essentially self-adjoint and to characterize the domain of $H^{\\kappa}$. \n\nIn view of the first statement, we write\n\\begin{equation*}\nH_0^{\\kappa}=\\Lambda^{\\kappa}+\\Psi\n\\end{equation*}\nwhere $\\Psi$ is the symmetric operator\n\\begin{equation*}\n\\Psi=\\sum_{|\\alpha+\\beta:\\kappa\\mathbf{m}|<2}D_{\\mathbf{v}}^{\\beta}\\left\\{b_{\\alpha,\\beta}D_{\\mathbf{v}}^{\\alpha}\\right\\}\n\\end{equation*}\nwith domain $\\mbox{\\rm Dom}(\\Psi)=\\mbox{\\rm Dom}(H_0^{\\kappa})$. It is straightforward to see that $\\Lambda^{\\kappa}$ is a positive-homogeneous operator with symbol $R(\\xi)^{\\kappa}$. Further, observe that $E':=E_{\\mathbf{v}}^{2\\kappa\\mathbf{m}}\\in \\Exp(\\Lambda^{\\kappa})$ and so the homogeneous order of $\\Lambda^{\\kappa}$ is $\\mu_{\\Lambda^{\\kappa}}=\\tr E'=\\mu_{\\Lambda}\/\\kappa$. 
An appeal to Proposition \\ref{prop:esa} guarantees that $\\Lambda^\\kappa$, with initial domain $C_0^{\\infty}(\\mathbb{V})$, is essentially self-adjoint and the domain of its closure is $W_{\\mathbf{v}}^{2\\kappa\\mathbf{m},2}(\\mathbb{V})$.\n\nBy arguments analogous to those given in the proof of Proposition \\ref{prop:SuperSatisfiesHypothesis1}, using the fact that $|\\alpha+\\beta:\\kappa\\mathbf{m}|<2$ for all multi-indices appearing in $\\Psi$ together with Lemma \\ref{lem:Scaling}, we find that for any $\\epsilon>0$, there is $M_{\\epsilon}\\geq 1$ for which\n\\begin{equation*}\n\\|\\Psi f\\|_2\\leq \\epsilon \\|\\Lambda^{\\kappa} f\\|_2+M_{\\epsilon}\\|f\\|_2\n\\end{equation*}\nfor all $f\\in C_0^{\\infty}(\\mathbb{V})$. In view of this estimate, an appeal to Lemma 7.4 of \\cite{Schechter1986} ensures that $H_0^{\\kappa}=\\Lambda^{\\kappa}+\\Psi$ is essentially self-adjoint and its closure has domain $W_{\\mathbf{v}}^{2\\kappa\\mathbf{m},2}(\\mathbb{V})$. Since $H^{\\kappa}$ is a self-adjoint extension of $H_0^{\\kappa}$, it must coincide with this unique self-adjoint extension, and we may conclude at once that\n\\begin{equation*}\n\\mbox{\\rm Dom}(H^{\\kappa})=W_{\\mathbf{v}}^{2\\kappa\\mathbf{m},2}(\\mathbb{V}).\n\\end{equation*}\n\\end{proof}\n\n\\noindent The following lemma contains the essential estimate needed to verify Hypothesis \\ref{hyp:kappa} for a super-semi-elliptic operator whose coefficients satisfy Conditions \\ref{cond:meas}-\\ref{cond:constanttopsymbol}.\n\n\n\\begin{lemma}\nAssume Conditions \\ref{cond:meas}-\\ref{cond:constanttopsymbol} hold and let $\\kappa\\geq 1$ be an integer. 
Then, for any $\\epsilon>0$, there is a constant $M_{\\epsilon}\\geq 1$ for which\n\\begin{equation*}\n|\\langle H_{\\lambda,\\phi}^{\\kappa}f,f\\rangle-Q_{\\Lambda^{\\kappa}}(f)|\\leq \\epsilon Q_{\\Lambda^{\\kappa}}(f)+M_{\\epsilon}(1+R(\\lambda))^{\\kappa}\\|f\\|_2^2\n\\end{equation*}\nfor all $\\lambda\\in\\mathbb{V}^*$, $\\phi\\in\\mathcal{E}$ and $f\\in C_0^{\\infty}(\\mathbb{V})$.\n\\end{lemma}\n\\begin{proof}\nIt follows from the previous proposition that, for $\\lambda\\in\\mathbb{V}^*$, $\\phi\\in \\mathcal{E}$ and $f\\in C_0^{\\infty}(\\mathbb{V})$,\n\\begin{equation*}\nH_{\\lambda,\\phi}^{\\kappa} f=(H_0^{\\kappa})_{\\lambda,\\phi}f=e^{\\lambda(\\phi)}H_0^{\\kappa}(e^{-\\lambda(\\phi)}f)\n\\end{equation*}\nwhere $H_0^{\\kappa}f$ is given by \\eqref{eq:HkappaFormal}. With this in mind, integration by parts gives\n\\begin{equation*}\n\\langle H_{\\lambda,\\phi}^{\\kappa}f,f\\rangle-Q_{\\Lambda^{\\kappa}}(f)=U(\\lambda,\\phi,f)+W(\\lambda,\\phi,f)\n\\end{equation*}\nwhere\n\\begin{eqnarray*}\nU(\\lambda,\\phi,f)&=&\\sum_{\\substack{|\\alpha:\\mathbf{m}|=\\kappa\\\\|\\beta:\\mathbf{m}|=\\kappa}}B_{\\alpha,\\beta}\\int_{\\mathbb{V}}\\Big[\\left(e^{\\lambda(\\phi)}D_{\\mathbf{v}}^{\\alpha}\\left(e^{-\\lambda(\\phi)}f\\right)\\right)\\overline{\\left(e^{-\\lambda(\\phi)}D_{\\mathbf{v}}^{\\beta}\\left(e^{\\lambda(\\phi)}f\\right)\\right)}\\\\\n&&\\hspace{7cm}-D_{\\mathbf{v}}^\\alpha f\\overline{D_{\\mathbf{v}}^{\\beta} f}\\,\\Big]\\,dx\n\\end{eqnarray*}\nand\n\\begin{equation*}\nW(\\lambda,\\phi,f)=\\sum_{|\\alpha+\\beta:\\mathbf{m}|<2\\kappa}\\int_{\\mathbb{V}}b_{\\alpha,\\beta}\\left(e^{\\lambda(\\phi)}D_{\\mathbf{v}}^{\\alpha}\\left(e^{-\\lambda(\\phi)}f\\right)\\right)\\overline{\\left(e^{-\\lambda(\\phi)}D_{\\mathbf{v}}^{\\beta}\\left(e^{\\lambda(\\phi)}f\\right)\\right)}\\,dx\n\\end{equation*}\nfor $\\lambda\\in\\mathbb{V}^*$, $\\phi\\in\\mathcal{E}$ and $f\\in C_0^{\\infty}(\\mathbb{V})$. 
Just as we did in the proof of Proposition \\ref{prop:SuperSatisfiesHypothesis2}, we write\n\\begin{multline*}\n|U(\\lambda,\\phi,f)|\\\\\n=\\Big|\\sum_{\\substack{|\\alpha:\\mathbf{m}|=\\kappa\\\\|\\beta:\\mathbf{m}|=\\kappa}}B_{\\alpha,\\beta}\\int_{\\mathbb{V}}\\Big[\\left(e^{\\lambda(\\phi)}D_{\\mathbf{v}}^{\\alpha}(e^{-\\lambda(\\phi)}f)-D_{\\mathbf{v}}^{\\alpha}f\\right)\\overline{e^{-\\lambda(\\phi)}D_{\\mathbf{v}}^{\\beta}(e^{\\lambda(\\phi)}f)}\\\\\n+D_{\\mathbf{v}}^{\\alpha}f\\left(\\overline{e^{-\\lambda(\\phi)}D_{\\mathbf{v}}^{\\beta}(e^{\\lambda(\\phi)}f)-D_{\\mathbf{v}}^{\\beta}f}\\right)\\Big]dx\\Big|\\\\\n\\leq C\\sum_{\\substack{|\\alpha:\\mathbf{m}|=\\kappa\\\\|\\beta:\\mathbf{m}|=\\kappa}}\\int_{\\mathbb{V}}|e^{\\lambda(\\phi)}D_{\\mathbf{v}}^{\\alpha}(e^{-\\lambda(\\phi)}f)-D_{\\mathbf{v}}^{\\alpha}f||e^{-\\lambda(\\phi)}D_{\\mathbf{v}}^{\\beta}(e^{\\lambda(\\phi)}f)|\\\\\n+|D_{\\mathbf{v}}^{\\alpha}f||e^{-\\lambda(\\phi)}D_{\\mathbf{v}}^{\\beta}(e^{\\lambda(\\phi)}f)-D_{\\mathbf{v}}^{\\beta}f|dx\\\\\n\\leq C\\sum_{\\substack{|\\alpha:\\mathbf{m}|=\\kappa\\\\|\\beta:\\mathbf{m}|=\\kappa}}\\int_{\\mathbb{V}}|e^{\\lambda(\\phi)}D_{\\mathbf{v}}^{\\alpha}(e^{-\\lambda(\\phi)}f)-D_{\\mathbf{v}}^{\\alpha}f||e^{-\\lambda(\\phi)}D_{\\mathbf{v}}^{\\beta}(e^{\\lambda(\\phi)}f)-D_{\\mathbf{v}}^{\\beta}f|\\\\\n+|D_{\\mathbf{v}}^{\\alpha}f||e^{-\\lambda(\\phi)}D_{\\mathbf{v}}^{\\beta}(e^{\\lambda(\\phi)}f)-D_{\\mathbf{v}}^{\\beta}f|dx\n\\end{multline*}\nwhere $C$ is independent of $\\lambda$, $\\phi$ and $f$. 
With the help of Lemma \\ref{lem:twistedderivative} and the Cauchy-Schwarz inequality, we have\n\\begin{multline*}\n|U(\\lambda,\\phi,f)|\\leq C \\sum_{\\substack{|\\alpha:\\mathbf{m}|=\\kappa\\\\|\\beta:\\mathbf{m}|=\\kappa}}\\sum_{\\substack{0<\\rho_{\\alpha}\\leq \\alpha\\\\0<\\rho_{\\beta}\\leq \\beta}} \\sum_{\\substack{0<\\gamma_\\alpha\\leq \\rho_{\\alpha}\\\\0<\\gamma_{\\beta}\\leq \\rho_{\\beta}}} \\int_{\\mathbb{V}}|\\lambda^{\\gamma_{\\alpha}}||D_{\\mathbf{v}}^{\\alpha-\\rho_{\\alpha}}f||\\lambda^{\\gamma_{\\beta}}||D_{\\mathbf{v}}^{\\beta-\\rho_{\\beta}}f|\\,dx\\\\\n+C\\sum_{\\substack{|\\alpha:\\mathbf{m}|=\\kappa\\\\|\\beta:\\mathbf{m}|=\\kappa}}\\sum_{0<\\rho_{\\alpha}\\leq \\alpha} \\sum_{0<\\gamma_{\\alpha}\\leq \\rho_{\\alpha}} \\int_{\\mathbb{V}}|\\lambda^{\\gamma_{\\alpha}}||D_{\\mathbf{v}}^{\\alpha-\\rho_{\\alpha}}f||D_{\\mathbf{v}}^{\\beta}f|\\,dx\\\\\n\\leq C\\sum_{\\substack{|\\alpha:\\mathbf{m}|=\\kappa\\\\|\\beta:\\mathbf{m}|=\\kappa}}\\sum_{\\substack{0<\\rho_{\\alpha}\\leq \\alpha\\\\0\\leq \\rho_{\\beta}\\leq \\beta}} \\sum_{\\substack{0<\\gamma_\\alpha\\leq \\rho_{\\alpha}\\\\0\\leq\\gamma_{\\beta}\\leq \\rho_{\\beta}}} \\int_{\\mathbb{V}}|\\lambda^{\\gamma_{\\alpha}}||D_{\\mathbf{v}}^{\\alpha-\\rho_{\\alpha}}f||\\lambda^{\\gamma_{\\beta}}||D_{\\mathbf{v}}^{\\beta-\\rho_{\\beta}}f|\\,dx\\\\\n\\leq C\\sum_{\\substack{|\\alpha:\\mathbf{m}|=\\kappa\\\\|\\beta:\\mathbf{m}|=\\kappa}}\\sum_{\\substack{0<\\rho_{\\alpha}\\leq \\alpha\\\\0\\leq \\rho_{\\beta}\\leq \\beta}} \\sum_{\\substack{0<\\gamma_\\alpha\\leq \\rho_{\\alpha}\\\\0\\leq\\gamma_{\\beta}\\leq \\rho_{\\beta}}}\\|\\lambda^{\\gamma_{\\alpha}}D_{\\mathbf{v}}^{\\alpha-\\rho_{\\alpha}}f\\|_2\\|\\lambda^{\\gamma_{\\beta}}D_{\\mathbf{v}}^{\\beta-\\rho_{\\beta}}f\\|_2\n\\end{multline*}\nwhere, again, $C$ is independent of $\\lambda$, $\\phi$ and $f$. 
For each $\\alpha$, $\\rho_{\\alpha}$ and $\\gamma_{\\alpha}$ such that $|\\alpha:\\mathbf{m}|=\\kappa$, $0<\\rho_{\\alpha}\\leq\\alpha$ and $0<\\gamma_{\\alpha}\\leq\\rho_\\alpha$, properties of the Fourier transform and Lemma \\ref{lem:Scaling} guarantee that, for any $\\epsilon>0$, there is $M_{\\epsilon}\\geq 1$ for which\n\\begin{multline*}\n\\|\\lambda^{\\gamma_{\\alpha}}D_{\\mathbf{v}}^{\\alpha-\\rho_{\\alpha}}f\\|_2^2=\\|\\lambda^{\\gamma_{\\alpha}}\\xi^{\\alpha-\\rho_{\\alpha}}\\hat{f}\\|_{2^*}^2=\\int_{\\mathbb{V}^*}|\\lambda^{2\\gamma_{\\alpha}}\\xi^{2\\alpha-2\\rho_{\\alpha}}||\\hat{f}(\\xi)|^2\\,d\\xi\\\\\n\\leq \\int_{\\mathbb{V}^*}\\Big(\\epsilon R(\\xi)^{\\kappa}+M_\\epsilon(R(\\lambda)+1)^{\\kappa}\\Big)|\\hat{f}(\\xi)|^2\\,d\\xi\\\\\n\\leq \\epsilon Q_{\\Lambda^{\\kappa}}(f)+M_{\\epsilon}(R(\\lambda)+1)^{\\kappa}\\|f\\|_2^2.\n\\end{multline*}\nSimilarly, for each $\\beta,\\rho_{\\beta}$ and $\\gamma_{\\beta}$ such that $|\\beta:\\mathbf{m}|=\\kappa$, $0\\leq \\rho_{\\beta}\\leq\\beta$ and $0\\leq \\gamma_\\beta\\leq \\rho_{\\beta}$, there is a constant $M$ for which\n\\begin{equation*}\n\\|\\lambda^{\\gamma_{\\beta}}D_{\\mathbf{v}}^{\\beta-\\rho_{\\beta}}f\\|_2^2\\leq M\\left(Q_{\\Lambda^{\\kappa}}(f)+(1+R(\\lambda))^{\\kappa}\\|f\\|_2^2\\right).\n\\end{equation*}\nFrom these estimates it follows that, for any $\\epsilon>0$, there is $M_{\\epsilon}\\geq 1$ for which\n\\begin{equation*}\n|U(\\lambda,\\phi,f)|\\leq \\epsilon Q_{\\Lambda^{\\kappa}}(f)+M_{\\epsilon}(1+R(\\lambda))^{\\kappa}\\|f\\|_2^2\n\\end{equation*}\nfor all $\\lambda\\in\\mathbb{V}^*$, $\\phi\\in\\mathcal{E}$ and $f\\in C_0^{\\infty}(\\mathbb{V})$. By a similar argument, making use of Lemma \\ref{lem:Scaling} and the fact that $W(\\lambda,\\phi,f)$ consists of ``lower order'' terms whose coefficients $b_{\\alpha,\\beta}$ are everywhere bounded, an analogous estimate can be made for $W(\\lambda,\\phi,f)$. 
From these estimates the lemma follows at once.\n\\end{proof}\n\\begin{proposition}\\label{prop:SuperDuperSatisfiesHypothesis3}\nAssume that Conditions \\ref{cond:meas}-\\ref{cond:constanttopsymbol} hold. Then $Q$ (and so $Q+C$) satisfies Hypothesis \\ref{hyp:kappa}.\n\\end{proposition}\n\\begin{proof}\nBy virtue of Proposition \\ref{prop:ESA} and Leibniz's rule, we see that\n\\begin{multline*}\n\\mbox{\\rm Dom}(H^{\\kappa}_{\\lambda,\\phi})=\\{f\\in L^2:e^{-\\lambda(\\phi)}f\\in \\mbox{\\rm Dom}(H^{\\kappa})\\}\\\\\n=\\{f\\in L^2:e^{-\\lambda(\\phi)}f\\in W_{\\mathbf{v}}^{2\\kappa\\mathbf{m},2}(\\mathbb{V})\\}=W_{\\mathbf{v}}^{2\\kappa\\mathbf{m},2}(\\mathbb{V})\n\\end{multline*}\nfor all $\\lambda\\in\\mathbb{V}^*$ and $\\phi\\in\\mathcal{E}$ where we fix $\\kappa=\\min\\{n:\\mu_{\\Lambda}\/n<1\\}$. Consequently,\n\\begin{equation*}\n\\mbox{\\rm Dom}(H^{\\kappa}_{\\lambda,\\phi})=W_{\\mathbf{v}}^{2\\kappa\\mathbf{m},2}(\\mathbb{V})\\subseteq W_{\\mathbf{v}}^{\\kappa\\mathbf{m},2}(\\mathbb{V})=\\mbox{\\rm Dom}(Q_{\\Lambda^{\\kappa}})\n\\end{equation*}\nfor all $\\lambda\\in\\mathbb{V}^*$ and $\\phi\\in\\mathcal{E}$. An appeal to the preceding lemma guarantees that, for any $\\epsilon>0$, there is $M_{\\epsilon}\\geq 1$ for which\n\\begin{eqnarray*}\nQ_{\\Lambda^{\\kappa}}(f)&=&\\langle H_{\\lambda,\\phi}^{\\kappa}f,f\\rangle+Q_{\\Lambda^{\\kappa}}(f)-\\langle H_{\\lambda,\\phi}^{\\kappa}f,f\\rangle\\\\\n&\\leq&|\\langle H_{\\lambda,\\phi}^{\\kappa}f,f\\rangle|+|Q_{\\Lambda^{\\kappa}}(f)-\\langle H_{\\lambda,\\phi}^{\\kappa}f,f\\rangle|\\\\\n&\\leq &M_\\epsilon \\left|\\langle H_{\\lambda,\\phi}^{\\kappa}f,f\\rangle\\right|+\\epsilon Q_{\\Lambda^{\\kappa}}(f)+M_{\\epsilon}(1+R(\\lambda))^{\\kappa}\\|f\\|_2^2\n\\end{eqnarray*}\nfor $\\lambda\\in\\mathbb{V}^*$, $\\phi\\in\\mathcal{E}$ and $f\\in C_0^{\\infty}(\\mathbb{V})$. 
Equivalently,\n\\begin{equation*}\nQ_{\\Lambda^{\\kappa}}(f)\\leq\\frac{M_\\epsilon}{1-\\epsilon}\\left|\\langle H_{\\lambda,\\phi}^{\\kappa}f,f\\rangle\\right|+\\frac{M_{\\epsilon}}{1-\\epsilon}(1+R(\\lambda))^\\kappa\\|f\\|_2^2\n\\end{equation*}\nfor all $\\lambda\\in\\mathbb{V}^*$, $\\phi\\in\\mathcal{E}$ and $f\\in C_0^{\\infty}(\\mathbb{V})$. In view of Propositions \\ref{prop:DirichletOperator} and \\ref{prop:ESA}, $C_0^{\\infty}(\\mathbb{V})$ is a core for both $Q_{\\Lambda^{\\kappa}}$ and $H^{\\kappa}$ and so it follows that the above estimate holds for all $\\lambda\\in\\mathbb{V}^*$, $\\phi\\in\\mathcal{E}$ and $f\\in \\mbox{\\rm Dom}(H^{\\kappa})=\\mbox{\\rm Dom}(H^{\\kappa}_{\\lambda,\\phi})=W_{\\mathbf{v}}^{2\\kappa\\mathbf{m},2}(\\mathbb{V})$, as desired.\n\\end{proof}\n\n\\noindent In view of Propositions \\ref{prop:SuperSatisfiesHypothesis1}, \\ref{prop:SuperSatisfiesHypothesis2} and \\ref{prop:SuperDuperSatisfiesHypothesis3}, an appeal to Theorem \\ref{thm:Main} gives our final result for super-semi-elliptic operators.\n\n\\begin{proposition}\nLet $Q$ be a $\\{2\\mathbf{m},\\mathbf{v}\\}$-super-semi-elliptic form on $L^2(\\mathbb{V})$ whose coefficients satisfy Conditions \\ref{cond:meas}-\\ref{cond:constanttopsymbol} with reference operator $\\Lambda$ and associated self-adjoint super-semi-elliptic operator $H$. Let $R$ and $\\mu_{\\Lambda}=|\\mathbf{1}:2\\mathbf{m}|$ be the symbol and the homogeneous order of $\\Lambda$, respectively. 
Then the semigroup $T_t=e^{-tH}$ has integral kernel $K_H:(0,\\infty)\\times\\mathbb{V}\\times\\mathbb{V}\\to\\mathbb{C}$ satisfying\n\\begin{equation*}\n|K_H(t,x,y)|\\leq \\frac{C}{t^{\\mu_{\\Lambda}}}\\exp\\left(-tMR^{\\#}\\left(\\frac{x-y}{t}\\right)+Mt\\right)\n\\end{equation*}\nfor all $x,y\\in\\mathbb{V}$ and $t>0$ where $R^{\\#}$ is the Legendre-Fenchel transform of $R$ and $C$ and $M$ are positive constants.\n\\end{proposition}\n\n\\begin{remark}\nThe above result is weaker than Theorem 5.1 of \\cite{Randles2017} in that the latter treats semi-elliptic operators with H\\\"{o}lder continuous coefficients and allows for the operator's principal part to have variable coefficients. We have included this result because its proof is drastically different from that of Theorem 5.1 of \\cite{Randles2017} and relies on the functional-analytic method of E. B. Davies \\cite{Davies1995}, as we have adapted and presented in this article. It also illustrates that Davies' method can be extended into the realm in which $\\mu_{\\Lambda}\\geq 1$ (or $d\\geq 2m$ for elliptic operators). As discussed in the following two remarks, we believe this result can be sharpened further by making use of our general theory presented in Theorem \\ref{thm:Main}.\n\\end{remark}\n\\begin{remark} Condition \\ref{cond:smooth}, a strong assumption, was used to establish that the powers of $H$ were sufficiently well behaved under perturbations, thus establishing Proposition \\ref{prop:SuperDuperSatisfiesHypothesis3}. It remains an open question as to the weakest smoothness assumption that can be made on the coefficients of $H$ in order to verify Hypothesis \\ref{hyp:kappa}.\n\\end{remark}\n\\begin{remark} In checking the perturbative estimates in the proof of Proposition \\ref{prop:SuperDuperSatisfiesHypothesis3}, it was useful to have $C_0^{\\infty}(\\mathbb{V})$ as a core for $\\mbox{\\rm Dom}(H^{\\kappa})$. 
Under our assumptions, this fact relied on the formal expression for the $\\kappa$th power of $H$, $H_0^{\\kappa}$, being essentially self-adjoint with closure $H^{\\kappa}$. We ask: To what degree is this necessary?\n\\end{remark} \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\nWe consider a semilinear fractional-order Rayleigh-Stokes problem for a generalized second-grade fluid. Let $\\Omega\\subset \\mathbb{R}^d\\, (d=1,2,3)$ be a bounded convex polygonal domain with boundary $\\partial \\Omega$, and $T>0$. The mathematical model is given by\n\\begin{subequations}\\label{main}\n\\begin{alignat}{2}\\label{a1}\n& \\partial_t u(x,t) -(1+\\gamma \\partial_t^{\\alpha})\\Delta u(x,t)=f(u) &&\\quad\\mbox{ in }\\Omega\\times (0,T],\n\\\\ \\label{a2}\n&u(x,t)= 0 &&\\quad\\mbox{ on }\\partial\\Omega\\times (0,T],\n\\\\ \\label{a3}\n&u(x,0)=u_0(x) &&\\quad\\mbox{ in }\\Omega,\n\\end{alignat}\n\\end{subequations}\nwhere $\\gamma>0$ is a fixed constant, $u_0$ is a given initial datum, $\\partial_t=\\partial \/\\partial t$ and \n$\\partial_t^{\\alpha}$ is the Riemann-Liouville fractional derivative in time \\textcolor{black}{with} $\\alpha\\in(0,1)$ defined by\n\\begin{equation} \\label{Ba}\n\\partial_t^{\\alpha} \\varphi(t)=\\frac{d}{dt}\\int_0^t\\omega_{1-\\alpha}(t-s)\\varphi(s)\\,ds\\quad\\text{with} \\quad\n\\omega_{\\alpha}(t):=\\frac{t^{\\alpha-1}}{\\Gamma(\\alpha)}.\n\\end{equation}\nIn \\eqref{a1}, $f:\\mathbb{R}\\to\\mathbb{R}$ is a smooth function satisfying the Lipschitz condition \n\\begin{equation} \\label{Lip}\n|f(t)-f(s)|\\leq L|t-s|\\quad \\forall t,s\\in \\mathbb{R},\n\\end{equation} \nfor some constant $L>0$. \n\n\n\nThe aim of this work is to study some aspects of the numerical solution of the semilinear problem \\eqref{main}. The linear case has been considered by several authors.\nFor instance, in \\cite{stokes1} and \\cite{stokes2}, implicit and explicit finite difference schemes have been proposed. 
A Fourier analysis was employed to investigate stability and convergence. In \\cite{23}, a numerical scheme was derived and analyzed by transforming the problem into an integral equation. In \\cite{stokes4}, a numerical scheme was investigated \nusing the reproducing kernel technique. \nIn \\cite{Zaky}, Zaky applied the Legendre-tau method to problem \\eqref{main} and discussed related convergence rates.\n The convergence analysis in all these studies assumes that the exact solution \nis sufficiently regular, including at $t=0$, which is rarely the case in practice. \nIn \\cite{EJLZ2016}, Jin et al. investigated a piecewise linear finite element method (FEM) in space and a convolution quadrature in time, and obtained optimal error estimates with respect to the solution smoothness, expressed through the initial data $u_0$. Most recently, a similar analysis was presented in \\cite{MK-2018} for a time-fractional Oldroyd-B fluid problem.\n\nThe numerical approximation of nonlinear time-fractional models has recently attracted the attention of many researchers. In particular, the time-fractional subdiffusion model\n\\begin{equation}\\label{uu}\n^C\\partial_t^{\\alpha} u(x,t) -\\Delta u(x,t)=f(u)\n\\end{equation}\nhas \\textcolor{black}{been} given special attention. Here, $^C\\partial_t^{\\alpha}$ denotes the Caputo fractional derivative in time of order $\\alpha$. \n In \\cite{LWZ-2017}, for instance, a linearized $L^1$-Galerkin FEM was proposed for solving a nonlinear time-fractional Schr\\\"odinger equation. Based on a temporal-spatial error splitting argument and a new discrete fractional Gronwall-type inequality, optimal error estimates of the numerical schemes are obtained without \nrestrictions on the time step size. In \\cite{LLSWZ-2018}, $L^1$-type schemes have been analyzed for approximating the solution of \\eqref{uu}, and related error estimates have been derived. The estimates are obtained under high regularity assumptions on the exact solution. 
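The $L^1$ and convolution quadrature discretizations referenced above all replace the fractional derivative by a discrete convolution. As a minimal illustration (our own sketch, not a scheme taken from any of the cited works; the step size and the test function $\\varphi(t)=t$ are chosen for illustration only), the backward Euler convolution quadrature generates its weights from the power series of $(1-\\zeta)^{\\alpha}$ and, applied to $\\varphi(t)=t$, approximates the exact Riemann-Liouville derivative $t^{1-\\alpha}\/\\Gamma(2-\\alpha)$ to first order in the step size:

```python
import math

def bdf1_cq_weights(alpha, n):
    """Coefficients of (1 - z)**alpha: the convolution quadrature weights
    generated by the backward Euler method."""
    w = [1.0]
    for j in range(1, n + 1):
        w.append(w[-1] * (j - 1 - alpha) / j)
    return w

def rl_derivative_cq(phi, alpha, t, n):
    """Approximate the Riemann-Liouville derivative of order alpha at time t,
    using n uniform steps of size tau = t / n."""
    tau = t / n
    w = bdf1_cq_weights(alpha, n)
    return sum(w[j] * phi(t - j * tau) for j in range(n + 1)) / tau ** alpha

# Sanity check against the exact derivative of phi(t) = t at t = 1.
alpha = 0.5
approx = rl_derivative_cq(lambda s: s, alpha, 1.0, 1000)
exact = 1.0 / math.gamma(2.0 - alpha)
```

For this smooth test function vanishing at $t=0$, halving the step size roughly halves the error, consistent with the first-order accuracy of the quadrature.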
\n In \\cite{JLZ-2018}, the numerical solution of \\eqref{uu} was investigated under the assumption that the nonlinear function $f$ is globally Lipschitz continuous and the initial data $u_0\\in H^2(\\Omega)\\cap H^1_0(\\Omega)$. These results have been extended in \\cite{MK-2019} to problems with nonsmooth initial data. Recently, a numerical study with a more general condition on the nonlinearity was presented in \\cite{MK-2020}.\n \n\nIn this paper, we first investigate a lumped mass FE semidiscrete scheme in space for solving \\eqref{main}. \\textcolor{black}{ Compared with the standard piecewise linear FEM \\cite{MK-2019,EJLZ2016},\n the lumped mass FEM has the advantage that \nwhen representing the discrete solution in the nodal basis functions, it produces a diagonal mass matrix, which simplifies the computations.} \n Our aim is to derive optimal error estimates for solutions with smooth and nonsmooth initial data. The analysis will be based on a semigroup-type approach.\nThe FE solution will serve as an intermediate solution to establish error estimates for\n the lumped mass FEM. This technique was used, for instance, in \\cite{CLT-2012,CLT-2013} and \n \\cite{MK-2018-b}.\n\nOur second objective is to investigate a time-stepping scheme using a first-order convolution quadrature in time. Pointwise-in-time optimal error estimates \nare then derived. The main technical tool relies on the use of the discrete propagator (discrete evolution operator) associated with the numerical method, see \\cite{Lubich-2006}.\n\nThe paper is organized as follows. In Section 2, we represent the solution of \\eqref{main} in an integral form and obtain regularity results. In Section 3, we derive error estimates for the standard Galerkin FEM. A convolution quadrature time discretization method is analyzed in Section 4, and related error estimates are established. 
In Section 5, we investigate a fully discrete scheme obtained by the lumped mass FEM combined with the convolution quadrature in time. Finally, we provide some numerical examples to confirm our theoretical results.\n\nThroughout the paper, $c$ denotes a generic constant which may change at each occurrence but is always independent of the discretization parameters: the mesh size $h$ and the time step size $\\tau$. We shall also use the notation $u'$ for $\\partial u\/\\partial t$.\n\\section{Continuous problem} \\label{sec:notation}\n\\setcounter{equation}{0}\nThis section is devoted to the analysis of the continuous problem \\eqref{main}. Based on an integral representation of its solution, we prove regularity results, which will play a key role in the error analysis. We begin by introducing some notation.\n For $r\\geq 0$, we denote by $\\dot H^r(\\Omega)\\subset L^2(\\Omega)$ the Hilbert space induced by the norm \n$ \\|v\\|_{\\dot H^r(\\Omega)}^2=\\sum_{j=1}^\\infty \\lambda_j^r (v,\\phi_j)^2$, where $\\{(\\lambda_j,\\phi_j)\\}_{j=1}^\\infty$ are the Dirichlet eigenpairs of $A:=-\\Delta$ on $\\Omega$ with $\\{\\phi_j\\}_{j=1}^\\infty$ being an orthonormal basis in $L^2(\\Omega)$. Thus, \n $\\|v\\|_{\\dot H^0(\\Omega)}=\\|v\\|$ is the norm in $L^2(\\Omega)$, \n$\\|v\\|_{\\dot H^1(\\Omega)}$ is the norm in $H_0^1(\\Omega)$, and $\\|v\\|_{\\dot H^2(\\Omega)}=\\|A v\\|$ is the equivalent norm in $H^2(\\Omega)\\cap H^1_0(\\Omega)$ \\cite{thomee1997}.\n\n\n\nFor a given $\\theta\\in (\\pi\/2,\\pi)$, we define the sector $\n\\Sigma_{\\theta}=\\{z\\in \\mathbb{C}, \\,z\\neq 0,\\, |\\arg z|< \\theta\\}\n$. 
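Before proceeding, we note that in one dimension the spaces just introduced are completely explicit: for $\\Omega=(0,1)$ one has $\\lambda_j=(j\\pi)^2$ and $\\phi_j(x)=\\sqrt{2}\\sin(j\\pi x)$, so the norms $\\|\\cdot\\|_{\\dot H^r(\\Omega)}$ can be evaluated directly from spectral coefficients. The following sketch is our own illustration (the truncation of the series to finitely many modes is a choice made here, not part of the text):

```python
import math

def dot_hr_norm(coeffs, r):
    """||v||_{H^r}^2 = sum_j lambda_j^r (v, phi_j)^2 with lambda_j = (j*pi)^2,
    for v supported on the first len(coeffs) Dirichlet eigenfunctions of (0, 1)."""
    return math.sqrt(sum(((j * math.pi) ** 2) ** r * c * c
                         for j, c in enumerate(coeffs, start=1)))

# For v = phi_1 we have (v, phi_1) = 1 and all other coefficients vanish,
# so ||v||_{H^r} = lambda_1^{r/2} = pi^r.
norm_l2 = dot_hr_norm([1.0], 0)  # the L2 norm, equal to 1
norm_h2 = dot_hr_norm([1.0], 2)  # the equivalent H^2 norm, equal to pi**2
```

The same routine also exhibits orthonormality: for $v=\\phi_1+\\phi_2$ the $L^2(\\Omega)$ norm is $\\sqrt{2}$.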
Since $A$ is self-adjoint and positive definite, the operator\n$(z^\\alpha I+A)^{-1}:L^2(\\Omega)\\to L^2(\\Omega)$ satisfies the bound \n\\begin{equation}\\label{res1}\n\\|(z^\\alpha I+A)^{-1}\\|\\leq M |z|^{-\\alpha} \\quad \\forall z\\in \\Sigma_\\theta,\n\\end{equation}\nwhere $M$ depends on $\\theta$.\n\nLet $\\hat{u}(x,z)$ denote the Laplace transform of $u(x,t)$. Set $w(t)=f(u(t))$. Then, by taking Laplace transforms in \\eqref{a1}, we obtain\n$$\nz \\hat{u} - u_0+A \\hat{u}+\\gamma z^{\\alpha}A \\hat{u}=\\hat{w}(z).\n$$\nHence,\n$$\n\\hat{u}=\\frac{g(z)}{z}\\left(g(z)I+A\\right)^{-1} \\left( u_0+\\hat {w}(z)\\right),\n$$\nwhere $g(z)=\\dfrac{z}{1+\\gamma z^\\alpha}$.\nBy means of the inverse Laplace transform, we have\n\\begin{equation}\\label{form-1}\nu(t)=E(t)u_0+\\int_0^t E(t-s)f(u(s))\\,ds,\\quad t>0,\n\\end{equation}\n\\textcolor{black}{with} the operator $E(t):L^2(\\Omega)\\to L^2(\\Omega)$ being defined by\n$$\n E(t) = \\frac{1}{2\\pi i}\\int_{\\Gamma_{\\theta,\\delta}}e^{zt}\\frac{g(z)}{z}\\left(g(z)I+A\\right)^{-1} \\,dz,\n$$\n\\textcolor{black}{where, for fixed $\\delta>0$, \n$\\Gamma_{\\theta,\\delta}:=\\{\\rho e^{\\pm i\\theta}:\\; \\rho\\geq\\delta\\}\\cup \\{\\delta e^{i \\psi}:\\; |\\psi|\\leq\\theta\\} $\nis oriented with an increasing imaginary part.}\n\nThe following estimates hold, see \\cite[Theorem 2.1]{EJLZ2016}.\n\\begin{lemma}\\label{LL} The operator $E(t)$ satisfies\n\n$$ \\| \\partial_t^m {E}(t)v\\|_{\\dot{H}^p(\\Omega)}\\leq ct^{-m-(1-\\alpha)(p-q)\/2} \\|v\\|_{\\dot{H}^q(\\Omega)},$$\nwhere $m=0$ and $0\\leq q\\leq p\\leq 2$ or $m>0$ and $0\\leq p,\\, q \\, \\leq 2$.\n\\end{lemma}\nIn the sequel, we shall use the following generalization of Gr\\\"onwall's inequality \\cite{Amann}.\n\\begin{lemma}\\label{Gronwall}\nLet $T > 0$, $0 \\leq \\alpha, \\beta < 1$ and $A, B \\geq 0$. 
Then there is a positive\nconstant $C = C(T,B,\\alpha, \\beta)$ such that\n$$y(t)\\leq At^{-\\alpha} + B \\int_0^t (t- s)^{- \\beta}y(s)ds,\\quad\\ 0 < t \\leq T,$$\nimplies\n$$y(t) \\leq CAt^{-\\alpha},\\quad\\ 0 < t \\leq T.$$\n\\end{lemma}\nNote that, by the Lipschitz continuity of $f$, \n\\begin{equation*}\\label{f(u)}\n\\|f(u)\\|\\leq \\|f(u)-f(0)\\|+ \\|f(0)\\|\\leq L\\|u\\|+ \\|f(0)\\|.\n\\end{equation*}\nUsing \\eqref{form-1} and Lemma \\ref{LL}, we then get\n\\begin{eqnarray*}\n\\|u(t)\\| & \\leq & c\\|u_0\\| +c\\int_0^t \\|f(u(s))\\|\\,ds \\\\\n& \\leq & c\\|u_0\\| + ct\\|f(0)\\|+cL\\int_0^t\\|u(s)\\|\\,ds.\n\\end{eqnarray*}\n\\textcolor{black}{ By Lemma \\ref{Gronwall}, we obtain the stability result \n\\begin{eqnarray*}\n\\|u(t)\\| & \\leq & c(\\|u_0\\| + t\\|f(0)\\|).\n\\end{eqnarray*}}\nFurther properties of the solution $u$ are given below.\n\\begin{theorem}\\label{T-1} \nAssume $u_0\\in \\dot{H}^\\nu(\\Omega)$, $\\nu\\in [0,2]$. Then problem \\eqref{main} has a unique solution $u$ satisfying\n\\begin{equation}\\label{regularity-1a}\n u\\in C([0,T];\\dot{H}^\\nu(\\Omega))\\cap C((0,T];\\dot{H}^2(\\Omega)).\n\\end{equation}\nFurthermore, \n\\begin{equation}\\label{regularity-1b}\n \\| u(t) \\|_{\\dot{H}^p(\\Omega)} \\leq ct^{(\\alpha-1) (p-\\nu)\/2}, \\quad 0\\leq \\nu\\leq p\\leq 2,\n\\end{equation}\nand\n\\begin{equation}\\label{regularity-2}\n \\|u'(t) \\|_{\\dot{H}^p(\\Omega)} \\leq ct^{(\\alpha-1) (p-\\nu)\/2-1}, \\quad p\\in [0,1].\n\\end{equation}\nThe constant $c$ may depend on $T$.\n\\end{theorem}\n\\begin{proof} \n\nFor $\\nu\\in (0,2]$, the proof follows the same lines as that of Theorem 3.1 in \\cite{MK-2019}. The latter also covers the estimate \\eqref{regularity-1b} when $\\nu=0$, see Step 3 in that proof. Thus, we shall only prove \\eqref{regularity-2} for $\\nu=0$. 
To do so, we differentiate both sides of \\eqref{form-1} with respect to $t$ so that\n\\begin{equation}\\label{derv}\n\\begin{split}\nu'(t) =E'(t)u_0+ E(t)f(u_0)+\\int_0^t E(t-s)f'(u(s))u'(s)\\,ds.\n\\end{split}\n\\end{equation} \nMultiplying by $t$, we have\n$$tu'(t)=tE'(t)u_0+ tE(t)f(u_0)+\\int_0^t s E(t-s)f'(u(s))u'(s)\\,ds +\\int_0^t (t-s) E(t-s)f'(u(s))u'(s)\\,ds.$$\nFollowing \\cite[Lemma 5.2]{Mclean2010} and integrating by parts the last term on the right hand side, we get \n$$\\int_0^t (t-s) E(t-s)f'(u(s))u'(s)\\,ds= -tE(t)f(u_0)+\\int_0^t E(t-s)f(u(s))\\,ds+\\int_0^t (t-s) E'(t-s)f(u(s))\\,ds. $$\nHence,\n$$tu'(t)=tE'(t)u_0 +\\int_0^t s E(t-s)f'(u(s))u' (s)\\,ds+\\int_0^t E(t-s)f(u(s))\\,ds+\\int_0^t (t-s) E'(t-s)f(u(s))\\,ds.$$\nUsing Lemma \\ref{LL}, we thus deduce that \n$$\\Vert tu'(t)\\Vert \\leq c +c \\int_0^t \\Vert s\\, u'(s) \\Vert\\,ds, $$\nwhich, by Lemma \\ref{Gronwall}, implies that $\\Vert tu'(t)\\Vert \\leq c.$\nThe $H^1(\\Omega)$-estimate $\\| \\nabla u'(t)\\| \\leq c t^{(\\alpha-1)(1-\\nu)\/2-1}$ is derived in a similar manner. 
The desired estimate \\eqref{regularity-2} follows then by interpolation, which completes the proof.\n\\end{proof}\n\n\\begin{comment}\n\\begin{remark}\nInterpolating the estimates in Theorem \\ref{T-1} imply that the solution to problem \\eqref{main} satisfies for $u_0\\in \\dot{H}^\\nu$\n\\begin{equation}\\label{regul-3}\n \\| u^{(m)}(t) \\|_p \\leq ct^{(\\alpha-1) (p-\\nu)\/2-m}, \n\\end{equation}\nwhere $m=0,1$ and $ p\\in [0,1].$\n\\end{remark}\n\\end{comment}\n\n\n\n\n\n\\section{Semidiscrete FE scheme} \\label{sec:FE}\n\\setcounter{equation}{0}\n\nLet $\\mathcal{T}_h$ be a shape regular and quasi-uniform triangulation of the domain $\\bar\\Omega$ into triangles $K,$\nand let $h=\\max_{K\\in \\mathcal{T}_h}h_{K},$ where $h_{K}$ denotes the diameter of $K.$ \nThe approximate solution $u_h$ of the Galerkin FEM will be sought in the FE space $V_h$ of continuous piecewise linear functions over the triangulation $\\mathcal{T}_h$\n$$V_h=\\{v_h\\in C^0(\\overline {\\Omega})\\;:\\;v_h|_{K}\\;\\mbox{is linear for all}~ K\\in \\mathcal{T}_h\\; \\mbox{and} \\; v_h|_{\\partial \\Omega}=0\\}.$$\nThe semidiscrete Galerkin FEM for problem (\\ref{main}) now reads: find $u_h(t) \\in V_h$ such that\n\\begin{equation} \\label{semi-1}\n(\\partial_t u_{h} ,\\chi)+ a( u_h,\\chi)+ \\gamma a(\\partial_t^{\\alpha} u_h,\\chi)= (f(u_h),\\chi)\\quad\n\\forall \\chi\\in V_h,\\quad t\\in (0,T], \\quad u_h(0)=P_h u_0,\n\\end{equation}\nwhere $(\\cdot,\\cdot)$ is the inner product in $L^2(\\Omega)$, \n $a(v,w):= (\\nabla v, \\nabla w)$ \n and \n$P_h:L^2(\\Omega)\\rightarrow V_h$ is the orthogonal $L^2(\\Omega)$-projection.\nUpon introducing the discrete operator $A_h:V_h\\rightarrow V_h$ defined by\n\\begin{equation*} \n(A_h\\psi,\\chi)=(\\nabla \\psi,\\nabla \\chi) \\quad \\forall \\psi,\\chi\\in V_h,\n\\end{equation*}\nthe spatially discrete problem (\\ref{semi-1}) is equivalent to\n\\begin{equation} \\label{semi-2}\n \\partial_t u_{h}(t)+ ( 1+ \\gamma \\partial_t^{\\alpha})A_h u_h= P_h 
f(u_h(t)),\\quad t\\in (0,T], \\quad u_h(0)=P_hu_0.\n\\end{equation}\nFollowing the analysis in the previous section, we represent the solution of \\eqref{semi-2} as \n\\begin{equation}\\label{form-1d}\nu_h(t)=E_h(t)P_hu_0+\\int_0^t { E}_h(t-s)P_hf(u_h(s))\\,ds,\n\\end{equation}\nwhere $E_h(t):V_h\\to V_h$ is defined by\n$$\n E_h(t) = \\frac{1}{2\\pi i}\\int_{\\Gamma_{\\theta,\\delta}}e^{zt} \\frac{g(z)}{z}\\left(g(z)I+A_h\\right)^{-1} \\,dz.\n$$\n\n\nIn order to bound the FE error $e_h(t):=u_h(t)-u(t)$, we introduce the operator \n $$S_h(z):=(g(z) I+A_h)^{-1}P_h-(g(z) I+A)^{-1},$$ \nwhich satisfies the following properties, see \\cite{LST-1996}.\n\\begin{lemma}\\label{G} \nThe following estimate holds for all $z\\in\\Sigma_\\theta$,\n\\begin{equation*} \n\\|S_h(z)v\\|+h\\|\\nabla S_h(z)v\\|\\leq ch^2 \\|v\\|,\n\\end{equation*}\nwhere $c$ is independent of $h$.\n\\end{lemma}\nLet $F_h(t)=E_h(t)P_h-E(t)$. Then, by Lemma \\ref{G}, $F_h(t)$ satisfies\n\\begin{equation} \\label{0-p}\n\\|F_h(t)v\\|+h\\|\\nabla F_h(t)v\\| \\leq ct^{-(1-\\alpha)(1-\\nu\/2)} h^2 \\|v\\|_{\\dot H^\\nu(\\Omega)},\\quad \n{ \\nu \\in [0,2]}.\n\\end{equation}\nNow we are ready to prove an error estimate for the semidiscrete problem \\eqref{semi-2}.\n\\begin{theorem}\\label{thm:semi} Let $u_0\\in \\dot H^\\nu(\\Omega)$, $\\nu\\in [0,2]$.\nLet $u$ and $u_h$ be the solutions of problems \\eqref{main} and \\eqref{semi-2}, respectively. \nThe\n\\begin{equation} \\label{01-bb}\n\\|e_h(t)\\|+h\\|\\nabla e_h(t)\\|\\leq ch^2 t^{-(1-\\alpha)(1-\\nu\/2)}, \\quad\\ t\\in (0,T].\n \\end{equation}\n\\end{theorem}\n\\begin{proof} Set $\\beta= (1-\\alpha)(1-\\nu\/2)$. 
Then, from \\eqref{form-1} and \\eqref{form-1d}, we obtain after rearrangements \n\\begin{equation*}\ne_h(t)= F_h(t)u_0+\\int_0^t {E}_h(t-s) P_h [f(u_h(s))-f(u(s))]\\,ds+\\int_0^t {F}_h(t-s) f(u(s))\\,ds.\n\\end{equation*}\nUsing the properties of $F_h$ in \\eqref{0-p} and the boundedness of $\\|E_h(s)\\|$ and $\\|f(u(s))\\|$, we deduce \n\\begin{eqnarray*}\n\\|e_h(t)\\|&\\leq & \\| {F}_h(t) u_0\\|+cL \\int_{0}^{t} \\|e(s)\\|\\,ds + \\int_{0}^{t } \\| {F}_h(t-s) f(u(s))\\|\\,ds\\\\\n&\\leq & ch^2t^{-\\beta} \\| u_0\\|_{\\dot{H}^\\nu(\\Omega)}+cL \\int_{0}^t \\|e(s)\\|\\,ds+ch^2\\int_{0}^{t}(t-s)^{\\alpha-1}ds \\\\\n&\\leq& ch^2t^{-\\beta}+cL \\int_{0}^t \\|e(s)\\|\\,ds+ch^2.\n\\end{eqnarray*}\nAn application of Lemma \\ref{Gronwall} yields\n$\n\\|e_h(t)\\| \\leq ch^2t^{-\\beta}.\n$\nThe $ H^1(\\Omega)$-error estimate is derived analogously, which completes the proof.\n\\end{proof}\n\n\n\n\\begin{comment}\n\\begin{remark}\\label{remark-2} \nIf $u_0\\in \\dot H^2(\\Omega)$, then one can choose the approximation $u_h(0)=R_hu_0$ in Theorem \\ref{thm:semi}. Indeed, let $\\tilde{u}_h$ denote the solution of $\\eqref{semi-2}$ with the initial condition $\\tilde{u}_h(0)=R_h u_0$. Then, $\\xi := u_h- \\tilde{u}_h$ satisfies\n$$\n \\partial_t^{\\alpha} \\xi(t)+ A_h \\xi(t)= P_h (f(u_h(t))-f(\\tilde{u}_h(t))),\\quad t\\in (0,T], \\quad \\xi(0)=P_hu_0-R_hu_0.\n$$\nBy the Lipschitz continuity of $f$ and the estimates in Lemma \\ref{LL}, we get\n$$\n\\|\\xi(t)\\|\\leq c\\|\\xi(0)\\|+c\\int_0^t (t-s)^{\\alpha-1}\\|\\xi(s)\\|\\,ds.\n$$\nSince $\\|\\xi(0)\\|\\leq ch^2 \\|u_0\\|_{\\dot H^2(\\Omega)}$ by Lemma \\ref{PR}, an application of Lemma \\ref{Gronwall} yields\n$\\|\\xi(t)\\|\\leq c_Th^2 \\|u_0\\|_{\\dot H^2(\\Omega)}$. 
The desired estimate follows then by the triangle inequality.\n\end{remark}\n\end{comment}\n\n\n\n\section {Time discretization}\label{sec:TD}\n\setcounter{equation}{0}\nThis section is devoted to the analysis of a convolution quadrature time discretization for \eqref{semi-2} generated by the backward Euler (BE) method. Let $0 = t_0 < t_1 < \cdots < t_N = T$ be a uniform partition of the time interval $[0, T]$, with grid\npoints $t_n = n\tau$ and step size $\tau = T\/N$.\n Integrating both sides of \eqref{semi-2} over $(0,t)$, we get \n$$u_h(t) -u_h(0) +(\partial_t^{-1}+\gamma\partial_t^{\alpha-1} )A_h u_h(t)=\partial_t^{-1}P_hf(u_h(t)).$$ \nThe fully discrete problem is then obtained by approximating the continuous integrals by the convolution quadratures $\partial_\tau^{-1} $, $ \partial_\tau^{\alpha-1}$ and $\partial_\tau^{-1} $, respectively, generated by the BE method, see \cite{Lubich-2004,Lubich-2006}. \nThe resulting time-stepping scheme reads: with $U_h^0=P_hu_0$, find $U^n_h\in V_h$, $n = 1, 2, \ldots,N$, such that\n\begin{equation} \label{fully-1}\n U^n_h -U^0_h +(\partial_\tau^{-1}+\gamma \partial_\tau^{\alpha-1} )A_h U^n_h=\partial_\tau^{-1}P_hf(U_h^{n}).\n\end{equation} \nWe shall investigate a linearized version of \eqref{fully-1} defined by:\nwith $U_h^0=P_hu_0$, find $U^n_h$, $n = 1, 2, \ldots,N$, such that\n\begin{equation} \label{fully-2}\nU^n_h -U^0_h +(\partial_\tau^{-1}+\gamma \partial_\tau^{\alpha-1} )A_h U^n_h=\partial_\tau^{-1}P_hf(U_h^{n-1}).\n\end{equation}\nIn an expanded form, we have \n$$\nU_h^n-U_h^0+\tau A_h\sum_{j=0}^n q_{n-j}^{(1)} U_h^j+\gamma\tau^{1-\alpha}A_h \sum_{j=0}^n q_{n-j}^{(1-\alpha)} U_h^j= \tau \sum_{j=1}^n q_{n-j}^{(1)} f_h(U_h^{j-1}),\n$$\nwhere $f_h=P_hf$ and $q_{j}^{(\alpha)}= (-1)^{j}\n\left(\begin{array}{c}\n-\alpha\\\nj\n\end{array}\right),$ see \cite{Lubich-2004,Lubich-2006}.\nRewriting \eqref{fully-2} as\n\begin{equation}\label{semi-1b}\n
U_h^n =(I+(\\partial_\\tau^{-1}+\\gamma\\partial_\\tau^{\\alpha-1}) A_h)^{-1}\\left( U_h^0 + \\partial_\\tau^{-1} f_h(U_h^{n-1})\\right) ,\n\\end{equation}\nand noting that $U_h^n$ depends linearly and boundedly on $U_h^0$, and $ f_h(U_h^{j-1})$, $1\\leq j\\leq n$, \nwe deduce the existence of linear and bounded operators $P_n$ and $R_n:V_h\\to V_h$, $n\\geq 0$, such that $U_h^n$ is represented by\n\\begin{equation}\\label{semi-1e}\nU_h^n = P_n U_h^0 + \\tau \\sum_{j=1}^n R_{n-j} f_h(U_h^{j-1}),\n\\end{equation}\nsee \\cite[Section 4]{Lubich-2006}.\n The operators $\\tau R_n$, $n\\geq 0$, in \\eqref{semi-1e} are the convolution quadrature weights corresponding to the Laplace transform\n$K(z)=z^{-1}(I+(z^{-1}+\\gamma z^{\\alpha-1}) A_h)^{-1}$.\n Since $\\|K(z)\\|\\leq c|z|^{-1}$, an application of Lemma 3.1 in \\cite{Lubich-2006}, with $\\mu=1$, shows that there is a constant $B>0$, independent of $\\tau$, such that \n\\begin{equation}\\label{R_n}\n\\|R_n\\|\\leq B ,\\quad n=0,1,2,\\ldots.\n\\end{equation}\n\n\nFor the error analysis, we introduce the intermediate $v_h(t)\\in V_h$ satisfying \n\\begin{equation} \\label{vva}\n\\partial_t v_h+(1+\\gamma\\partial_t^{\\alpha})A_hv_h=P_hf(u(t)),\\quad v_h(0)=P_h u_0,\n\\end{equation} \nand the discrete solution $v_h^n\\in V_h$ defined by \n\\begin{equation} \\label{vv}\n\\partial_\\tau v_h^n+(1+\\gamma\\partial_\\tau^{\\alpha})A_hv_h^n=P_hf(u(t_n)),\\quad n\\geq 1,\\quad v_h^0=U_h^0.\n\\end{equation} \nThen an estimation of $u(t_n)-v_h^n$ is given in the following lemma.\n\\begin{lemma} Let $v_h^n$ be the solution to problem \\eqref{vv} with $u_0\\in \\dot{H}^\\nu(\\Omega)$, $\\nu\\in(0,2]$. 
Then there holds\n\begin{equation} \n\begin{split} \label{vv-1}\n\|u(t_n)-v_h^n\|\leq & ct_n^{(1-\alpha)\nu\/2-1}\tau+c t_n^{-(1-\alpha)(2-\nu)\/2}h^2.\n\end{split} \n\end{equation}\n\end{lemma}\n\begin{proof} \nNote that \eqref{vva} and \eqref{vv} can be seen as semidiscrete and fully discrete approximations of \eqref{main} with a given right-hand side function $f(u(t))$, respectively. For the homogeneous case $f=0$, the bound \eqref{vv-1} can be found in \cite[Remark 4.3]{EJLZ2016}. For the inhomogeneous case with $u_0=0$, we consider the splitting\n$$\nu(t_n)-v_h^n=(u(t_n)-v_h(t_n))+(v_h(t_n)-v_h^n)=:I_1+I_2.\n$$\nThen, from the proof of Theorem \ref{T-1}, it is easily seen that $\|I_1\|\leq ch^2$.\nTo estimate $\|I_2\|$, we follow the arguments in the proof of \cite[Theorem 3.6]{JLZ2016} with $G(z)=\frac{g(z)}{z}(g(z)I+A_h)^{-1}$. Using the bound $\| u'(s)\|\leq cs^{(1-\alpha)\nu\/2-1}$ in Theorem \ref{T-1}, we then deduce that\n\begin{eqnarray*} \n\|I_2\| &\leq & c\tau\|f(u_h(0))\|+ c\tau\int_0^{t_n}\|f'(u(s)) u'(s)\|\,ds\leq c t_n^{(1-\alpha)\nu\/2}\tau,\n\end{eqnarray*} \nwhich completes the proof.\n\end{proof}\n\begin{remark}\nThe bound for $\|I_2\|$ does not hold when $\nu=0$, i.e., $u_0\in L^2(\Omega)$. This is due to the strong singularity in the bound of $\| u'(s)\|$.\n\end{remark}\n\nNow we are ready to derive error estimates for the linearized time-stepping scheme \eqref{fully-2}.\n\begin{theorem}\label{thm:fully-2} Let $u_0\in \dot H^\nu(\Omega)$, $\nu\in (0,2]$.\nThen the fully discrete scheme \eqref{fully-2} has a unique solution $U_h^n\in V_h$, $00, \label{f}}\n\end{maxi!}\nwhere $P$ denotes the total transmit power. In the above optimization problem, the constraints~(\ref{b}) and (\ref{d}) ensure that all elements of $\mathbf{F}_\text{RF}$ and $\mathbf{w}_n$ have an equal norm. Further, the constraint~(\ref{c}) ensures that the total power of the hybrid transmitter is limited to $N$.
The constraint~(\\ref{e}) guarantees that the total transmit power is limited to $P$. Finally, (\\ref{f}) ensures that the allocated power to U$_{n,m}$ is greater than zero. One would add fairness constrain to the maximization problem. Ref.~\\cite{8125754} discusses a viable solution in this case. In particular, a weighted sum-rate which considers a special priority for each user is utilized. Also, to ensure that all the users achieve a predefined minimum rate $R_\\text{min}$, another constrain can be included in the problem~(\\ref{eq:opt}) such that $R_{n,m} \\ge R_\\text{min}$. In this case, an iterative algorithm that properly allocates the power is required~\\cite{zhang2016robust}. Without loss of generality, here, we assume that all the users satisfy $R_{n, m}\\ge R_\\text{min}$. \n\nIt is mentioned that transmission in mmWave bands happens through LoS and NLoS channels. In particular, the users which are located far from the BS will mostly be supported via NLoS channels~\\cite{7593259}. Let first focus on only LoS channels. We assume that all channels are LoS and the effective channels are perfectly aligned as shown in Fig.~\\ref{fig:system}. By perfect alignment we mean that $\\mathbf{a}_\\text{BS}(\\varphi_{n, m})$ is identical for all users in the $n$th cluster, i.e., $\\mathbf{a}_\\text{BS}(\\varphi_{n, 1}) = \\mathbf{a}_\\text{BS}(\\varphi_{n, 2}) = \\dots = \\mathbf{a}_\\text{BS}(\\varphi_{n, M_n})$ for $n = 1, 2, \\dots, N$.\n\nIn general, there are two extreme cases to design baseband precoder for mmWave-NOMA systems, strong effective channel-based and singular value decomposition (SVD)-based precoder methods~\\cite{wang2017spectrum}. The strong effective channel-based is designed for only LoS channels and the SVD-based precoder is designed for only NLoS channels. Further, to the best of authors' knowledge, it is not shown how to design the SVD-based RF precoder for hybrid beamforming system. 
Here, in order to understand the behavior of beam misalignment in HB-NOMA systems, we choose the strong-effective-channel-based precoder, which is widely used in the literature~\cite{wang2017spectrum,hao2017energy,wu2017non}. \n\nThe maximization problem in~(\ref{eq:opt}) is non-convex and finding the optimal solution is not trivial.\nTo simplify it, we present an efficient and simple algorithm in three steps as described below.\n\nIn the first step, the BS and U$_{n, m}$ solve the following problem\n\begin{align}\label{eq7}\n \underset{\mathbf{w}_{n, m}, \mathbf{f}_\text{RF}^{n, m}}{\text{maximize}} \quad \left|\mathbf{w}_{n, m}^\dagger\mathbf{H}_{n, m}\mathbf{f}_\text{RF}^{n, m}\right| \qquad\n \text{subject to (\ref{b}) and (\ref{d})}.\n\end{align}\nSince the channel $\mathbf{H}_{n, m}$ has only one path, and given the continuous beamsteering capability assumption, in view of \eqref{eq4}, $\mathbf{w}_{n, m}=\mathbf{a}_\text{U}(\vartheta_{n, m})$ and ${\mathbf{f}}_\text{RF}^{n, m} = \mathbf{a}_\text{BS}(\varphi_{n, m})$ are the optimal solutions~\cite{alkhateeb2015limited}. We design the RF (analog) and baseband (digital) precoders using the adopted strong-effective-channel-based method. Hence, in order to design the RF precoder, the BS selects the first user of each cluster.
The RF precoder of the first user of the $n$th cluster, i.e., ${\mathbf{f}}_\text{RF}^{n, 1}$, forms the $n$th column of the RF precoding matrix, which gives\n \begin{equation}\label{eq81}\n \mathbf{F}_\text{RF} = \left[{\mathbf{f}}^{1, 1}_\text{RF}, {\mathbf{f}}^{2, 1}_\text{RF}, \dots, {\mathbf{f}}^{N, 1}_\text{RF}\right].\n \end{equation}\nThe first user is determined based on the locations of the users as follows:\n\begin{equation}\label{eq8}\n \left|\beta_{n, 1}\right| \geq \left|\beta_{n, 2}\right| \geq \dots \geq \left|\beta_{n, M_n}\right|, \quad \text{for} \quad n = 1, 2, \dots, N, \n\end{equation}\nwhere $\beta_{n, m}$ is the gain factor defined in~(\ref{eq4}). To determine the first user, the BS does not need to know the channel gains of the users. Recall that the channel gain $\beta_{n,m}$, defined in~(\ref{eq4}), mainly depends on the distance between the BS and U$_{n,m}$ ($d$) and the path loss factor ($\nu$). Since the path loss factor is identical for all users, the first user of each cluster can be determined as the user closest to the BS, whose channel gain has the highest amplitude among the users in the same cluster. While the purpose of the ordering in~(\ref{eq8}) is to define the first user, to realize NOMA, another ordering method based on the effective channel gain is presented in the third step. It should be stressed that the main reason to design the digital precoder with respect to the strongest channel is that the strongest user must decode the other users' signals before its own signal. So, the power of this user's signal is not affected by the other clusters' signals.
More details will be provided in Section~\\ref{sec:lower}.\n\nIn the second step, the effective channel for U$_{n, m}$ is expressed as\n\\begin{align}\\label{eq9}\n \\overbar{\\mathbf{h}}_{n, m}^\\dagger &= \\mathbf{w}_{n, m}^\\dagger\\mathbf{H}_{n, m}\\mathbf{F}_\\text{RF}= \\sqrt{N_\\text{BS}N_\\text{U}}\\beta_{n, m}\\mathbf{a}_\\text{BS}^\\dagger(\\varphi_{n, m})\\mathbf{F}_\\text{RF}.\n\\end{align}\nRegarding the strongest channel-based method, we write the effective channel matrix as\n\\begin{equation}\\label{eq91}\n\\overbar{\\mathbf{H}} = \\left[\\overbar{\\mathbf{h}}_{1, 1}, \\overbar{\\mathbf{h}}_{2, 1}, \\dots, \\overbar{\\mathbf{h}}_{N, 1} \\right]^\\dagger, \n\\end{equation}\nwhere $\\overbar{\\mathbf{h}}_{n, 1}$ denotes the effective channel vector of U$_{n, 1}$.\n\nDesigning a proper digital precoder $\\mathbf{F}_\\text{BB}$ can reduce the inter-cluster interference. In brief, designing the baseband precoder becomes equivalent to solving\n\\begin{equation}\\label{eq10}\n \\underset{\\{\\mathbf{f}_\\text{BB}^\\ell\\}_{\\ell\\neq n}}{\\text{minimize}} \\ I_\\text{inter}^{n, m} \\qquad \\text{subject to (\\ref{c})}.\n\\end{equation}\nwhere $I_\\text{inter}^{n, m}$ is defined in~(\\ref{eq61}). We notice that so far we have designed the analog beamformer and combiner. The only unknown parameter is the digital beamformer. In this paper, we adopt zero-forcing beamforming (ZFBF) which makes a balance between implementation complexity and performance~\\cite{spencer2004zero ,yoo2006optimality}. 
Based on ZFBF, the solution for (\\ref{eq10}) is obtained as~\\cite{alkhateeb2015limited}\n\\begin{equation}\\label{eq11}\n \\mathbf{F}_\\text{BB} = \\overbar{\\mathbf{H}}^\\dagger\\left(\\overbar{\\mathbf{H}}\\overbar{\\mathbf{H}}^\\dagger\\right)^{-1}\\bf{\\Gamma}, \n\\end{equation}\nwhere the diagonal elements of $\\mathbf{\\Gamma}$ are given by~\\cite{alkhateeb2015limited}\n\\begin{equation}\\label{eq12}\n \\mathbf{\\Gamma}_{n, n} = \\sqrt{\\frac{N_\\text{BS}N_\\text{U}}{\\left(\\mathbf{F}^{-1}\\right)_{n, n}}}\\left|\\beta_{n, 1}\\right|, \\quad \\text{for} \\quad n = 1, 2, \\dots, N.\n\\end{equation}\nwhere $\\mathbf{F}=\\mathbf{F}_\\text{RF}^\\dagger\\mathbf{F}_\\text{RF}$. The determined precoder in~(\\ref{eq11}) indicates that inter-cluster interference on first users is zero, i.e., $\\overbar{\\mathbf{h}}^\\dagger_{n, 1}\\mathbf{f}^\\ell_\\text{BB} = 0$ for $n = 1, 2, \\dots, N$ and $\\ell \\neq n$. That is, inter-cluster interference is perfectly eliminated on the first users. This completes our justification about the orienting the beams toward the first users and choosing their effective channel vector in designing $\\mathbf{F}_\\text{BB}$.\n\nIn the third step, the BS first reorders the users then allocates the power. The reordering process is done based on the effective channel vectors as\n\\begin{equation}\\label{eq121}\n \\norm[\\big]{\\overbar{\\mathbf{h}}_{n, 1}} \\geq \\norm[\\big]{\\overbar{\\mathbf{h}}_{n, 2}}\\geq \\dots\n \\geq \\norm[\\big]{\\overbar{\\mathbf{h}}_{n, M_n}}, \\quad \\text{for} \\quad n = 1, 2, \\dots, N.\n\\end{equation}\nNotice that in (\\ref{eq8}) we aimed to find the first users based on the large-scale gain. However, in HB-NOMA the power allocation is conducted based on order of the effective channel gains. It is not irrational to assume that the BS knows the effective channels. This can be done through the channel quality indicator (CQI) messages~\\cite{chen2017exploiting}. 
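As a numerical sanity check of the ZF construction, note that substituting~(\ref{eq11}) gives $\overbar{\mathbf{H}}\mathbf{F}_\text{BB} = \mathbf{\Gamma}$ exactly, so the inter-cluster terms $\overbar{\mathbf{h}}_{n, 1}^\dagger\mathbf{f}_\text{BB}^\ell$, $\ell\neq n$, vanish. The sketch below verifies this; the ULA steering-vector parameterization and all numerical values are illustrative assumptions, not part of the system model.

```python
import numpy as np

rng = np.random.default_rng(0)
N_BS, N_U, N = 32, 8, 4   # BS antennas, user antennas, clusters (example values)

def a_bs(phi):
    # Assumed ULA steering vector; phi plays the role of a directional cosine.
    return np.exp(1j * np.pi * np.arange(N_BS) * phi) / np.sqrt(N_BS)

phis = np.array([-0.6, -0.2, 0.3, 0.7])                  # first-user directions
betas = rng.normal(size=N) + 1j * rng.normal(size=N)     # placeholder complex gains

# F_RF: the n-th column is the steering vector of the first user of cluster n, cf. (eq81)
F_RF = np.column_stack([a_bs(p) for p in phis])
F = F_RF.conj().T @ F_RF                                 # Gram matrix F in (eq12)

# Effective channels of the first users, cf. (eq9)/(eq91): row n is beta_n a(phi_n)^H F_RF
H_bar = np.sqrt(N_BS * N_U) * np.diag(betas) @ F

# Gamma per (eq12) and the ZF baseband precoder per (eq11)
Gamma = np.diag(np.sqrt(N_BS * N_U / np.real(np.diag(np.linalg.inv(F)))) * np.abs(betas))
F_BB = H_bar.conj().T @ np.linalg.inv(H_bar @ H_bar.conj().T) @ Gamma

D = H_bar @ F_BB   # equals Gamma up to round-off: inter-cluster interference cancelled
```

Here $\overbar{\mathbf{H}}\mathbf{F}_\text{BB}$ comes out diagonal, confirming that the first users see no inter-cluster interference.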
Each user feeds the effective channel back to the BS, which then sorts the users.\n\nThe optimal power allocation in~(\ref{eq:opt}) can be found by solving the following problem.\n\begin{align}\label{eq13}\n \underset{P_{n, m}}{\text{maximize}} \ \sum_{n=1}^{N}\sum_{m=1}^{M_n} R_{n, m} \qquad \text{subject to (\ref{e}) and (\ref{f})}.\n\end{align}\nTo solve the problem, we propose a two-stage solution. First, the BS divides the power between the clusters considering their users' channel gains as follows.\n\begin{equation}\label{equPowerAllocation}\n P_n = \frac{\displaystyle\sum_{m=1}^{M_n}\norm[\big]{\overbar{\mathbf{h}}_{n, m}}^2}{\displaystyle\sum_{n=1}^N\sum_{m=1}^{M_n}\norm[\big]{\overbar{\mathbf{h}}_{n, m}}^2}P, \quad \text{for} \quad n=1, 2, \dots, N.\n\end{equation}\nThen a fixed power allocation is utilized for the users in each cluster respecting the constraint $\sum_{m=1}^{M_n} P_{n, m} = P_n$. To determine $P_{n,m}$, one solution is to allocate to each U$_{n,m}$ except the first one just enough power to satisfy $R_{n,m}=R_\text{min}$; the remaining power is then assigned to U$_{n,1}$. This power allocation process is consistent with the concept of NOMA in which, to achieve a higher sum-rate, the stronger user should receive more power~\cite{saito2013system, saito2013non, ding2014performance, higuchi2015non}. On the other hand, recall that mmWave channels are vulnerable to blockage and shadowing. Especially for the weak users which are located far from the BS, this issue becomes worse. So, the weak users may not be able to achieve the required minimum rate. Another solution is to give priority to the fairness issue. To this end, we need to allocate less power to the strong users and more power to the weak users. It turns out that fairness works against achieving the maximum rate.
Thus, our solution to achieve the maximum rate and compensate for the mmWave propagation issues is to assign the same amount of power to all the users, i.e.,\n\begin{equation}\n P_{n,1} = P_{n,2} = \cdots = P_{n,M_n}.\n\end{equation}\n\n\subsection{The Achievable Rate Analysis}\nIn this section, the achievable rate of U$_{n,m}$ is evaluated with respect to the designed parameters. We derive a lower bound which yields insight into the achievable rate of HB-NOMA.\n\n\begin{theorem}\label{theo:1}\n\normalfont\nWith perfect beam alignment, a lower bound on the achievable rate of U$_{n, m}$ is given by\n\begin{equation}\label{eq14}\n \overbar{R}_{n, m} \geq \text{log}_2\left(1 + \frac{P_{n, m}N_\text{BS}N_\text{U}\left|\beta_{n, m}\right|^2}{\displaystyle\sum_{k=1}^{m-1}P_{n, k}N_\text{BS}N_\text{U}\left|\beta_{n, m}\right|^2 + \sigma^2 \kappa_\text{min}^{-1}(\mathbf{F})}\right), \n\end{equation}\nwhere $\kappa_\text{min}(\mathbf{F})$ denotes the minimum eigenvalue of $\mathbf{F}$.\n\end{theorem}\n\begin{proof}\nPlease see Appendix~\ref{app:Theorem1}.\n\end{proof}\n\n\begin{remark}\label{remark:1}\n\normalfont\nTheorem 1 indicates that even when the alignment between the users in each cluster is perfect, two terms still degrade the sum-rate performance of every HB-NOMA user. The first term, $\sum_{k = 1}^{m-1}P_{n, k}N_\text{BS}N_\text{U}\left|\beta_{n, m}\right|^2$, is due to the NOMA scheme, which leads to inevitable intra-cluster interference. The second term, $\kappa_\text{min}^{-1}(\mathbf{F})$, is due to realizing the beamforming with digital and analog components, i.e., hybrid beamforming instead of fully-digital components. It is worth mentioning that in fully-digital beamforming the first term exists but the second term is always one.
Therefore, even under the perfect beam alignment assumption, hybrid beamforming intrinsically imposes a small loss on the achievable rate.\n\end{remark}\n\section{Beam Misalignment: Modeling, Rate Analysis, and Rate Gap}\label{sec:lower}\n\begin{figure}\n \n \n \centering\n \includegraphics[scale=1.3]{image\/Misalignment.pdf}\n \caption{Beam misalignment in mmWave communications due to the NLoS channels. The NLoS channels are caused by blockages B1 and B2.}\n \label{fig:misalignment}\n \end{figure}\nIn the previous section we designed the precoders when only LoS channels exist and the users are perfectly aligned. The precoders were found based on the strongest effective channel. Perfect alignment is an ideal assumption. In fact, AoDs\/AoAs are random variables and, almost surely, the users have different AoDs\/AoAs even in LoS channels, which leads to $\mathbf{a}_\text{BS}(\varphi_{n, 1}) \neq \mathbf{a}_\text{BS}(\varphi_{n, 2}) \neq \dots \neq \mathbf{a}_\text{BS}(\varphi_{n, M_n})$ for $n = 1, 2, \dots, N$. On the other hand, recall that in mmWave frequencies, due to shadowing and blockage, NLoS channels are inevitable~\cite{7593259}. These channels force the users to indirectly steer their beam toward the BS as illustrated by Fig.~\ref{fig:misalignment}. So, the misalignment between the effective channel of the first user and those of the users with misaligned LoS or NLoS channels in each cluster means that the digital baseband precoder cannot eliminate the inter-cluster interference. As a result, the achievable rate is degraded. In this section, first the misalignment is modeled. Second, using the derived model, a lower bound is found for the rate. Finally, an upper bound is derived for the rate gap between the perfect alignment and misalignment cases. \n\n\begin{remark}\n\normalfont\nWhile our findings in this section are general and hold for misaligned LoS and NLoS channels, we only concentrate on NLoS channels.
Thus, by LoS channel we mean a perfectly aligned channel. Also, it is assumed that all users except the first one in all clusters have NLoS channels. In order to distinguish the effective channels of users with aligned LoS channels from those with NLoS channels, hereafter, we denote $\overbar{\mathbf{h}}_{n, m}$ as the effective channel of a user with perfect beam alignment and $\tilde{\mathbf{h}}_{n, m}$ as the effective channel of a user with imperfect beam alignment. Also, $\overbar{R}_{n, m}$ and $\tilde{R}_{n, m}$ denote the rate of U$_{n, m}$ with a LoS and an NLoS channel, respectively. \n\end{remark}\n\n\subsection{Beam Misalignment Modeling}\nIn what follows, we study the impact of imperfect beam alignment on the rate. Before that, we calculate the norm of the effective channel defined in~(\ref{eq9}). Defining\n \begin{equation}\label{eqFejer}\n \left|\mathbf{a}_\text{BS}^\dagger(\varphi_{n, m})\mathbf{a}_\text{BS}(\varphi_{\ell, 1})\right|^2 = K_{N_\text{BS}}(\varphi_{\ell, 1}-\varphi_{n, m}), \n \end{equation}\n where $K_{N_\text{BS}}$ is the Fej$\acute{\text{e}}$r kernel of order $N_\text{BS}$~\cite{strichartz2000way}, we get\n\begin{equation}\label{eq1601}\n\norm[\big]{\tilde{\mathbf{h}}_{n, m}}^2 = N_\text{BS}N_\text{U}\left|\beta_{n, m}\right|^2\displaystyle \sum_{\ell = 1}^N K_{N_\text{BS}}\left(\varphi_{\ell, 1}-\varphi_{n, m}\right).\n \end{equation}\nNow, we model the correlation between the effective channels of U$_{n, m}$ and U$_{n, 1}$ and between those of U$_{n, m}$ and U$_{\ell, 1}$ with $\ell\neq n$ by defining the intra-cluster misalignment factor and the inter-cluster misalignment factor, respectively. Notice that we consider the worst-case scenario. That is, U$_{n, m}$ for $m=2, 3, \dots, M_n$ receives the signal through an NLoS channel, while only U$_{n, 1}$ for $n=1, 2, \dots, N$ receives through a LoS channel.
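For a concrete check of~(\ref{eqFejer}): under an assumed ULA whose $k$th entry has phase $\pi k\varphi$ (so $\varphi$ acts as a directional cosine; this parameterization is an illustrative assumption), the squared correlation of two unit-norm steering vectors equals the normalized Fej\'er-kernel expression:

```python
import numpy as np

N_BS = 32  # example array size

def a_bs(phi):
    # Assumed ULA steering vector with directional cosine phi
    return np.exp(1j * np.pi * np.arange(N_BS) * phi) / np.sqrt(N_BS)

def fejer_corr(delta, N=N_BS):
    # Closed form of |a^dagger(phi) a(phi + delta)|^2:
    # (1/N^2) * (sin(N*pi*delta/2) / sin(pi*delta/2))^2
    x = np.pi * delta
    return (np.sin(N * x / 2) / np.sin(x / 2)) ** 2 / N ** 2

deltas = [0.05, 0.13, 0.4]
lhs = [abs(np.vdot(a_bs(0.1), a_bs(0.1 + d))) ** 2 for d in deltas]
rhs = [fejer_corr(d) for d in deltas]
```

The two sides agree to machine precision, which is the identity used to pass from~(\ref{eqFejer}) to~(\ref{eq1601}).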
Assuming LoS channel for the first users is reasonable, since in mmWave communications the users close to the BS experience LoS channels with high probability~\\cite{7593259}. \n\\begin{lemma}\\label{lemma:2}\n\\normalfont\nThe misalignment effective channel of U$_{n, m}$ and U$_{n, 1}$ can be modeled as\n \\begin{equation} \\label{eq19}\n \\hat{\\tilde{\\mathbf{h}}}_{n, m} = \\rho_{n, m}\\hat{\\tilde{\\mathbf{h}}}_{n, 1} + \\sqrt{1 - \\rho_{n, m}^2}\\hat{\\mathbf{g}}^{-n}_\\text{BS}, \n\\end{equation}\nwhere $\\hat{\\tilde{\\mathbf{h}}}_{n, m}$ denotes the normalized imperfect effective channel, $\\rho_{n,m}$ denotes the misalignment factor obtained as \n\\begin{equation} \\label{eqrho19}\n \\rho_{n,m} =\n \\frac{\\displaystyle\\sum_{i=1}^N\\kappa_i(\\mathbf{F})\\left|\\mathbf{a}_\\text{BS}^\\dagger(\\varphi_{n, m})\\mathbf{v}_1^i\\mathbf{v}_1^{i\\dagger}\\mathbf{a}_\\text{BS}(\\varphi_{n, 1})\\right|}{\\sqrt{\\displaystyle\\sum_{\\ell = 1}^N K_{N_\\text{BS}}\\left(\\varphi_{\\ell, 1}-\\varphi_{n,m}\\right)}\\sqrt{\\displaystyle\\sum_{\\ell = 1}^N K_{N_\\text{BS}}\\left(\\varphi_{\\ell, 1}-\\varphi_{n,1}\\right)}}, \n\\end{equation}\nwhere $\\kappa_i(\\mathbf{F})$ is the $i$th eigenvalue of $\\mathbf{F}$. 
$\\hat{\\mathbf{g}}^{-n}_\\text{BS}$ is a normalized vector located in the subspace generated by linear combination of ${{\\mathbf{a}}}_\\text{BS}(\\varphi_{\\ell,1})$ for $\\ell \\neq n$, such that $\\hat{\\mathbf{g}}^{-n}_\\text{BS}=\\frac{\\mathbf{g}^{-n}_\\text{BS}}{\\norm[\\big]{\\mathbf{g}^{-n}_\\text{BS}}},$ where $\\mathbf{g}^{-n}_\\text{BS} = \\sqrt{N_\\text{BS}N_\\text{U}}\\mathbf{F}_\\text{RF}^\\dagger\\sum_{\\ell=1,\\ell\\neq n}^N\\beta_{\\ell,1}\\mathbf{a}_\\text{BS}(\\varphi_{\\ell,1}).\n$\n\\end{lemma}\n\\begin{proof}\nPlease see Appendix~\\ref{app:lemma2}.\n \\end{proof}\n \\begin{comment}\n \\begin{figure}\n \n \\centering\n \\begin{subfigure}[t]{0.32\\textwidth}\n \\raisebox{-\\height}{\\includegraphics[width=\\textwidth]{image\/mislaignment_32_1.eps}}\n \\caption{{\\fontsize{8}{8} \\selectfont $N_\\text{BS}=32$, $N_\\text{U}=8$ $\\varphi_{1,1}=60^\\circ$,\\\\ $\\varphi_{2,1}=75^\\circ$, $\\varphi_{3,1}=45^\\circ$, $\\varphi_{4,1}=35^\\circ$}}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[t]{0.32\\textwidth}\n \\raisebox{-\\height}{\\includegraphics[width=\\textwidth]{image\/mislaignment_64_1.eps}}\n \\caption{{\\fontsize{8}{8} \\selectfont $N_\\text{BS}=64$, $N_\\text{U}=8$ $\\varphi_{1,1}=60^\\circ$,\\\\$\\varphi_{2,1}=75^\\circ$, $\\varphi_{3,1}=45^\\circ$, $\\varphi_{4,1}=35^\\circ$}}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[t]{0.32\\textwidth}\n \\raisebox{-\\height}{\\includegraphics[width=\\textwidth]{image\/mislaignment_128_1.eps}}\n \\caption{{\\fontsize{8}{8} \\selectfont $N_\\text{BS}=128$, $N_\\text{U}=8$ $\\varphi_{1,1}=60^\\circ$,\\\\$\\varphi_{2,1}=75^\\circ$, $\\varphi_{3,1}=45^\\circ$, $\\varphi_{4,1}=35^\\circ$}}\n \\end{subfigure}\n \n\n\n\n\n\n\n\n\n \n \n \n \n \\caption{caption of main figure}\n \\label{fig:modeling}\n\\end{figure}\n \\end{comment}\n\\subsection{Rate Analysis}\nNow we are ready to find a lower bound for the achievable rate of U$_{n, m}$.\n\\begin{theorem}\\label{theo:2}\n\\normalfont\nWith 
imperfect beam alignment, a lower bound on the achievable rate of U$_{n, m}$ is given by\n\begin{equation}\label{eq16}\n \tilde R_{n, m} \geq \text{log}_2\left(1 +\frac{P_{n, m}\rho_{n, m}^2N_\text{BS}N_\text{U}\left|\beta_{n, m}\right|^2}{\zeta_\text{intra}^{n, m}+ \zeta_\text{inter}^{n, m} + \zeta_\text{noise}^{n, m}}\right), \n\end{equation}\nwhere $\zeta_\text{intra}^{n, m} = \sum_{k = 1}^{m-1}P_{n, k}\rho_{n, m}^2N_\text{BS}N_\text{U}\left|\beta_{n, m}\right|^2$\nand $\zeta_\text{inter}^{n, m} = \left(1-\rho_{n,m}^2\right)N_\text{BS}N_\text{U}\left|\beta_{n, m}\right|^2\kappa_\text{max}\left(\mathbf{S}\right)\n\kappa_\text{min}^{-1}(\mathbf{F}) \times K_{N_\text{BS}, 1}$, in which $\kappa_\text{max}\left(\mathbf{S}\right)$ is the maximum eigenvalue of $\mathbf{S} = \mathbf{F}_\text{BB}^{-n,W}\mathbf{F}_\text{BB}^{-n,W\dagger}$, and $\mathbf{F}_\text{BB}^{-n,W}$ denotes the weighted $\mathbf{F}_\text{BB}$ after eliminating the $n$th column, where the remaining columns are scaled by $P_\ell~\forall\ell\neq n$. Also, for a given $m$ we define\n\begin{equation}\label{eq163}\nK_{N_\text{BS}, m} = \displaystyle \sum_{\ell = 1}^N K_{N_\text{BS}}\left(\varphi_{\ell, 1}-\varphi_{n, m}\right), \n\end{equation}\nwhere $K_{N_\text{BS}}\left(\varphi_{\ell, 1}-\varphi_{n, m}\right)$ denotes the Fej$\acute{\text{e}}$r kernel in~(\ref{eqFejer}). Finally, $\zeta_\text{noise}^{n, m}$ is expressed as $\zeta_\text{noise}^{n, m} = \sigma^2\kappa_\text{min}^{-1}(\mathbf{F})K_{N_\text{BS}, 1}K^{-1}_{N_\text{BS}, m},$\nwhere $K_{N_\text{BS}, m}$ is defined in~(\ref{eq163}).\n \end{theorem}\n\begin{proof}\nPlease see Appendix~\ref{app:theorem2}.\n\end{proof}\n\n\begin{remark}\label{remark:2}\n\normalfont\nSince for U$_{n, 1}$ the factor $\rho_{n, 1}$ is one, we have $\overbar{\mathbf{h}}_{n, 1} = \tilde{\mathbf{h}}_{n, 1}$. Thus, Theorem~\ref{theo:1} is still valid for these users.
\n\\end{remark}\n\n\\begin{remark}\\label{remark:3}\n\\normalfont\nTheorem~\\ref{theo:2} states that the achievable rate of each user depends on the intra-cluster and inter-cluster misalignment factors, and a weak alignment reduces the power of the effective channel of that user. Intra-cluster and inter-cluster power allocation are other parameters that affect the achievable rate, as seen in~(\\ref{eq16}). Further, the bound shows that the maximum eigenvalue of the baseband precoder is important in maximizing the achievable rate. That is to say, the effective channel matrix should be designed in a way that the eigenvalues of the baseband precoder are as close as possible to each other. This is because if the eigenvalues are far from each other, the maximum eigenvalue will be large. This increases the value of $\\zeta_\\text{inter}^{n, m}$, which reduces the achievable rate. \n\\end{remark}\n\nTo gain some insight into the effect of beam misalignment, we derive an upper bound on the rate gap when U$_{n, m}$ receives the signal via LoS and NLoS channels. \n\\begin{theorem}\\label{theo:3}\n\\normalfont\nThe rate gap between the perfectly aligned and misaligned U$_{n,m}$ is bounded as\n\\begin{align}\n \\Delta R_{n,m} &\\overset{\\Delta}{=} \\overbar R_{n, m} - \\tilde R_{n, m} \\nonumber \\\\ \n &\\leq \\text{log}_2\\left(1 + \\frac{\\displaystyle \\left(1-\\rho_{n, m}^2\\right)\\kappa_\\text{max}\\left(\\mathbf{S}\\right)+\\sigma^{2}K^{-1}_{N_\\text{BS},m}N_\\text{BS}^{-1}N_\\text{U}^{-1}\\left|\\beta_{n,m}\\right|^{-2}}{\\rho_{n, m}^2 K^{-1}_{N_\\text{BS},1}\\kappa_\\text{min}(\\mathbf{F})\\displaystyle \\sum_{k=1}^{m-1}P_{n, k}}\\right).\n\\end{align}\n\\end{theorem}\n\\begin{proof}\nPlease see Appendix~\\ref{app:theorem3}.\n\\end{proof}\nThe upper bound in Theorem~\\ref{theo:3} explicitly shows the effect of the parameters of the HB-NOMA system on the rate performance. A low misalignment factor can substantially increase the rate gap. 
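To make the behavior of this bound concrete, the following standalone Python sketch evaluates the right-hand side of Theorem~\ref{theo:3} for a few misalignment factors $\rho_{n,m}$. All numerical values here ($\kappa_\text{max}(\mathbf{S})$, $\kappa_\text{min}(\mathbf{F})$, the kernel sums $K_{N_\text{BS},1}$ and $K_{N_\text{BS},m}$, the powers, and the noise variance) are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Numerical sketch of the Theorem 3 rate-gap upper bound:
#   Delta R <= log2(1 + [ (1-rho^2)*k_max_S + sigma2/(K_m*N_BS*N_U*|beta|^2) ]
#                     / [ rho^2 * k_min_F * sum_P / K_1 ])
# Every default below is an assumed, illustrative value.
def rate_gap_bound(rho, k_max_S=2.0, k_min_F=0.5, sigma2=0.1,
                   N_BS=32, N_U=8, beta_abs2=1.0,
                   K_1=30.0, K_m=25.0, sum_P=0.6):
    num = (1.0 - rho**2) * k_max_S + sigma2 / (K_m * N_BS * N_U * beta_abs2)
    den = rho**2 * k_min_F * sum_P / K_1
    return np.log2(1.0 + num / den)

# The gap bound vanishes at perfect alignment (rho = 1) and grows quickly
# as the misalignment factor decreases.
for rho in (1.0, 0.95, 0.8):
    print(f"rho = {rho:.2f}  ->  gap bound = {rate_gap_bound(rho):.3f} bits/s/Hz")
```

This mirrors the qualitative claim in the text: a low misalignment factor substantially increases the rate gap, while a small $\kappa_\text{max}(\mathbf{S})$ (well-spread baseband-precoder eigenvalues) keeps it small.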
\n\n\\begin{remark}\n\\normalfont\nIn Section~\\ref{algorithm} the users are assumed to have LoS channels and to be perfectly aligned in the same direction. In particular, Eq.~(\\ref{eq121}) orders the users with respect to their effective channels. These effective channels correspond to the strongest paths between the BS and the users. However, when the users are not aligned in the same direction, the effective channels are not necessarily the strongest. This is because the users have to orient their antenna array response vector toward the beam direction of the first user rather than the best direction. Hence, to properly perform SIC, we revise the ordering considering the misaligned effective channels, i.e.,\n\\begin{equation}\n \\norm[\\big]{\\tilde{\\mathbf{h}}_{n, 1}} \\geq \\norm[\\big]{\\tilde{\\mathbf{h}}_{n, 2}}\\geq \\dots\n \\geq \\norm[\\big]{\\tilde{\\mathbf{h}}_{n, M_n}}, \\quad \\text{for} \\quad n = 1, 2, \\dots, N.\n\\end{equation}\nFurther, in~(\\ref{equPowerAllocation}) the aligned effective channel should be replaced by the misaligned effective channel. \n\\end{remark}\n\\section{Numerical Results}\\label{sec:simulation}\nIn this section, we simulate the HB-NOMA system for various design parameters to confirm the analytical derivations in Theorems~\\ref{theo:1}-\\ref{theo:3}. In the simulations, since large-scale fading and path loss impose the main restrictions on mmWave systems, small-scale fading is neglected. The default numbers of antennas $N_\\text{BS}$ and $N_\\text{U}$ for the BS and all users are assumed to be 32 and 8, respectively, unless otherwise mentioned. The misalignment is modeled as a random variable uniformly distributed with parameter $b$, i.e., $\\varphi_{n,1}-\\varphi_{n,m} \\in [-b, b]$. We first present the results of the HB-NOMA with perfect alignment. Then, the effect of misalignment on the rate performance is shown. Finally, the sum-rate of HB-NOMA is compared with that of OMA. 
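The misalignment model used in the simulations can be sketched numerically. The snippet below draws offsets $\varphi_{n,1}-\varphi_{n,m}$ uniformly from $[-b, b]$ and measures the resulting correlation between the array responses at the first user's beam direction and at each misaligned direction. The half-wavelength ULA steering vector and all parameter values are standard assumptions used only for illustration, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def a_bs(phi_deg, n_bs):
    # Unit-norm ULA array response with half-wavelength spacing -- a standard
    # model, assumed here to stand in for the paper's a_BS(phi).
    k = np.arange(n_bs)
    return np.exp(1j * np.pi * k * np.sin(np.deg2rad(phi_deg))) / np.sqrt(n_bs)

N_BS, b = 32, 3.0      # default antenna count and misalignment spread (degrees)
phi_1 = 50.0           # beam direction of the cluster's first user (assumed)

# phi_{n,1} - phi_{n,m} ~ Uniform[-b, b], as in the simulation setup
offsets = rng.uniform(-b, b, size=1000)
rho = np.array([abs(np.vdot(a_bs(phi_1, N_BS), a_bs(phi_1 - d, N_BS)))
                for d in offsets])
print(f"min rho = {rho.min():.3f}, mean rho = {rho.mean():.3f}")
```

Even a few degrees of offset noticeably reduces the correlation for a 32-antenna array, which is why the misalignment factor $\rho_{n,m}$ drives the rate results that follow.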
\n\\subsection{Perfect Beam Alignment}\nFigure~\\ref{fig:perfect} studies the performance of the derived bound in Theorem~\\ref{theo:1} for aligned users. The users are not affected by the inter-cluster interference from other clusters. It is supposed that the number of users is two and the channel gains of the strong and weak users are 0 and -2 dB, respectively. Fig.~\\ref{fig:perfect}(a) reveals that the HB-NOMA achieves approximately the same rate as fully-digital beamforming (FD beamforming) for a wide range of SNR. In particular, a small gap between the exact value of HB-NOMA and the lower bound is observed for the strong user (U$_{1,1}$). This is because the complicated expression of the noise term in~(\\ref{eq14}) is replaced by a simple but greater term. For the weak user (U$_{1,2}$) the bound is very tight for two reasons. First, in the SINR of the weak user, the noise term is dominated by the interference term. Therefore, the effect of the noise term is negligible. Second, the interference term is modeled very accurately.\\\\\nFig.~\\ref{fig:perfect}(b) studies the achievable rate for various $N_\\text{BS}$. For small $N_\\text{BS}$, the fully-digital system outperforms the HB-NOMA. When $N_\\text{BS}$ is small, the RF precoder is not able to steer a highly directive beam toward the users. By increasing $N_\\text{BS}$, the beam becomes narrower and the users capture much more power. Again, for the weak user, the lower bound is accurate in all $N_\\text{BS}$ regions. For the strong user, the bound does not approach the exact value, but for $N_\\text{BS}>60$ it is approximately the same as the exact HB-NOMA rate. 
\n\n\n \\begin{figure}\n \\centering\n \\begin{subfigure}[t]{0.45\\textwidth}\n \\centering\n \\raisebox{-\\height}{\\includegraphics[scale=0.6]{image\/Perfect_ARVsSNR.eps}}\n \\caption{}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[t]{0.45\\textwidth}\n \\raisebox{-\\height}{\\includegraphics[scale=0.6]{image\/Perfect_ARVsNBS.eps}}\n \\caption{}\n \\end{subfigure}\n \\caption{Evaluation of rate performance of the strong channel-based precoder in HB-NOMA with perfect alignment (LoS channels) in terms of (a) SNR and (b) $N_\\text{BS}$.}\n \\label{fig:perfect}\n\\end{figure}\n \n\\subsection{Beam Misalignment}\n \n\\begin{figure}\n \\centering\n \\begin{subfigure}[t]{0.45\\textwidth}\n \\raisebox{-\\height}{\\includegraphics[scale=0.6]{image\/Imperfect_ARVsSNR.eps}}\n \\caption{}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[t]{0.45\\textwidth}\n \\raisebox{-\\height}{\\includegraphics[scale=0.6]{image\/Imperfect_ARVsUserIndex.eps}}\n \\caption{}\n \\end{subfigure}\n \n \n \n \\begin{subfigure}[t]{.45\\textwidth}\n \\centering\n \\raisebox{-\\height}{\\includegraphics[scale=0.6]{image\/Imperfect_SRVsMn.eps}}\n \\caption{}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[t]{.45\\textwidth}\n \\centering\n \\raisebox{-\\height}{\\includegraphics[scale=0.6]{image\/Imperfect_GapRateVsUserIndex.eps}}\n \\caption{}\n \\end{subfigure}\n \\caption{Evaluation of the misalignment on the rate performance of HB-NOMA versus (a) SNR, (b) user index, and (c) number of users per cluster ($M_n$). Also, (d) demonstrates the rate gap among the different misaligned users.}\n \\label{fig:imperfect}\n\\end{figure}\n\nThe beam misalignment effect is depicted by Fig.~\\ref{fig:imperfect}. We consider five clusters in which $\\varphi_{1,1}=10^\\circ$, $\\varphi_{2,1}=30^\\circ$, $\\varphi_{3,1}=50^\\circ$, $\\varphi_{4,1}=65^\\circ$, and $\\varphi_{5,1}=80^\\circ$. 
All simulations have been done for the middle cluster (the third cluster), which likely experiences similar interference from all the other clusters. Also, the channel gain of the strongest user is 0 dB and each subsequent user's gain drops by 1 dB; that is, the channel gain of U$_{n,m}$ is $-(m-1)$ dB. In Fig.~\\ref{fig:imperfect}(a), (b), and (d), the number of users in the third cluster is 10.\n\nIn Fig.~\\ref{fig:imperfect}(a) the achievable rate of two misaligned users U$_{3,2}$ (the strong user) and U$_{3,10}$ (the weak user) versus SNR is shown, where the channel gains are -1 and -9 dB, respectively. The misalignment parameter is assumed to be $b=3$. The number of users in all the other clusters is equal to five. Two different observations are obtained. Increasing the SNR leads to a larger rate gap between the perfectly aligned and the misaligned HB-NOMA for the strong user, whereas for the weak user both HB-NOMAs achieve almost the same rate for all SNRs. This demonstrates that the effect of misalignment on the strong users is greater than on the weak users. In other words, the weak users must deal with the intra-cluster interference while the strong users must deal with the inter-cluster interference. The other observation is that the lower bound is loose for the strong user but tight for the weak user. This indicates that our derived normalized effective channel model in Lemma~\\ref{lemma:2} is precise for those users that are intra-cluster-interference limited. That is, our model captures the intra-cluster interference exactly. However, the loose lower bound for the strong user indicates that the inter-cluster interference model is slightly inaccurate, which is due to approximating an $(N-1)$-dimensional subspace with a one-dimensional space, as described in Appendix~\\ref{app:lemma2}. \\\\\nTo gain more detail, we simulated the achievable rate of all the misaligned users for SNR=15 dB in Fig.~\\ref{fig:imperfect}(b). 
Also, the number of users in the other clusters is set to 15. The two aforementioned observations can be seen in this figure as well. However, for the strong user, the rate gap between the perfect HB-NOMA and misaligned HB-NOMA is smaller than that of Fig.~\\ref{fig:imperfect}(a). Another important observation gained from Fig.~\\ref{fig:imperfect}(b) is the impact of the power allocation among the clusters. Based on the proposed power allocation scheme in~(\\ref{equPowerAllocation}), to achieve a higher rate, more power is assigned to the other clusters than to the third cluster, which causes U$_{3,2}$ to achieve the rate 0.91 bits\/s\/Hz. In contrast, in the previous scenario more power is allocated to the third cluster, which has more users. Therefore, the rate of U$_{3,2}$ is 0.88 bits\/s\/Hz. This shows that, due to the misalignment, stronger clusters lead to higher inter-cluster interference. \n\nFig.~\\ref{fig:imperfect}(c) compares the sum-rate performance of all the misaligned users with that of the perfectly aligned HB-NOMA users. As in Fig.~\\ref{fig:imperfect}(b), we set SNR=15 dB and 15 users for all the clusters except the third. The number of users in the third cluster varies from 5 to 35. Notice that the sum-rate is shown only for the misaligned users, i.e., the rate of the first user is excluded. By increasing the number of users, the power allocated to the cluster increases. Consequently, the total rate increases. However, the gap between the aligned and misaligned HB-NOMA widens. Although more users in a cluster means more power is allocated to it, the number of inter-cluster-interference-limited users increases as well. As a result, the rate loss grows. Indeed, for a worse misalignment parameter ($b$=6), the rate loss becomes larger. It can be concluded that, to avoid a larger rate loss, HB-NOMA should schedule an equal number of users per cluster. 
\n\nThe upper bound on the rate gap between perfect alignment and misalignment is evaluated in Fig.~\\ref{fig:imperfect}(d). The number of users in the other clusters is 5 or 15. For SNR=30 dB and $b$=3, the gap is not substantial and the bound is close to the actual value. When $b$ becomes larger, the gap for the stronger users is bigger than that for the weaker users. When the number of users in the other clusters increases and the SNR is simultaneously reduced, only the stronger users' gap increases. To clarify, for U$_{3,2}$ to U$_{3,5}$, the gap becomes larger, while for the remaining users it is unchanged. The bounds for $b$=6 are not very close to the exact rate gap curves. The main reason is that, in deriving the bound in the second line of~(\\ref{eq20}) in Appendix~\\ref{app:theorem3}, the inter-cluster interference term is neglected. However, for high misalignment values this interference is considerable, which causes the extracted bound to be less accurate for higher misalignment. \n\n\\begin{figure}[t] \\includegraphics[scale=.6]{image\/NOMAVsOMA.eps}\n\\centering\n \\caption{Sum-rate comparison of the three different systems. The fully-digital and hybrid beamforming systems serve the users using NOMA. The analog system supports the users by exploiting OMA.}\n \\label{fig:nomaoma} \n\\end{figure}\n\nOur HB-NOMA is compared with the traditional OMA technique in Fig.~\\ref{fig:nomaoma}. We choose TDMA for OMA. To gain some insights, three different mmWave systems are evaluated: fully-digital beamforming, hybrid beamforming, and analog beamforming. For the fully-digital system we assume $N_\\text{BS}=N_\\text{RF}$=32, serving 8 clusters. Likewise, for hybrid beamforming we have $N_\\text{BS}$=32 but $N_\\text{RF}$=8. Both fully-digital and hybrid systems support 8 clusters of users. The first cluster has an AoD of $10^\\circ$, and the AoD of each subsequent cluster increases by $10^\\circ$. 
Further, the users inside each cluster are distributed in a way that the maximum channel gain difference between the strongest and weakest user is 18 dB. Specifically, the channel gain of the strongest user is 0 dB. The first cluster contains 4 users and each subsequent cluster serves two more users than the previous one. In total, thanks to the NOMA technique, both systems support 88 users in each time slot. For OMA, we assume the analog beamforming system equipped with only one RF chain is able to serve one user per time slot. For U$_{n,m}$, the achievable rate of OMA is $\\text{log}_2(1+P|\\mathbf{w}_{n,m}\\mathbf{H}_{n,m}\\mathbf{f}_\\text{RF}|^2\/\\sigma^2)$. As expected, the fully-digital NOMA system achieves the highest sum-rate performance. The HB-NOMA with perfect alignment achieves approximately the same rate as the fully-digital one. For $b$=2, the misaligned HB-NOMA performs very close to the perfect HB-NOMA. By increasing $b$, the performance slightly decreases. There is a huge rate difference between HB-NOMA and OMA. We conclude that, even in the presence of misalignment, HB-NOMA outperforms OMA. \n\n \\section{Conclusion}\\label{sec:conclusion}\nA hybrid beamforming-based NOMA has been designed for the downlink of a single-cell mmWave communication system. To study the achievable rate of an HB-NOMA user, \nwe first formulated an optimization problem for the sum-rate of all users in the cell and then proposed an algorithm to solve it in three steps based on the strongest user precoder design. In order to evaluate the sum-rate, we found a lower bound for the achievable rate of each user under perfect and imperfect beam alignment between the effective channels of the users in each cluster. The lower bound analysis demonstrates that perfect HB-NOMA achieves a sum-rate close to that of a fully-digital precoder. For the imperfect correlation, the relationship between the effective channels of the first user and the other users inside a cluster was modeled. 
The bound for the misalignment shows that the rate is highly dependent on the misalignment angle: a large misalignment angle can cause a significant reduction in the achievable rate. Further, for each user, the rate gap between the perfect and imperfect alignment is bounded. The simulation results confirmed our findings. \n\\appendices\n\n\\section{Proof of Theorem~\\ref{theo:1}}\\label{app:Theorem1}\n\\begin{proof}\nGiven the perfect alignment assumption and (\\ref{eq9}), the effective channel vector for U$_{n, m}$ becomes\n\\begin{align}\\label{eq141}\n\\overbar{\\mathbf{h}}_{n, m}^\\dagger &= \\sqrt{N_\\text{BS}N_\\text{U}}\\beta_{n, m}\\mathbf{a}_\\text{BS}^\\dagger(\\varphi_{n, m})\\mathbf{F}_\\text{RF} = \\beta_{n, m}\\beta_{n, 1}^{-1}\\overbar{\\mathbf{h}}^\\dagger_{n, 1}.\n\\end{align}\nOn the other hand, we have\n\\begin{equation}\\label{eq142}\n\\overbar{\\mathbf{h}}^\\dagger_{n, 1}\\mathbf{f}^\\ell_\\text{BB} =\n\\begin{cases}\n\\boldsymbol{\\Gamma}_{n, n}, \\quad \\text{for} \\ \\ell = n, \\ n = 1, 2, \\dots, N, \\\\\n0, \\quad \\quad \\text{ for} \\ \\ell \\neq n.\n\\end{cases}\n\\end{equation}\nTherefore, using~(\\ref{eq141}) and~(\\ref{eq142}) the numerator in~(\\ref{eq6}) becomes\n\\begin{equation}\\label{eq143}\n P_{n, m}\\left|\\beta_{n, m}\\right|^2\\left|\\beta_{n, 1}\\right|^{-2}\\mathbf{\\Gamma}_{n, n}^2.\n\\end{equation}\nAlso, the intra-cluster interference in (\\ref{eq61}) becomes $I_\\text{intra}^{n, m} = \\sum_{k = 1}^{m-1}P_{n, k}\\left|\\beta_{n, m}\\right|^2\\left|\\beta_{n, 1}\\right|^{-2}\\mathbf{\\Gamma}_{n, n}^2,$\nand the inter-cluster interference term becomes zero, i.e., $I_\\text{inter}^{n, m} = 0.$\n\n\\noindent Now, substituting~(\\ref{eq143}) and the determined $I_\\text{intra}^{n,m}$ and $I_\\text{inter}^{n,m}$ in~(\\ref{eq6}) gives\n\\begin{align}\\label{eq15}\n \\overbar{R}_{n, m} & = \\text{log}_2\\left(1 + \\frac{P_{n, m}\\left|\\beta_{n, m}\\right|^2\\left|\\beta_{n, 1}\\right|^{-2}\\mathbf{\\Gamma}_{n, 
n}^2}{\\displaystyle\\sum_{k = 1}^{m-1}P_{n, k}\\left|\\beta_{n, m}\\right|^2\\left|\\beta_{n, 1}\\right|^{-2}\\mathbf{\\Gamma}_{n, n}^2 + \\sigma^2}\\right) \\nonumber \\\\\n & \\overset{(a)}{=} \\text{log}_2\\left(1 + \\frac{P_{n, m}N_\\text{BS}N_\\text{U}\\left|\\beta_{n, m}\\right|^2}{\\displaystyle\\sum_{k = 1}^{m-1}P_{n, k}N_\\text{BS}N_\\text{U}\\left|\\beta_{n, m}\\right|^2 + \\sigma^2\\left(\\mathbf{F}^{-1}\\right)_{n, n}}\\right)\\nonumber \\\\\n &\\overset{(b)}{\\geq} \\text{log}_2\\left(1 + \\frac{P_{n, m}N_\\text{BS}N_\\text{U}\\left|\\beta_{n, m}\\right|^2}{\\displaystyle\\sum_{k = 1}^{m-1}P_{n, k}N_\\text{BS}N_\\text{U}\\left|\\beta_{n, m}\\right|^2 +\\sigma^2 \\kappa_\\text{min}^{-1}(\\mathbf{F})}\\right), \n\\end{align}\nwhere ($a$) follows by plugging~(\\ref{eq12}) into the expression in the first line of (\\ref{eq15}) and using simple manipulations. To get ($b$), we note that $\\mathbf{F}_\\text{RF}$ is a full-rank matrix, which means $\\mathbf{F}=\\mathbf{F}_\\text{RF}\\mathbf{F}_\\text{RF}^\\dagger$ is positive definite. Then, we have\n$\\left({{{\\mathbf{F}}}}^{-1}\\right)_{n,n}\\leq\\kappa_\\text{max}\\left({{{\\mathbf{F}}}}^{-1}\\right)=\\kappa_\\text{min}^{-1}\\left({{{\\mathbf{F}}}}\\right)$ in which $\\kappa_\\text{max}(\\cdot)$ and $\\kappa_\\text{min}(\\cdot)$ denote the maximum and minimum eigenvalues of $(\\cdot)$. \n\\end{proof}\n\n\\section{Proof of Lemma~1}\\label{app:lemma2}\n \\begin{proof}\nSuppose that the effective channel vectors are fed back by using infinite-resolution codebooks. 
Also, let $\\hat{\\tilde{\\mathbf{h}}}_{n, m}$ denote the normalized effective channel vector for U$_{n, m}$, i.e., \n\\begin{equation}\\label{eq40}\n\\hat{\\tilde{\\mathbf{h}}}_{n, m} = \\frac{\\tilde{\\mathbf{h}}_{n, m}}{\\norm{\\tilde{\\mathbf{h}}_{n, m}}}.\n\\end{equation}\n\nThe angle between two complex-valued vectors $\\tilde{\\mathbf{h}}_{n,m}$ and $ \\tilde{\\mathbf{h}}_{n,1} \\in V_\\mathbb{C}$, denoted by $\\Phi_\\text{C}$, is obtained as $\n\\text{cos}\\Phi_\\text{C}\\overset{\\Delta}{=} \\rho_{n,m} e^{j\\omega_{n,m}} = \\hat{\\tilde{\\mathbf{h}}}_{n, 1}^{\\dagger}\\hat{\\tilde{\\mathbf{h}}}_{n, m},$ where $\\rho_{n,m}$ $(\\rho_{n,m}\\leq 1)$ is equal to\n$\\rho_{n,m} = \\text{cos}{\\Phi}_\\text{H}(\\hat{\\tilde{\\mathbf{h}}}_{n, 1}, \\hat{\\tilde{\\mathbf{h}}}_{n, m}) = \\left|\\hat{\\tilde{\\mathbf{h}}}_{n, 1}^{\\dagger}\\hat{\\tilde{\\mathbf{h}}}_{n, m}\\right|,$\nin which $\\Phi_\\text{H}(\\hat{\\tilde{\\mathbf{h}}}_{n, 1}, \\hat{\\tilde{\\mathbf{h}}}_{n, m})$, $0\\leq \\Phi_\\text{H}\\leq\\frac{\\pi}{2}$, is the Hermitian angle between the two complex-valued vectors $\\tilde{\\mathbf{h}}_{n,1}$ and $\\tilde{\\mathbf{h}}_{n,m}$, and $\\omega_{n,m}$, $-\\pi\\leq\\omega_{n,m}\\leq\\pi$, is called their pseudo-angle~\\cite{scharnhorst2001angles}. The factor $\\rho_{n,m}$ describes the angle between the two lines in the complex-valued vector space $V_\\mathbb{C}$~\\cite{scharnhorst2001angles}.\n\nTo ease the analysis, the angle $\\omega_{n,m}$ is neglected~\\cite{scharnhorst2001angles}. Hence, we find the angle between the two lines which are defined by the two vectors $\\hat{\\tilde{\\mathbf{h}}}_{n, 1}$ and $\\hat{\\tilde{\\mathbf{h}}}_{n, m}$. Considering these two vectors as two lines in the space $V_\\mathbb{C}$ would be optimistic. However, the simulation results reveal that the derived misalignment model is still effective: the extracted lower bound for the sum-rate using the misalignment model is close to the exact value of the sum-rate. 
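As a sanity check on this angle decomposition, the following self-contained Python snippet computes the misalignment factor $\rho$ and the pseudo-angle $\omega$ for two vectors, and verifies that the normalized misaligned vector splits into a component of norm $\rho$ along the first vector plus an orthogonal remainder of norm $\sqrt{1-\rho^2}$. The randomly drawn vectors stand in for the effective channels and are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two complex vectors playing the roles of h_{n,1} and h_{n,m} (illustrative).
h1 = rng.standard_normal(8) + 1j * rng.standard_normal(8)
hm = rng.standard_normal(8) + 1j * rng.standard_normal(8)
h1_hat = h1 / np.linalg.norm(h1)
hm_hat = hm / np.linalg.norm(hm)

# cos(Phi_C) = rho * e^{j*omega} = h1_hat^dagger hm_hat
# (np.vdot conjugates its first argument, giving the Hermitian inner product)
c = np.vdot(h1_hat, hm_hat)
rho, omega = np.abs(c), np.angle(c)   # misalignment factor and pseudo-angle

# hm_hat = (component along h1_hat) + (orthogonal remainder), with norms
# rho and sqrt(1 - rho^2), respectively.
parallel = c * h1_hat
orthogonal = hm_hat - parallel
assert np.isclose(np.linalg.norm(parallel), rho)
assert np.isclose(np.linalg.norm(orthogonal), np.sqrt(1 - rho**2))
print(f"rho = {rho:.3f}, omega = {omega:.3f} rad")
```

This is exactly the split used in the rest of the proof: a fraction $\rho_{n,m}$ of the (normalized) misaligned channel lies along $\hat{\tilde{\mathbf{h}}}_{n,1}$, and the remaining $\sqrt{1-\rho_{n,m}^2}$ leaks into the orthogonal subspace.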
\n\n\nFor $\\ell=n$, the misalignment factor $\\rho_{n,m}$ can be calculated as\n\\begin{align}\\label{eqA1}\n \\rho_{n,m} \\overset{\\Delta}{=} \\left|\\hat{\\tilde{\\mathbf{h}}}_{n, 1}^{\\dagger}\\hat{\\tilde{\\mathbf{h}}}_{n, m}\\right| &\\overset{(a)}{=}\n \\frac{N_\\text{BS}N_\\text{U}\\left|\\beta_{n, m}\\beta_{n,1}\\mathbf{a}_\\text{BS}^\\dagger(\\varphi_{n, m})\\mathbf{F}_\\text{RF}\\mathbf{F}_\\text{RF}^\\dagger\\mathbf{a}_\\text{BS}(\\varphi_{n, 1}) \\right|}{\\norm{\\tilde{\\mathbf{h}}_{n, m}}\\norm{\\tilde{\\mathbf{h}}_{n, 1}}} \\nonumber \\\\\n &\\overset{(b)}{=}\n \\frac{N_\\text{BS}N_\\text{U}\\left|\\beta_{n,m}\\beta_{n,1}\\mathbf{a}_\\text{BS}^\\dagger(\\varphi_{n, m})\\mathbf{V}_1\\mathbf{\\Lambda}_1\\mathbf{V}_1^\\dagger\\mathbf{a}_\\text{BS}(\\varphi_{n, 1})\\right|}{\\norm{\\tilde{\\mathbf{h}}_{n, m}}\\norm{\\tilde{\\mathbf{h}}_{n,1}}}\\nonumber \\\\\n &\\overset{(c)}{=}\n \\frac{\\displaystyle\\sum_{i=1}^N\\kappa_i\\left|\\mathbf{a}_\\text{BS}^\\dagger(\\varphi_{n, m})\\mathbf{v}_1^i\\mathbf{v}_1^{i\\dagger}\\mathbf{a}_\\text{BS}(\\varphi_{n, 1})\\right|}{\\sqrt{\\displaystyle\\sum_{\\ell = 1}^N K_{N_\\text{BS}}\\left(\\varphi_{\\ell, 1}-\\varphi_{n,m}\\right)}\\sqrt{\\displaystyle\\sum_{\\ell = 1}^N K_{N_\\text{BS}}\\left(\\varphi_{\\ell, 1}-\\varphi_{n,1}\\right)}}.\n\\end{align}\nTo get $(a)$, the expression in~(\\ref{eq9}) is used. To get (b), we apply SVD to the Hermitian matrix $\\mathbf{F}_\\text{RF}\\mathbf{F}_\\text{RF}^\\dagger$ which gives $\\mathbf{F}_\\text{RF}\\mathbf{F}_\\text{RF}^\\dagger=\\mathbf{V}\\mathbf{\\Lambda}\\mathbf{V}^\\dagger$ where $\\mathbf{V}$ of size $N_\\text{BS}\\times N_\\text{BS}$ is a unitary matrix and $\\mathbf{\\Lambda}$ of size $N_\\text{BS}\\times N_\\text{BS}$ is a diagonal matrix of singular values ordered in decreasing order. 
We then partition the two matrices $\\mathbf{V}$ and $\\mathbf{\\Lambda}$ as \n\\begin{equation}\n \\mathbf{V}=\\begin{bmatrix}\n \\mathbf{V}_1 & \\mathbf{V}_2\n \\end{bmatrix}\n , \\quad \\mathbf{\\Lambda}=\\begin{bmatrix}\n \\mathbf{\\Lambda}_1 & \\mathbf{0} \\\\\n \\mathbf{0} & \\mathbf{0}\n \\end{bmatrix},\n\\end{equation}\nwhere $\\mathbf{V}_1$ is of size $N_\\text{BS}\\times N$ and $\\mathbf{\\Lambda}_1$ is of size $N\\times N$. We note that rank($\\mathbf{F}_\\text{RF}$)$=N$. ($c$) follows from the fact that $\\mathbf{\\Lambda}_1$ is a diagonal matrix with elements $\\kappa_i$ for $i=1, 2, \\dots, N$. Notice that $\\mathbf{v}_1^i$ represents the $i$th column of $\\mathbf{V}_1$.\n\nFor $\\ell\\neq n$, it is reasonable to assume that a fraction $\\sqrt{1-\\rho_{n,m}^2}$ of the amplitude of $\\tilde{\\mathbf{h}}_{n,m}$ leaks into the subspace generated by the other first users. To determine this subspace, we start by considering the impact of the misalignment imposed by the other first users on U$_{n,m}$, i.e., $\\displaystyle {\\sum_{\\ell=1,\\ell\\neq n}^N\\left|{\\tilde{\\mathbf{h}}}_{\\ell, 1}^{\\dagger}{\\tilde{\\mathbf{h}}}_{n, m}\\right|^2}$.\nUsing the definition of the vector norm, we rewrite this expression as follows: \n\\begin{align}\n \\sum_{\\ell=1,\\ell\\neq n}^N\\left|{\\tilde{\\mathbf{h}}}_{\\ell, 1}^{\\dagger}{\\tilde{\\mathbf{h}}}_{n, m}\\right|^2&=\\norm[\\Big]{{\\tilde{\\mathbf{h}}}_{n, m}^\\dagger\\begin{bmatrix}{\\tilde{\\mathbf{h}}}_{1, 1} & \\cdots & {\\tilde{\\mathbf{h}}}_{n-1, 1} & {\\tilde{\\mathbf{h}}}_{n+1, 1} & \\cdots & {\\tilde{\\mathbf{h}}}_{N, 1}\n \\end{bmatrix}}^2\\nonumber \\\\\n &\\overset{(a)}{=}{N_\\text{BS}N_\\text{U}}\\norm[\\Big]{{\\tilde{\\mathbf{h}}}_{n, m}^\\dagger\\mathbf{F}_\\text{RF}^\\dagger\n \\bigl[\\beta_{1,1}\\mathbf{a}_\\text{BS}\\left(\\varphi_{1, 1}\\right) \\text{ } \\cdots \\text{ } \\beta_{n-1,1}\\mathbf{a}_\\text{BS}\\left(\\varphi_{n-1, 1}\\right) \\nonumber \\\\ \n & \\qquad \\qquad \\qquad \\qquad \\qquad 
\\beta_{n+1,1}\\mathbf{a}_\\text{BS}\\left(\\varphi_{n+1, 1}\\right) \\text{ } \\cdots \\text{ } \\beta_{N,1}\\mathbf{a}_\\text{BS}\\left(\\varphi_{N, 1}\\right)\n \\bigr]}^2\\nonumber \\\\\n &\\overset{(b)}{=}{N_\\text{BS}N_\\text{U}}\\norm[\\Big]{{\\tilde{\\mathbf{h}}}_{n, m}^\\dagger\\mathbf{F}_\\text{RF}^\\dagger\\mathbf{A}_\\text{BS}^{-n}}^2.\n\\end{align}\nTo get ($a$), we replace $\\tilde{\\mathbf{h}}_{\\ell,1}$ by~(\\ref{eq9}). Since the $\\mathbf{a}_\\text{BS}\\left(\\varphi_{n,1}\\right)$s are linearly independent vectors, $\\mathbf{G}_\\text{BS}^{-n}=\\sqrt{N_\\text{BS}N_\\text{U}}\\mathbf{F}_\\text{RF}^\\dagger\\mathbf{A}_\\text{BS}^{-n}$ determines an $(N-1)$-dimensional subspace. We represent the weighted linear combination of $\\hat{\\tilde{\\mathbf{h}}}_{\\ell, 1}^{\\dagger}$ by a new vector $\\mathbf{g}_\\text{BS}^{-n}$, which lies in the subspace spanned by the columns of $\\mathbf{G}_\\text{BS}^{-n}$. So, we get $\\mathbf{g}_\\text{BS}^{-n}=\\sqrt{N_\\text{BS}N_\\text{U}}\\mathbf{F}_\\text{RF}^\\dagger \\displaystyle\\sum_{\\ell=1,\\ell\\neq n}^N\\sqrt{P_\\ell}\\beta_{\\ell,1}\\mathbf{a}_\\text{BS}(\\varphi_{\\ell,1})$. To get~(\\ref{eq19}), we only need to normalize $\\mathbf{g}_\\text{BS}^{-n}$. \n\\end{proof}\n\n\\section{Proof of Theorem~2}\\label{app:theorem2}\n\\begin{proof}\nUsing~(\\ref{eq19}), we obtain the following expressions. 
First, \n\\begin{align}\\label{eq191}\n\\left|\\tilde{\\mathbf{h}}^{\\dagger}_{n, m}\\mathbf{f}_\\text{BB}^n\\right|^2 &= \\rho_{n, m}^2\\norm[\\Big]{\\tilde{\\mathbf{h}}_{n, m}}^2\\left|\\hat{\\tilde{\\mathbf{h}}}^{\\dagger}_{n, 1}\\mathbf{f}_\\text{BB}^n\\right|^2 + \\left(1 - \\rho_{n, m}^2\\right)\\norm[\\Big]{\\tilde{\\mathbf{h}}_{n, m}}^2\\left|\\hat{\\mathbf{g}}^{-n\\dagger}_\\text{BS}\\mathbf{f}_\\text{BB}^n\\right|^2 \\nonumber \\\\\n& \\overset{(a)}{=} \\rho_{n, m}^2\\norm[\\Big]{\\tilde{\\mathbf{h}}_{n, m}}^2\\left|\\hat{\\tilde{\\mathbf{h}}}^{\\dagger}_{n, 1}\\mathbf{f}_\\text{BB}^n\\right|^2 \\overset{(b)}{=} \\rho_{n, m}^2\\norm[\\Big]{\\tilde{\\mathbf{h}}_{n, m}}^2\\norm[\\Big]{\\tilde{\\mathbf{h}}_{n, 1}}^{-2}\\boldsymbol{\\Gamma}_{n, n}^2, \n\\end{align}\nin which (a) follows since $\\hat{\\mathbf{g}}^{-n\\dagger}_\\text{BS}\\mathbf{f}_\\text{BB}^n = 0$ and (b) follows from~(\\ref{eq142}). Second, \n\\begin{equation}\\label{eq192}\n\\left|\\tilde{\\mathbf{h}}^{\\dagger}_{n, m}\\mathbf{f}_\\text{BB}^\\ell\\right|^2 =\n\\left(1-\\rho_{n,m}^2\\right)\\norm[\\Big]{\\tilde{\\mathbf{h}}_{n, m}}^2\\Big|\\hat{\\mathbf{g}}^{-n\\dagger}_\\text{BS}\\mathbf{f}_\\text{BB}^\\ell\\Big|^2, \\quad \\text{for} \\quad \\ell \\neq n.\n\\end{equation}\nNext, using~(\\ref{eq12}), ~(\\ref{eq142}), ~(\\ref{eq1601}), ~(\\ref{eq40}), and~(\\ref{eq191}), ~(\\ref{eq61}) becomes\n\\begin{align}\\label{eq30}\nI_\\text{intra}^{n, m} =& \\sum_{k = 1}^{m-1}P_{n, k}\\rho_{n, m}^2N_\\text{BS}N_\\text{U}\\left|\\beta_{n, m}\\right|^2\\left(\\mathbf{F}^{-1}\\right)_{n, n}^{-1}K_{N_\\text{BS}, m}K_{N_\\text{BS}, 1}^{-1}, \n\\end{align}\nwhere $K_{N_\\text{BS}, 1}$ and $K_{N_\\text{BS}, m}$ are defined in~(\\ref{eq163}).\nLikewise, using~(\\ref{eq142}), ~(\\ref{eq1601}), ~(\\ref{eq40}), and~(\\ref{eq192}), ~(\\ref{eq62}) becomes\n\\begin{align}\\label{eq31}\nI_\\text{inter}^{n, m} = & \\left(1-\\rho_{n,m}^2\\right)N_\\text{BS}N_\\text{U}\\left|\\beta_{n, m}\\right|^2\\sum_{\\ell \\neq 
n}^NP_\\ell\\left|\\hat{\\mathbf{g}}_\\text{BS}^{-n\\dagger}{\\mathbf{f}}_\\text{BB}^\\ell\\right|^2K_{N_\\text{BS}, m}.\n\\end{align}\nFurther, after substituting~(\\ref{eq191}), ~(\\ref{eq30}) and~(\\ref{eq31}) into~(\\ref{eq6}), we get\n\\begin{align}\\label{eq32}\n \\tilde R_{n, m} &= \\text{log}_2\\left(1+ \\frac{\\Psi}{I_\\text{intra}^{n, m}+ I_\\text{inter}^{n, m} + \\sigma^2}\\right)\\overset{(a)}{\\geq} \\text{log}_2\\left(1+ \\frac{\\Psi}{I_\\text{intra}^{n, m}+ \\varsigma_\\text{inter}^{n, m} + \\sigma^2}\\right), \n\\end{align}\nwhere $\\Psi =P_{n, m}\\rho_{n,m}^2N_\\text{BS}N_\\text{U}\\left|\\beta_{n, m}\\right|^2\\left(\\mathbf{F}^{-1}\\right)_{n, n}^{-1} K_{N_\\text{BS}, m}K_{N_\\text{BS}, 1}^{-1},$\nand $\\varsigma_\\text{inter}^{n, m} = \\left(1-\\rho_{n,m}^2\\right)N_\\text{BS}N_\\text{U}\\left|\\beta_{n, m}\\right|^2 \\times\n\\kappa_\\text{max}(\\mathbf{S}) K_{N_\\text{BS}, m}$. To get (a), we use the following lemma.\n\n\\begin{lemma}\\label{lemma:3}\n\\normalfont\nAn upper bound of\n$\\displaystyle\\sum_{\\ell=1, \\ell \\neq n}^NP_\\ell\\left|\\hat{\\mathbf{g}}_\\text{BS}^{-n\\dagger}{\\mathbf{f}}_\\text{BB}^\\ell\\right|^2$ is the maximum eigenvalue of $\\mathbf{S}$, i.e., $\\kappa_\\text{max}(\\mathbf{S})$.\n\\end{lemma}\n\\begin{proof}\nWe rewrite $\\displaystyle\\sum_{\\ell=1, \\ell \\neq n}^NP_\\ell\\left|\\hat{\\mathbf{g}}_\\text{BS}^{-n\\dagger}{\\mathbf{f}}_\\text{BB}^\\ell\\right|^2 = \\norm[\\big]{\\hat{\\mathbf{g}}_\\text{BS}^{-n\\dagger}\\mathbf{F}_\\text{BB}^{-n,W}}^2_2$. Maximizing $\\norm[\\big]{\\hat{\\mathbf{g}}_\\text{BS}^{-n\\dagger}\\mathbf{F}_\\text{BB}^{-n,W}}^2_2$ given $\\norm[\\big]{\\hat{\\mathbf{g}}_\\text{BS}^{-n}} = 1$ is similar to maximizing a beamforming vector for maximum ratio transmission systems~\\cite{love2003grassmannian, dighe2003analysis}. 
Hence, the maximizing $\\hat{\\mathbf{g}}_\\text{BS}^{-n}$ is the dominant left singular vector of $\\mathbf{F}_\\text{BB}^{-n,W}$~\\cite{love2003grassmannian, dighe2003analysis}. Thus, the maximum of $\\norm[\\big]{\\hat{\\mathbf{g}}_\\text{BS}^{-n\\dagger}\\mathbf{F}_\\text{BB}^{-n,W}}^2_2$ is equal to the maximum eigenvalue of $\\mathbf{S}$.\n\\end{proof}\nLemma~\\ref{lemma:3} indicates that $I_\\text{inter}^{n, m} \\leq \\varsigma_\\text{inter}^{n, m}$. After some manipulations, we obtain\n\\begin{align}\\label{eq321}\n \\tilde R_{n, m} & {\\geq} \\text{log}_2\\left(1+ \\frac{P_{n, m}\\rho_{n,m}^2N_\\text{BS}N_\\text{U}\\left|\\beta_{n, m}\\right|^2}{\\zeta_\\text{intra}^{n, m}+ \\left(\\varsigma_\\text{inter}^{n, m}+ \\sigma^2\\right) \\left(\\mathbf{F}^{-1}\\right)_{n, n}K^{-1}_{N_\\text{BS}, m}K_{N_\\text{BS}, 1}}\\right)\\nonumber \\\\\n &\\overset{(a)}{\\geq}\\text{log}_2\\left(1+ \\frac{P_{n, m}\\rho_{n, m}^2N_\\text{BS}N_\\text{U}\\left|\\beta_{n, m}\\right|^2}{\\zeta_\\text{intra}^{n, m}+ \\zeta_\\text{inter}^{n, m} + \\sigma^2\\kappa_\\text{min}^{-1}(\\mathbf{F})K^{-1}_{N_\\text{BS}, m}K_{N_\\text{BS}, 1}}\\right), \n\\end{align}\nwhere in the first line, $\\zeta_\\text{intra}^{n, m} = \\displaystyle\\sum_{k=1}^{m-1}P_{n, k}\\rho_{n,m}^2N_\\text{BS}N_\\text{U}\\left|\\beta_{n, m}\\right|^2$ and in the second line, $\\zeta_\\text{inter}^{n, m} = \\left(1-\\rho_{n,m}^2\\right)\\times N_\\text{BS} N_\\text{U} \\left|\\beta_{n,m}\\right|^2\\kappa_\\text{max}(\\mathbf{S})\\kappa_\\text{min}^{-1}(\\mathbf{F}) K_{N_\\text{BS},1}$. To get (a), we note that $\\left(\\mathbf{F}^{-1}\\right)_{n,n} \\leq \\kappa_\\text{min}^{-1}(\\mathbf{F})$.\n\\end{proof}\n\n\\section{Proof of Theorem~\\ref{theo:3}}\\label{app:theorem3}\n\\begin{proof}\nWe start with~(\\ref{eq6}) to define the achievable rate of U$_{n, m}$ for the perfect correlation and the imperfect correlation, i.e., $\\overbar{R}_{n, m}$ and $\\tilde{R}_{n, m}$, respectively. 
This gives\n\\begin{align}\\label{eq20}\n\\Delta R_{n, m} &\\overset{\\Delta}{=} \\overbar R_{n, m} - \\tilde R_{n, m} \\nonumber \\\\\n&=\\text{log}_2\\left(1 + \\frac{P_{n, m}\\left|\\overbar{\\mathbf{h}}^\\dagger_{n, m}\\mathbf{f}_\\text{BB}^n\\right|^2}{\\displaystyle \\sum_{k = 1}^{m-1}P_{n, k}\\left|\\overbar{\\mathbf{h}}^\\dagger_{n, m}\\mathbf{f}_\\text{BB}^n\\right|^2+\\sigma^2} \\right) - \\nonumber \\\\\n& \\qquad \\qquad \\text{log}_2\\left(1 + \\frac{P_{n, m}\\left|\\tilde{\\mathbf{h}}_{n, m}^{\\dagger}{\\mathbf{f}}_\\text{BB}^n\\right|^2}{ \\displaystyle \\sum_{k=1}^{m-1}P_{n, k}\\left|\\tilde{\\mathbf{h}}^{\\dagger}_{n, m}{\\mathbf{f}}_\\text{BB}^n\\right|^2 + \\sum_{\\ell=1, \\ell\\neq n}^NP_\\ell\\left|\\tilde{\\mathbf{h}}^{\\dagger}_{n, m}{\\mathbf{f}}_\\text{BB}^\\ell\\right|^2 + \\sigma^2}\\right)\n\\nonumber \\\\\n& = \\text{log}_2\\left( \\frac{\\displaystyle \\sum_{k = 1}^{m} P_{n, k}\\left|\\overbar{\\mathbf{h}}^\\dagger_{n, m}\\mathbf{f}_\\text{BB}^n\\right|^2 + \\sigma^2}{\\displaystyle \\sum_{k = 1}^{m-1}P_{n, k}\\left|\\overbar{\\mathbf{h}}^\\dagger_{n, m}\\mathbf{f}_\\text{BB}^n\\right|^2+\\sigma^2} \\right) - \\text{log}_2\\left(\\frac{\\displaystyle \\sum_{k=1}^{m} P_{n, k}\\left|\\tilde{\\mathbf{h}}_{n, m}^{\\dagger}{\\mathbf{f}}_\\text{BB}^n\\right|^2 + \\sum_{\\ell=1, \\ell\\neq n}^NP_\\ell\\left|\\tilde{\\mathbf{h}}^{\\dagger}_{n, m}{\\mathbf{f}}_\\text{BB}^\\ell\\right|^2 + \\sigma^2}{ \\displaystyle \\sum_{k=1}^{m-1}P_{n, k}\\left|\\tilde{\\mathbf{h}}^{\\dagger}_{n, m}{\\mathbf{f}}_\\text{BB}^n\\right|^2 + \\sum_{\\ell=1, \\ell\\neq n}^NP_\\ell\\left|\\tilde{\\mathbf{h}}^{\\dagger}_{n, m}{\\mathbf{f}}_\\text{BB}^\\ell\\right|^2 + \\sigma^2}\\right) \\nonumber \\\\\n& \\overset{(a)}{\\leq} \\text{log}_2\\left( \\frac{\\displaystyle \\sum_{k = 1}^{m} P_{n, k}\\left|\\overbar{\\mathbf{h}}^\\dagger_{n, m}\\mathbf{f}_\\text{BB}^n\\right|^2 + \\sigma^2}{\\displaystyle \\sum_{k=1}^{m} P_{n, k}\\left|\\tilde{\\mathbf{h}}_{n, 
m}^{\\dagger}{\\mathbf{f}}_\\text{BB}^n\\right|^2 + \\sigma^2} \\right) - \\text{log}_2\\left(\\frac{\\displaystyle \\sum_{k = 1}^{m-1}P_{n, k}\\left|\\overbar{\\mathbf{h}}^\\dagger_{n, m}\\mathbf{f}_\\text{BB}^n\\right|^2+\\sigma^2}{ \\displaystyle \\sum_{k=1}^{m-1}P_{n, k}\\left|\\tilde{\\mathbf{h}}^{\\dagger}_{n, m}{\\mathbf{f}}_\\text{BB}^n\\right|^2 + \\sum_{\\ell=1,\\ell\\neq n}^NP_\\ell\\left|\\tilde{\\mathbf{h}}^{\\dagger}_{n, m}{\\mathbf{f}}_\\text{BB}^\\ell\\right|^2 + \\sigma^2}\\right) \\nonumber \\\\\n& \\overset{(b)}{\\leq} \\text{log}_2\\left( \\frac{\\norm[\\big]{\\overbar{\\mathbf{h}}_{n, m}}^2\\left|\\hat{\\overbar{\\mathbf{h}}}^\\dagger_{n, m}\\mathbf{f}_\\text{BB}^n\\right|^2}{\\norm[\\big]{\\tilde{\\mathbf{h}}_{n, m}}^2\\left|\\hat{\\tilde{\\mathbf{h}}}_{n, m}^{\\dagger}{\\mathbf{f}}_\\text{BB}^n\\right|^2} \\right) - \\text{log}_2\\left(\\frac{\\norm[\\big]{\\overbar{\\mathbf{h}}_{n, m}}^2\\displaystyle \\sum_{k = 1}^{m-1}P_{n, k}\\left|\\hat{\\overbar{\\mathbf{h}}}^\\dagger_{n, m}\\mathbf{f}_\\text{BB}^n\\right|^2+1}{\\Upsilon}\\right),\n\\end{align}\nwhere $\\Upsilon = \\norm[\\big]{\\tilde{\\mathbf{h}}_{n, m}}^2 \\displaystyle \\sum_{k=1}^{m-1}P_{n, k}\\left|\\hat{\\tilde{\\mathbf{h}}}^{\\dagger}_{n, m}{\\mathbf{f}}_\\text{BB}^n\\right|^2 + \\norm[\\big]{\\tilde{\\mathbf{h}}_{n, m}}^2\\sum_{\\ell=1, \\ell\\neq n}^NP_\\ell\\left|\\hat{\\tilde{\\mathbf{h}}}^{\\dagger}_{n, m}{\\mathbf{f}}_\\text{BB}^\\ell\\right|^2 + \\sigma^2$.\nTo get (a) we remove positive quantity $\\displaystyle \\sum_{\\ell=1, \\ell\\neq n}^NP_\\ell\\left|\\tilde{\\mathbf{h}}^{\\dagger}_{n, m}{\\mathbf{f}}_\\text{BB}^\\ell\\right|^2$ from the second term. Then, we exchange the denominator of the first term with the numerator of the second one. 
(b) follows from the fact that for $u > v$ and $c>0$, we have $\\text{log}\\left(\\frac{u}{v}\\right) > \\text{log}\\left(\\frac{u+c}{v+c}\\right)$, and from applying the normalized vector $\\tilde{\\mathbf{h}}_{n, m}$ defined in~(\\ref{eq40}) for both perfect and imperfect effective channel vectors.\n\nNoting that $\\hat{\\overbar{\\mathbf{h}}}_{n, 1} = \\hat{\\overbar{\\mathbf{h}}}_{n, m}$ and using~(\\ref{eq191}) yields\n\\begin{align}\\label{eq21}\n\\Delta R_{n, m}\n& \\leq \\text{log}_2\\left(\\frac{\\norm[\\big]{\\overbar{\\mathbf{h}}_{n, m}}^2}{\\rho_{n, m}^2\\norm[\\big]{\\tilde{\\mathbf{h}}_{n, m}}^2}\\right) - \\text{log}_2\\left( \\displaystyle \\sum_{k=1}^{m-1}P_{n, k} \\norm[\\big]{\\overbar{\\mathbf{h}}_{n, m}}^2\\left|\\hat{\\tilde{\\mathbf{h}}}_{n, 1}^\\dagger\\mathbf{f}_\\text{BB}^n\\right|^2+\\sigma^2\\right) \\nonumber \\\\\n& \\quad + \\text{log}_2\\Bigg(\\sum_{k = 1}^{m-1}P_{n, k}\\rho_{n, m}^2\\norm[\\big]{\\tilde{\\mathbf{h}}_{n, m}}^2\\left|\\hat{\\tilde{\\mathbf{h}}}_{n, 1}^\\dagger{\\mathbf{f}}_\\text{BB}^n\\right|^2 + (1 - \\rho_{n, m}^2)\\norm[\\big]{\\tilde{\\mathbf{h}}_{n, m}}^2\\sum_{\\ell=1, \\ell\\neq n}^NP_\\ell\\left|\\hat{\\mathbf{g}}^{-n\\dagger}_\\text{BS}{\\mathbf{f}}_\\text{BB}^\\ell\\right|^2 + \\sigma^2 \\Bigg) \\nonumber \\\\\n& \\overset{(a)}{=} - \\text{log}_2\\left( \\displaystyle \\sum_{k=1}^{m-1}P_{n, k}\\rho_{n, m}^2 \\left|\\hat{\\tilde{\\mathbf{h}}}_{n, 1}^\\dagger\\mathbf{f}_\\text{BB}^n\\right|^2\\right)\\nonumber \\\\\n&\\quad + \\text{log}_2\\Bigg(\\sum_{k = 1}^{m-1}P_{n, k}\\rho_{n, m}^2\\left|\\hat{\\tilde{\\mathbf{h}}}_{n, 1}^\\dagger{\\mathbf{f}}_\\text{BB}^n\\right|^2 + (1 - \\rho_{n, m}^2)\\sum_{\\ell=1, \\ell\\neq n}^NP_\\ell\\left|\\hat{\\mathbf{g}}^{-n\\dagger}_\\text{BS}{\\mathbf{f}}_\\text{BB}^\\ell\\right|^2 + \\frac{\\sigma^2}{\\norm[\\big]{\\tilde{\\mathbf{h}}_{n, m}}^{2}} \\Bigg) \\nonumber \\\\\n& \\overset{(b)}{\\leq} \\text{log}_2\\left(1 + \\frac{\\displaystyle \\left(1-\\rho_{n, 
m}^2\\right)\\kappa_\\text{max}\\left(\\mathbf{S}\\right)+\\sigma^2\\norm[\\big]{\\tilde{\\mathbf{h}}_{n, m}}^{-2}}{\\rho_{n, m}^2 K^{-1}_{N_\\text{BS},1}\\left(\\mathbf{F}^{-1}\\right)^{-1}_{n,n}\\displaystyle \\sum_{k=1}^{m-1}P_{n, k}}\\right)\\nonumber \\\\\n& \\overset{(c)}{\\leq} \\text{log}_2\\left(1 + \\frac{\\displaystyle \\left(1-\\rho_{n, m}^2\\right)\\kappa_\\text{max}\\left(\\mathbf{S}\\right)+\\sigma^{2}K^{-1}_{N_\\text{BS},m}N_\\text{BS}^{-1}N_\\text{U}^{-1}\\left|\\beta_{n,m}\\right|^{-2}}{\\rho_{n, m}^2 K^{-1}_{N_\\text{BS},1}\\kappa_\\text{min}(\\mathbf{F})\\displaystyle \\sum_{k=1}^{m-1}P_{n, k}}\\right),\n\\end{align}\nin which (a) follows by rewriting the first term as $\\text{log}_2\\left(\\rho_{n, m}^{-2}\\norm[\\big]{\\overbar{\\mathbf{h}}_{n, m}}^2\\right) - \\text{log}_2\\left(\\norm[\\big]{\\tilde{\\mathbf{h}}_{n, m}}^2\\right)$. Then, we sum up the expression $\\text{log}_2\\left(\\rho_{n, m}^{-2}\\norm[\\big]{\\overbar{\\mathbf{h}}_{n, m}}^2\\right)$ with the second term and the expression $-\\text{log}_2\\left(\\norm[\\big]{\\tilde{\\mathbf{h}}_{n, m}}^2\\right)$ with the third term. To get (b), we again sum up the first term with the second term. We then use Lemma~\\ref{lemma:3} to get $\\kappa_\\text{max}(\\mathbf{S})$ and~(\\ref{eq142}) and~(\\ref{eq12}) to get $K^{-1}_{N_\\text{BS},1}\\left(\\mathbf{F}^{-1}\\right)^{-1}_{n,n}$. To obtain (c), first we use $\\norm[\\big]{\\tilde{\\mathbf{h}}_{n,m}}^2=K_{N_\\text{BS},m}N_\\text{BS}N_\\text{U}|\\beta_{n,m}|^2$. Next we use the inequality $\\left(\\mathbf{F}^{-1}\\right)_{n,n} \\leq \\kappa_\\text{min}^{-1}(\\mathbf{F})$. 
\n\\end{proof}\n\n\n\n\n\n\n\n\\ifCLASSOPTIONcaptionsoff\n \\newpage\n\\fi\n\n\\bibliographystyle{IEEEtran}\n\n\\section{Introduction}\nSocial media are widely used at a global scale. Communication between users from different backgrounds, ideologies, preferences, political orientations, etc. on these platforms can result in tensions and the use of offensive and hateful speech. This negative content can be very harmful, sometimes with real-world consequences. For these reasons, it is desirable to control this type of uncivil language behavior by detecting and removing this destructive content. \\\\\n\nAlthough there have been a number of works on detecting offensive and hateful content in English (e.g.~\\cite{agrawal2018deep,badjatiya2017deep,nobata2016abusive}), works on many other languages are either lacking or rare. This is the case for Arabic, where there have been only very few works (e.g., ~\\cite{alakrot2018towards,albadi2018they,mubarak2017abusive,mubarak2019arabic}). For these motivations, we participated in the Offensive Language and hate-speech Detection shared task organized with the 4\\textsuperscript{th} Workshop on Open-Source Arabic Corpora and Processing Tools (OSACT4).\n\\\\\n\n\n\nOffensive content and hate speech are less frequent online than civil, acceptable communication. For example, only $\\sim 19\\%$ and $\\sim 5\\%$ of the released shared task data are offensive and hate speech, respectively. This is the case in spite of the fact that the data seems to have been collected based on trigger seeds that are more likely to accompany this type of harmful content. 
As such, it is not easy to acquire data for training machine learning systems. For this reason, we direct part of our efforts to automatically augmenting the training data released by the shared task organizers (Section~\\ref{subsec:data_aug}). Our experiments show the utility of our data enrichment method. In addition, we hypothesize that trained affective models learn representations that can be effective for detecting offensive and hateful content. To test this hypothesis, we fine-tune one sentiment analysis model and one emotion detection model on our training data. Our experiments support our hypothesis (Section~\\ref{sec:model}). All our models are based on the Bidirectional Encoder Representations from Transformers (BERT) model. Our best models are significantly better than a competitive baseline based on vanilla BERT. Our contributions can be summarized as follows:\n\n\\begin{itemize}\n \\item We present an effective method for automatically augmenting training data. Our method is simple and yields sizable additional data when we run it on a large in-house collection. \n \n \\item We demonstrate the utility of fine-tuning off-the-shelf affective models on the two downstream tasks of offensive language and hate speech detection. \n \n \\item We develop highly accurate deep learning models for the two tasks of offensive content and hate speech detection. \n\\end{itemize}\n\nThe rest of the paper is organized as follows: We introduce related work in Section~\\ref{sec:lit}, shared task data and our datasets in Section~\\ref{sec:data}, and our models in Section~\\ref{sec:model}, and conclude in Section~\\ref{sec:conc}. 
\n\n\n\\begin{table*}[]\n\\centering\n\\begin{tabular}{c|c|c|c|c|c|c|c|c}\n\\cline{2-9}\n &Dataset& \\textbf{\\#tweets} &\\textbf{\\# NOT\\_OFF} & \\textbf{\\# OFF} & \\textbf{OFF\\%} & \\textbf{\\# NOT\\_HS} & \\textbf{\\# HS} &\\textbf{ HS \\%} \\\\ \\hline\n\\multirow{3}{*}{}Shared-task data & TRAIN & 6994 & 5585 & 1409 & 20\\% & 6633 & 361 & 5\\% \\\\ \n & DEV & 1000 & 821 & 179 & 18\\% & 956 & 44 & 4\\% \\\\ \n & TEST & 2000 & - & - & - & - & - & - \\\\ \\hline\n\\multirow{3}{*}{} Augmented data& AUG-TRAIN-HS & 209780 & - & - & - & 199291 & 10489 & 5\\% \\\\ \n & AUG-TRAIN-OFF & 480777 & 215365 & 265413 & 55\\% & - & - & - \\\\ \\hline\n\\end{tabular}\n\\caption{Offensive (OFF) and Hate Speech (HS) label distribution in datasets }\n\\label{tab:distrib_table}\n\\end{table*}\n\n\n\\section{Related Work}\\label{sec:lit}\n\\textbf{Thematic Focus:} Research on undesirable content shows that social media users sometimes utilize \\textit{profane}, \\textit{obscene}, or \\textit{offensive} language~\\cite{jay2008pragmatics,wiegand2018overview}; \\textit{aggression}~\\cite{kumar2018benchmarking,modha2018filtering}; \\textit{toxic content}~\\cite{georgakopoulos2018convolutional,fortuna2018merging,zampieri2019semeval}, and \\textit{bullying}~\\cite{dadvar2013improving,agrawal2018deep,fortuna2018merging}.\\\\\n\n\\textbf{Overarching Applications:} \nSeveral works target detecting these types of negative content with the goal of building applications for (1) content filtering or (2) quantifying the intensity of polarization~\\cite{barbera2015follow,conover2011political}, (3) classifying trolls and propaganda accounts that often use offensive language~\\cite{darwish2017seminar}, (4) identifying hate speech that may correlate with hate crimes~\\cite{nobata2016abusive}, and (5) detecting signals of conflict, which are often preceded by verbal hostility~\\cite{chadefaux2014early}.\\\\\n\n\\textbf{Methods:} A manual approach to detecting negative language can involve 
building a list of offensive words and then filtering text based on these words. As~\\newcite{mubarak2019arabic} also point out, this approach is limited because (1) offensive words are ever evolving, with new words continuously emerging, complicating the maintenance of such lists, and (2) the offensiveness of certain words is highly context- and genre-dependent, and hence a lexicon-based approach will not be very precise. Machine learning approaches, as such, are much more desirable since they are better adapted to the domain and also usually render more accurate, context-sensitive predictions. This is especially the case if there are enough data to train these systems. \\\\\n\nMost work based on machine learning employs a supervised approach at either (1) character level~\\cite{malmasi2017detecting} or (2) word level~\\cite{kwok2013locate}, or (3) simply employs some representation incorporating word embeddings~\\cite{malmasi2017detecting}. These studies use different learning methods, including Naive Bayes~\\cite{kwok2013locate}, SVMs~\\cite{malmasi2017detecting}, and classical deep learning such as CNNs and RNNs~\\cite{nobata2016abusive,badjatiya2017deep,alakrot2018towards,agrawal2018deep}. Accuracy of the aforementioned systems ranges between 76\\% and 90\\%. It is also worth noting that some earlier works~\\cite{weber2013secular} use sentiment words as features to augment other contextual features. Our work has affinity to this last category since we also leverage affective models trained on sentiment or emotion tasks. Our approach, however, differs in that we build models free of hand-crafted features. In other words, we let the model learn its representation based on training data. This is a characteristic attribute of deep learning models in general.~\\footnote{Of course hand-crafted features can also be added to a representation fed into a deep learning model. 
However, we do not do this here.} In terms of the specific information encoded in classifiers, researchers use profile information in addition to text-based features. For example,~\\newcite{abozinadah2017detecting} apply SVMs on 31 features extracted from user profiles in addition to social graph centrality measures. \\\\\n\nMethodologically, our work differs in three ways: (1) we train offensive and hate speech models off affective models (i.e., we fine-tune already trained sentiment and emotion models on both the offensive and hate speech tasks). (2) We apply BERT language models on these two tasks. We also (3) automatically augment offensive and hate speech training data using a simple data enrichment method. \\\\\n\n\n\n\\textbf{Arabic Offensive Content:}\nVery few works have addressed the Arabic language, focusing mostly on detecting offensive language. For example,~\\cite{mubarak2017abusive} develop a list of obscene words and hashtags using patterns common in offensive and rude communications to label a dataset of 1,100 tweets.~\\newcite{mubarak2019arabic} applied a character n-gram FastText model to a large dataset \n(3.3M tweets) of offensive content. Our work is similar to~\\newcite{mubarak2019arabic} in that we also automatically augment training data based on an initial seed lexicon. \\\\\n\n\n\n\n\n\\section{Data}\\label{sec:data}\nIn our experiments, we use two types of data: (1) data distributed by the Offensive Language Detection shared task and (2) an automatically collected dataset that we develop (Section~\\ref{subsec:data_aug}). The shared task dataset comprises 10,000 tweets manually annotated for two sub-tasks: \\textit{offensiveness} (Sub\\_task\\_A)~\\footnote{\\url{https:\/\/competitions.codalab.org\/competitions\/22825}.} and \\textit{hate speech} (Sub\\_task\\_B)~\\footnote{\\url{https:\/\/competitions.codalab.org\/competitions\/22826}}. 
According to shared task organizers,~\\footnote{\\url{http:\/\/edinburghnlp.inf.ed.ac.uk\/workshops\/OSACT4\/}.}, offensive tweets in the data contain explicit or implicit insults or attacks against other people, or inappropriate language. Organizers also maintain that hate speech tweets contain insults or threats targeting a specific group of people based on nationality, ethnicity, gender, political or sport affiliation, religious belief, or other common characteristics of such a group. The dataset is split by shared task organizers into 70\\% TRAIN, 10\\% DEV, and 20\\% TEST. Both labeled TRAIN and DEV splits were shared with participating teams, while the tweets of the TEST set (without labels) were only released briefly before the competition deadline.\\\\\n\nIt is noteworthy that the dataset is imbalanced. For \\textit{offensiveness} (Sub\\_task\\_A), only 20\\% of the TRAIN split are labeled as offensive and the rest is not offensive. For \\textit{hate speech} (Sub\\_task\\_B), only 5\\% of the tweets are annotated as hateful. Due to this imbalance, the official evaluation metric is macro F\\textsubscript{1} score. Table~\\ref{tab:distrib_table} shows the size and label distribution in the shared task data splits.~\\footnote{Table~\\ref{tab:distrib_table} also shows size and class distribution for our automatically extracted dataset, to which we refer as augmented (AUG).}\\\\ \n\nThe following are example tweets from the shared task TRAIN split.\\\\\n\n\n\\textbf{Examples of \\textit{offensive} and \\textit{hateful} tweets:}\n\n\\begin{enumerate}\n \\setcounter{enumi}{\\value{examples}}\n \\item <\u064a\u0627 \u0631\u0628 \u064a\u0627 \u0648\u0627\u062d\u062f \u064a\u0627 \u0623\u062d\u062f \u0628\u062d\u0642 \u064a\u0648\u0645 \u0627\u0644\u0627\u062d\u062f \u0627\u0646>\\\\\n < \u062a\u0647\u0644\u0643 \u0628\u0646\u064a \u0633\u0639\u0648\u062f \u0627\u0644\u0645\u062c\u0631\u0645\u064a\u0646. 
\u0644\u0627\u062c\u0644 \u0627\u0637\u0641\u0627\u0644 \u0627\u0644\u064a\u0645\u0646 \u0634\u0627\u0631\u0643\u0648\u0627.>\\\\\n \\textit{Oh my Lord, O One and Only, destroy the family of Sau`d, for they are the criminals who put children of Yemen to suffer.}~\\footnote{Original tweets can be run-on sentences, lacking proper grammatical structures or punctuation. In the presented translations, for readability, we maintain the meaning as much as possible while rendering grammatical, well-structured sentences.}\\\\\n \\item <\u064a\u0627 \u0644\u0628\u0646\u0627\u0646\u064a \u064a\u0627 \u0641\u0636\u0644\u0627\u062a \u0627\u0644\u0627\u0633\u062a\u0639\u0645\u0627\u0631 \u0627\u0644\u0641\u0631\u0646\u0633\u064a>\\\\\n <\u0627\u0644\u0644\u0628\u0646\u0627\u0646\u064a\u064a \u0628\u0627\u0644\u062e\u0644\u064a\u062c \u064a\u0634\u063a\u0644\u0648\u0646 \u0646\u0633\u0648\u0627\u0646\u0647\u0645 \u0639\u0627\u0647\u0631\u0627\u062a>\n \\\\\n \\textit{Hey, you Lebanese guy, you're the wastes of the French colonizers. The Lebanese in the Gulf put their women to work in prostitution.}\n\\setcounter{examples}{\\value{enumi}}\\\\\n\\end{enumerate}\n\n\\textbf{Examples of \\textit{offensive} but \\textit{not hate-speech} tweets:}\\\\\n\n\\begin{enumerate} \n \\setcounter{enumi}{\\value{examples}}\n \n \\item \\<\u064a\u0627 \u0644\u0637\u064a\u0641.. \u064a\u0627 \u0633\u0627\u062a\u0631 ..\u0623\u062d\u0645\u062f\u0648\u0627 \u0631\u0628\u0643\u0645 \u0625\u0646\u0647\u0627 >\\\\\n < \u0645\u0633\u062a\u0642\u0639\u062f\u0629 \u0643\u064a\u0641 \u0644\u0648 \u0625\u0646\u0647\u0627 \u0648\u0627\u0642\u0641\u0647>\\\\\n \\textit{Oh my lord... Thank God she has a disability. 
What would have happened if she were not disabled?}\\\\\n \n \n \\item \\<\u064a\u0627 \u062a\u0631\u0649 \u0645\u062e\u0628\u064a\u0644\u0646\u0627 \u0627\u064a\u0647 \u064a\u0627 \u0686\u0648\u0646 \u0633\u0646\u0648 \u0627\u0646\u062a >\n \\\\\n <\u0648 \u0627\u0644\u0643\u0644\u0628\u0648\u0628\u0629 \u0627\u0644\u0644\u064a \u062c\u0646\u0628\u0643 \u062f\u064a>\n \\\\\n \\textit{I wonder what you, and this little bitch by your side, are hiding for us, John Snow?}\n \n\\setcounter{examples}{\\value{enumi}}\n\\end{enumerate}\n\n\\textbf{Examples of \\textit{not offensive} and \\textit{not hate-speech} tweets:}\\\\\n\n\\begin{enumerate} \n\\setcounter{enumi}{\\value{examples}}\n \\item \n <\u064a\u0627 \u0628\u0643\u0648\u0646 \u0628\u062d\u064a\u0627\u062a\u0643 \u0627\u0644\u0623\u0647\u0645 \u064a\u0627 \u0625\u0645\u0627 \u0645\u0627 \u0628\u062f\u064a \u0623\u0643\u0648\u0646> \\\\\n \\textit{Either I become the most important in your life, or I become nothing at all.}\\\\\n \\item <\u0627\u064a\u0634 \u0627\u0644\u0627\u0643\u0644 \u0627\u0644\u062d\u0644\u0648 \u0630\u0627 \u064a\u0627\u0633\u064f\u0645\u064a\u0647 \u062a\u0633\u0644\u0645 \u064a\u062f\u0643 \u064a\u0627\u0639\u0633\u0644>\\\\ <\u064a\u0627\u0642\u0634\u0637\u0647 \u064a\u0627\u062d\u0644\u0648\u0647 \u064a\u0627\u0633\u064f\u0643\u0631\u0647 \u064a\u0627 \u0637\u0628\u0627\u062e\u0647 \u064a\u0627 \u0641\u0646\u0627\u0646\u0647 \u064a\u0627 \u0643\u0644 \u0634\u064a>\\\\\n \\textit{Wow! How wonderful this food is, Sumaia! You're such a honey, beauty, sweetie, and good cook! You're an artist! You're everything!}\n \\setcounter{examples}{\\value{enumi}}\n \n\\end{enumerate}\n\n\\begin{table*}[]\n\\centering\n\\begin{tabular}{cl|cl}\n\\hline\n\\textbf{Arabic Offensive} & \\textbf{English} & \\textbf{Arabic Hateful} & \\textbf{English} \\\\ \\hline\n<\u064a\u0627 \u062e\u064a\u064a\u062b> &\nYou, fat ass! 
\n& <\u064a\u0627 \u0645\u0646\u062c\u0627\u0648\u064a> \n& You're Manjawi \\\\\n<\u064a\u0627 \u0631\u0639\u0627\u0639> &\nYou're mobby & \n<\u064a\u0627 \u062f\u0646\u062f\u0631\u0648\u0627\u064a> \n& You're Dandarawi \\\\\n<\u064a\u0627 \u0645\u062a\u0634\u0631\u062f> &\nYou're a tramp \n& <\u064a\u0627 \u0633\u0639\u0648\u062f\u064a\u064a\u0646> \n& You Saudis \\\\ \n<\u064a\u0627 \u0645\u062c\u0627\u0646\u064a\u0646> \n& You're crazy \n& <\u064a\u0627 \u062f\u062d\u0628\u0627\u0634\u064a> \n& You're Dahbashi \\\\ \n<\u064a\u0627 \u062c\u0639\u0627\u0646> \n& You, hungry man! \n& <\u064a\u0627 \u0627\u062f\u0639\u0627\u0626\u064a> \n& You, false claimer \\\\ \n<\u064a\u0627 \u0641\u0627\u062c\u0631\u0647>\n& You, morally loose \n& <\u064a\u0627 \u062d\u0648\u062b\u064a> \n& You, Houthi \\\\ \n<\u064a\u0627 \u0634\u0645\u0627\u0644> \n& Oh, whore \n& <\u064a\u0627 \u0634\u064a\u0639\u064a> \n& You, Shiite \\\\ \n<\u064a\u0627 \u0632\u0628\u0627\u0644\u0647>\n& You, junky \n& <\u064a\u0627 \u0639\u0645\u064a\u0644> \n& You, spy \\\\ \n<\u064a\u0627 \u0628\u0647\u0627\u0627\u064a\u0645>\n& You, animals \n& <\u064a\u0627 \u0627\u062e\u0648\u0627\u0646\u062c\u064a> \n& You, Ikhwangis \\\\\n<\u064a\u0627 \u0628\u063a\u064a\u0636> &\nYou, hateful \n& <\u064a\u0627 \u0627\u062e\u0648\u0627\u0646> \n& You, Ikhwan \\\\ \n<\u064a\u0627 \u0648\u0633\u062e\u0647> \n& You, dirty woman \n& <\u064a\u0627 \u0627\u0628\u0646 \u0627\u0644\u062c\u0631\u0627\u0628\u064a\u0639> \n& You, son of tramps \\\\ \n<\u064a\u0627 \u0637\u0627\u063a\u064a\u0647> \n& You, tyrant \n& <\u064a\u0627 \u0627\u0628\u0646 \u0627\u0644\u062d\u0631\u0627\u0645>\n& You, bastard \\\\ \n<\u064a\u0627 \u0641\u0627\u062c\u0631> \n& You, salacious \n& <\u064a\u0627 \u0627\u0628\u0646 \u0627\u0644\u0632\u0646\u0627> \n& You, bastard \\\\ \n<\u064a\u0627 \u0645\u063a\u0641\u0644> \n& You, idiot \n& <\u064a\u0627 \u0627\u0628\u0646 \u0627\u0644\u064a\u0647\u0648\u062f\u064a\u0647>\n& You, son of Jewish woman 
\\\\ \n<\u064a\u0627 \u0631\u062e\u0645\u0647> &\nYou, silly woman \n& <\u064a\u0627 \u0627\u0628\u0646 \u0627\u0644\u0632\u0627\u0646\u064a\u0647>\n& You, son of adulterous woman\\\\ \n<\u064a\u0627 \u0646\u062d\u0633> &\nYou, sinister \n& <\u064a\u0627 \u0627\u0628\u0646 \u0627\u0644\u0645\u0648\u0647\u0648\u0645\u0647>\n& You, son of deceived woman \\\\ \n<\u064a\u0627 \u063a\u0628\u0627\u0621> \n& You, stupid head &\n<\u064a\u0627 \u0627\u0628\u0646 \u0627\u0644\u0639\u0631\u0635> \n& You, son of pimp \\\\ \n<\u064a\u0627 \u0643\u0626\u064a\u0628> \n& You, gloomy head \n& <\u064a\u0627 \u0627\u0628\u0646 \u0627\u0644\u0645\u062a\u0646\u0627\u0643\u0647>\n& You, son of an adulteress \\\\\n<\u064a\u0627 \u0645\u0631\u0627> \n& You, unworthy woman &\n<\u064a\u0627 \u0627\u0645\u0627\u0631\u0627\u062a> \n& You, Emirate \\\\ \n<\u064a\u0627 \u062d\u0645\u0642\u064a> \n& You, fools \n& <\u064a\u0627 \u0627\u062a\u062d\u0627\u062f\u064a> \n& You, Itihadi \\\\ \\hline\n\\end{tabular}\n\\caption{Examples of offensive and hateful seeds in our lexica}\n\\label{tab:OFF_HS_seed}\n\\end{table*}\n\n\n\\begin{table}[]\n\\centering\n\\begin{tabular}{c|l}\n\\hline\n\\textbf{Arabic Non-OFF\/Non-HS} & \\textbf{English} \\\\\\hline\n<\u064a\u0627 \u0627\u0628\u0637\u0627\u0644> \n& You, heroes \\\\\n<\u064a\u0627 \u0641\u0646\u0627\u0646> \n& You, artist \\\\\n<\u064a\u0627 \u0639\u0628\u0633\u0645\u064a\u0639> \n& You, Absemee' \\\\\n<\u064a\u0627 \u0639\u0627\u0627\u0644\u0645> \n& Oh, people \\\\\n<\u064a\u0627 \u0645\u0646\u0633\u064a> \n& Oh, forgotten man\\\\\n<\u064a\u0627 \u0645\u0627\u0639\u0646\u062f\u0643> \n& You have a lot \\\\\n<\u064a\u0627 \u0645\u0627\u0645\u0627> \n& Oh, mum \\\\ \n<\u064a\u0627 \u0642\u0645\u0631\u0631> \n& You, beautiful lady \\\\\n<\u064a\u0627 \u062c\u0627\u0645\u062f> \n& You, wonderful man \\\\\n<\u064a\u0627 \u0637\u0641\u0644> \n& Oh, child \\\\\n<\u064a\u0627 \u0642\u0637\u0647> \n& Oh, delicate lady \\\\\n<\u064a\u0627 
\u062d\u064a\u0644\u062a\u0647\u0627> \n& You, lulled \\\\\n<\u064a\u0627 \u0627\u064a\u062a\u0647\u0627> \n& Oh you\\dots \\\\ \n<\u064a\u0627 \u0631\u0627\u0639\u064a> \n& You, caregiver \\\\\n<\u064a\u0627 \u062d\u0628\u064a\u0628\u064a> \n& Oh, darling \\\\ \n<\u064a\u0627 \u0631\u0628> \n& Oh, my Lord \\\\\n<\u064a\u0627 \u0648\u0627\u062d\u062f> \n& Oh, the One \\\\\n<\u064a\u0627 \u0646\u0627\u0633> \n& You, people \\\\\n<\u064a\u0627 \u0627\u062e\u0631> \n& Oh, the Last \\\\\n<\u064a\u0627 \u0628\u0627\u0628\u0627> \n& Oh, daddy \\\\\n\\hline\n\\end{tabular}\n\\caption{Examples of non-offensive\/non-hateful seeds filtered out from our lexica.}\n\\label{tab:Not_OFF_HS_seed}\n\\end{table}\n\\subsection{Data Augmentation}\\label{subsec:data_aug}\nAs explained earlier, the positive class in the offensive sub-task (i.e., the category `offensive') is only 20\\% and in the hateful sub-task (i.e., the class `hateful') it is only 5\\%. Since our goal is to develop exclusively deep learning models, we needed to extend our training data such that we increase the positive samples. For this reason, we develop a simple method to automatically augment our training data. Our method first depends on extracting tweets that contain any word from a seed lexicon (explained below) and that satisfy a predicted sentiment label condition. We hypothesize that both offensive and hateful content would carry negative sentiment, and so it would be intuitive to restrict any automatically extracted tweets to those that carry negative sentiment labels. To further test this hypothesis, we analyze the distribution of the sentiment classes in the TRAIN split using an off-the-shelf tool, \\textit{AraNet}~\\cite{mageed_osact4}. As shown in Figure \\ref{img:aranet}, AraNet assigns sensible sentiment labels to the data. 
For the `offensive' class, the tool assigns 65\\% negative sentiment tags, and for the non-offensive class it assigns 60\\% positive sentiment labels.~\\footnote{AraNet~\\cite{mageed_osact4} assigns only positive and negative sentiment labels. In other words, it does not assign neutral labels.} For the hate speech data, we find that AraNet assigns 72\\% negative labels to the `hateful' class and 55\\% positive sentiment labels to the `non-hateful' class. Based on this analysis, we decide to impose a sentiment-label condition on the automatically extended data as explained earlier. In other words, we only choose `offensive' and `hateful' class data from tweets predicted as negative sentiment. Similarly, we only choose `non-offensive' and `non-hateful' tweets assigned positive sentiment labels by AraNet. We now explain how we extend the dataset by extracting tweets with our offensive and hateful seed lexica. \\\\\n\nTo generate a seed lexicon, we extract all words that follow the vocative particle \\textit{Ya} (\\textit{Oh, you}) in the shared task TRAIN split positive class in the two sub-tasks (i.e., `offensive' and `hateful'). The intuition here is that the word \\textit{Ya} acts as a trigger word that is likely to be followed by negative lexica. This gives us a set of 2,158 words. We find that this set can have words that are neither offensive nor hateful outside of context, and so we manually select a smaller set of 352 words that we believe are much more likely to be effective offensive seeds and only 38 words that we judge as more suitable carriers of hateful content. Table~\\ref{tab:OFF_HS_seed} shows samples of the offensive and hateful seeds. 
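As a concrete illustration, the candidate-seed collection step described above can be sketched as follows (a minimal sketch; the function name and the whitespace tokenization are our own simplifying assumptions, not part of the shared task tooling):

```python
from collections import Counter

YA = "\u064a\u0627"  # the Arabic vocative particle "Ya"

def collect_candidate_seeds(tweets):
    """Count every word that directly follows the trigger word 'Ya'
    in a list of whitespace-tokenizable tweets."""
    counts = Counter()
    for tweet in tweets:
        tokens = tweet.split()
        for i in range(len(tokens) - 1):
            if tokens[i] == YA:
                counts[tokens[i + 1]] += 1
    return counts
```

The resulting candidate list would then be manually filtered, as described above, into the final offensive and hateful seed sets.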
Table~\\ref{tab:Not_OFF_HS_seed} shows examples of seeds in our initial larger set that we filtered out since these are less likely to carry negative meaning (whether offensive or hateful).\\\\\n\nTo extend the offensive and hateful tweets, we use 500K randomly sampled, unlabeled tweets from~\\cite{abdul2019dianet} that each have at least one occurrence of the trigger word \\textit{Ya} and at least one occurrence of a word from either of our two seed lexica (i.e., the offensive and hateful seeds).~\\footnote{The 500K collection is extracted via searching a larger sample of $\\sim 21M$ tweets that all have the trigger word \\textit{Ya}. This corpus is also taken from~\\cite{abdul2019dianet}. Note that a tweet can have both an offensive and a hateful seed.} We then apply AraNet \\cite{mageed_osact4} on this 500K collection and keep only tweets assigned negative sentiment labels. Tweets that carry offensive seeds are labeled as `offensive' and those carrying hateful seeds are tagged as `hateful'.\nThis gives us 265,413 offensive tweets and 10,489 hateful tweets. For reference, the majority (\\%=67) of the collection extracted with our seed lexicon is assigned negative sentiment labels by AraNet. This reflects the effectiveness of our lexicon, as it matches our observations about the distribution of sentiment labels in the shared task TRAIN split.\\\\\n\nTo add negative class data (i.e., `not-offensive' and `not-hateful') to this augmented collection, we randomly sample another 500K tweets that carry \\textit{Ya} from~\\cite{abdul2019dianet} that do not carry any word from the two offensive and hateful seed lexica. We apply AraNet on these tweets and keep only tweets assigned a positive sentiment label (\\%=70). 
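The weak-labeling rule just described can be sketched as follows (a simplified sketch: `predict_sentiment` stands in for AraNet, and the seed sets and function names are illustrative):

```python
def label_tweet(tweet, off_seeds, hs_seeds, predict_sentiment):
    """Weakly label an unlabeled tweet: a tweet carrying a seed word and
    predicted negative sentiment is kept for the offensive and/or hateful
    class; a seed-free tweet with positive sentiment is kept as a
    non-offensive/non-hateful example; everything else is discarded."""
    tokens = set(tweet.split())
    has_off = bool(tokens & off_seeds)
    has_hs = bool(tokens & hs_seeds)
    senti = predict_sentiment(tweet)  # 'pos' or 'neg'
    if (has_off or has_hs) and senti == "neg":
        return ("OFF" if has_off else "NOT_OFF",
                "HS" if has_hs else "NOT_HS")
    if not has_off and not has_hs and senti == "pos":
        return ("NOT_OFF", "NOT_HS")
    return None  # discarded
```

In our actual pipeline, the two 500K collections are pre-filtered by the presence or absence of seeds before the sentiment condition is applied, but the kept/discarded logic is the same.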
We use 215,365 tweets as `non-offensive' but only 199,291 as `non-hateful'.~\\footnote{We decided to keep only 199,291 `non-hateful' tweets since our augmented `hateful' class comprises only 10,489 tweets.} Table~\\ref{tab:distrib_table} shows the size and distribution of class labels in our extended dataset.\\\\\n\nFigure~\\ref{img:AUG_OFF_ONLY} and Figure~\\ref{img:AUG_HS_ONLY} are word clouds of unigrams in our extended training data (offensive and hateful speech, respectively) after we remove our seed lexica from the data. The clouds show that the data carries lexical cues likely to occur in each of the two classes (offensive and hateful). Examples of frequent words in the offensive class include \\textit{dog, animal, son of, mother, dirty woman, monster, mad}, and \\textit{on you}. Examples in the hateful data include \\textit{shut up, dogs, son of, animal, dog, haha}, and \\textit{for this reason}. We note that the hateful words do not include direct names of groups since these were primarily our seeds, which we removed before preparing the word clouds. 
Overall, the clouds provide sensible cues of our phenomena of interest across the two tasks.\n\n\\begin{figure}[h]\n\\includegraphics[width=8cm]{images\/AUG_OFF_ONLY_notin_seed.png}\n\\caption{A word cloud of unigrams in our extended training offensive data (AUG-TRAIN-OFF).}\n\\label{img:AUG_OFF_ONLY}\n\\end{figure}\n\n\\begin{figure}[h]\n\\includegraphics[width=8cm]{images\/AUG_HS_ONLY_notin_seed.png}\n\\caption{A word cloud of unigrams in our extended training hate speech data (AUG-TRAIN-HS).}\n\\label{img:AUG_HS_ONLY}\n\\end{figure}\n\n\n\n\\begin{figure}[h]\n\\includegraphics[width=8cm]{images\/OSACT4.png}\n\\caption{Distribution of negative and positive tweets after applying AraNet to the shared-task TRAIN data.}\n\\label{img:aranet}\n\\end{figure}\n\n\\section{Models}\\label{sec:model}\n\n\n\\begin{table*}[]\n\\centering\n\\begin{tabular}{lcc|cc|cc|cc}\n\\cline{2-9}\n\\multirow{2}{*}{} & \\multicolumn{4}{c|}{\\textbf{Dev}} & \\multicolumn{4}{c}{\\textbf{Test}} \\\\ \\cline{2-9} \n & \\multicolumn{2}{c}{\\textbf{OFF}} & \\multicolumn{2}{c|}{\\textbf{HS}} & \\textbf{OFF} & \\textbf{} & \\textbf{HS} & \\textbf{} \\\\ \\hline\n\\textbf{Model} & \\textbf{Acc} & \\textbf{F1} & \\textbf{Acc} & \\textbf{F1} & \\textbf{Acc} & \\textbf{F1} & \\textbf{Acc} & \\textbf{F1} \\\\ \\hline\n\\textbf{BERT} & 87.10 & 78.38 & \\textbf{95.70} & \\textbf{70.96} & 87.30 & 77.70 & \\textbf{95.20} & \\textbf{70.51} \\\\ \\hline\n\\textbf{BERT-SENTI} & 87.40 & 78.84 & 95.50 & 68.01 & 87.45 & 80.51 & 93.15 & 61.57 \\\\ \n\\textbf{BERT-EMO} & 88.30 & 80.39 & 95.40 & 68.54 & -- & -- & -- & -- \\\\ \\hline\n\\textbf{BERT-EMO-AUG} & \\textbf{89.60} & \\textbf{82.31} & 93.90 & 62.52 & \\textbf{89.35} & \\textbf{82.85} & -- & -- \\\\ \\hline\n\\end{tabular}\n\\caption{Offensive (OFF) and Hate Speech (HS) results on DEV and TEST datasets }\n\\label{tab:results_table}\n\\end{table*}\n\n\n\\subsection{Data Pre-Processing}\n\nWe perform light Twitter-specific data cleaning (e.g., replacing numbers, 
usernames, hashtags, and hyperlinks by the unique tokens NUM, USER, HASH, and URL, respectively). We also perform Arabic-specific normalization (e.g., removing diacritics and mapping various forms of Alef and Yeh each to a canonical form). For text tokenization, we use byte-pair encoding (BPE) as implemented in the Multilingual Cased BERT model.\\\n\n\n\subsection{BERT}\n\nOur experiments are based on the BERT-Base Multilingual Cased model released by~\cite{devlin2018bert}~\footnote{\url{https:\/\/github.com\/google-research\/bert\/blob\/master\/multilingual.md}.}. BERT stands for \textbf{B}idirectional \textbf{E}ncoder \textbf{R}epresentations from \textbf{T}ransformers. It is an approach to language modeling that involves two self-supervised learning tasks, (1) masked language modeling (MLM) and (2) next sentence prediction (NSP). BERT is equipped with an Encoder architecture, which naturally conditions on bi-directional context. It randomly masks a given percentage of input tokens and attempts to predict these masked tokens. \cite{devlin2018bert} mask 15\% of the tokens (the authors use \textit{word pieces}) and use the hidden states of these masked tokens from the last layer for prediction. To understand the relationship between two sentences, BERT is also pre-trained with a binarized NSP task, which is likewise a type of self-supervised learning. For the sentence pairs (e.g., \textit{A}-\textit{B}) in the pre-training examples, 50\% of the time \textit{B} is the actual next sentence that follows \textit{A} in the corpus (positive class), and 50\% of the time \textit{B} is a random sentence from the corpus (negative class). Google's pre-trained BERT-Base Multilingual Cased model is trained on 104 languages (including Arabic) with 12 layers, 768 hidden units each, and 12 attention heads. The model has a shared vocabulary of 119,547 word pieces and was pre-trained on the entire Wikipedia of each language. 
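As a rough sketch of the cleaning and normalization described in the pre-processing subsection (the exact regular expressions and the full Alef\/Yeh mapping table are not spelled out above, so the patterns below are illustrative assumptions):

```python
import re

# Illustrative Twitter-specific cleaning; the actual patterns used in the
# paper's pipeline are assumptions here.  URLs are replaced first so that
# the digit rule does not split them.
def clean_tweet(text):
    text = re.sub(r"https?://\S+", "URL", text)   # hyperlinks -> URL
    text = re.sub(r"@\w+", "USER", text)          # usernames  -> USER
    text = re.sub(r"#\w+", "HASH", text)          # hashtags   -> HASH
    text = re.sub(r"\d+", "NUM", text)            # numbers    -> NUM
    return text

# Illustrative Arabic normalization: strip diacritics and map some common
# Alef and Yeh variants to canonical forms.
DIACRITICS = re.compile("[\u064b-\u0652]")
def normalize_arabic(text):
    text = DIACRITICS.sub("", text)
    text = re.sub("[\u0622\u0623\u0625]", "\u0627", text)  # Alef variants -> bare Alef
    return text.replace("\u0649", "\u064a")                # Alef Maqsura -> Yeh
```

The replacement order matters only in that URLs are rewritten before the digit rule; the set of diacritics and character variants shown here is a simplified assumption rather than the full normalization table.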
\\\n\n\n\nIn our experiments, we train our classification models on the BERT-Base Multilingual Cased model. \nFor all of our fine-tuned BERT models, we use a maximum sequence size of 50 tokens and a batch size of 32. We add a `[CLS]' token at the beginning of each input sequence and then feed the final hidden state of `[CLS]' to a Softmax linear layer to get prediction probabilities across classes. We set the learning rate to $2e-6$ and train for 20 epochs. We save a checkpoint at the end of each epoch, report the F1-score and accuracy of the best model, and use the best checkpoint to predict the labels of the TEST set. We fine-tune the BERT model under four settings. We describe each of these next.\\\n\n\textbf{Vanilla BERT:} We fine-tune the BERT-Base Multilingual Cased model on the TRAIN sets of the offensive and hate speech tasks, respectively. We refer to these two models as \textit{BERT}. The offensive model obtains its best result with 8 epochs. As Table~\ref{tab:results_table} shows, for \textit{offensive language} classification, this model obtains 87.10\% accuracy and 78.38 $F_1$ score on the DEV set. We submit the TEST predictions of this model to the shared task and obtain 87.30\% accuracy and 77.70 $F_1$ on the TEST set. The \textit{hate speech} model obtains its best result (accuracy = 95.70\%, $F_1$ = 70.96) with 6 epochs. \\\n\n\textbf{BERT-SENTI}\nWe use a BERT model fine-tuned on the binary Arabic sentiment dataset released by~\cite{mageed_osact4}. We use this off-the-shelf (already trained) model to further fine-tune on the offensive and hate speech tasks, respectively. We replace the Softmax linear layer for sentiment classification with a randomly initialized Softmax linear layer for each task. We refer to these two models as BERT-SENTI. We train the BERT-SENTI models on the TRAIN sets for the offensive and hate speech tasks, respectively. 
In terms of $F_1$ score, BERT-SENTI is 0.3 better than vanilla BERT on the offensive task, but 2.95 lower (than vanilla BERT) on the hate speech task. We submit the TEST predictions for both tasks. The offensive model obtains 87.45\% accuracy and 80.51 $F_1$ on TEST. The hate speech model acquires 93.15\% accuracy and 61.57 $F_1$ on TEST. \\ \n\n\textbf{BERT-EMO}\nSimilar to BERT-SENTI, we use a BERT model trained on 8-class Arabic emotion identification from~\cite{mageed_osact4} to fine-tune on the offensive and hate speech tasks, respectively. We refer to this setting as BERT-EMO. We train the models on the TRAIN sets for both the offensive and hate speech tasks for 20 epochs. The \textit{offensive} model obtains its best result (accuracy = 88.30\%, $F_1$ = 80.39) with 11 epochs. The \textit{hate speech} model acquires its best result (accuracy = 95.40\%, $F_1$ = 68.54) also with 11 epochs. We do not submit BERT-EMO predictions on the hate speech task TEST set.\\\n\n\n\n\textbf{BERT-EMO-AUG}\nSimilar to BERT-EMO, we also fine-tune the emotion BERT model (BERT-EMO), this time on the augmented offensive dataset (AUG-TRAIN-OFF) and the augmented hate speech dataset (AUG-TRAIN-HS). On the DEV set, the \textit{offensive model} acquires its best result (accuracy = 89.60\%, $F_1$ = 82.31) with 13 epochs. The best result for the \textit{hate speech model} (accuracy = 93.90\%, $F_1$ = 62.52) is obtained with 9 epochs. Our best offensive prediction on TEST comes from BERT-EMO-AUG, which achieves an accuracy of 89.35\% and $F_1$ of 82.85. We do not submit BERT-EMO-AUG predictions on the hate speech task TEST set.\n\n\section{Conclusion}\label{sec:conc}\nWe described our submission to the offensive language detection in Arabic shared task. We offered a simple method to extend training data and demonstrated the utility of such augmented data empirically. We also deployed affective language models on the two sub-tasks of offensive language detection and hate speech identification. 
We showed that fine-tuning such affective models is useful, especially in the case of offensive language detection. In the future, we will investigate other methods for improving our automatic offensive and hateful language acquisition methods. We will also explore other machine learning methods on the tasks. For example, we plan to investigate the utility of semi-supervised methods as a vehicle for improving our models.\n\n\section{Bibliographic References}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Introduction}\n\n\nWith the upcoming large-scale surveys, such as Euclid and LSST, a huge number of mock catalogs have to be generated to estimate the covariance matrix. Straight $N$-body simulations are too numerically expensive to be done in mass production, so many semi-analytic approaches, such as PThalos \cite{ScoccimarroSheth2002,Maneraetal2013}, PINOCCHIO \cite{Monacaetal2002,Monacoetal2013,Heisenbergetal2011}, and COLA \cite{Tsssevetal2013}, have been developed. These methods rely on Lagrangian perturbation theory (LPT) to displace the particles at large scales. \n\n\nIn LPT, the fundamental object is the Lagrangian displacement field $\mb{ \Psi} $, which displaces the particles from their initial position $\mathbf{q} $ to their final Eulerian position $\mathbf{x} $,\n\begin{equation}\n\label{eq:PsiDefinition}\n\mb{ \Psi} ( \mathbf{q}, t ) \equiv \mathbf{x}( \mathbf{q}, t ) - \mathbf{q}. \n\end{equation} \n\n$\mb{ \Psi} $ can be computed using LPT. The first order LPT is the well-known Zel'dovich Approximation (ZA) \cite{Zeldovich1970}, and it has been extended to higher orders \cite{Buchert1994,Catelan95,CatelanTheuns96,Bouchetetal1995,RampfBuchert12,Rampf2012}. The initial conditions for $N $-body simulations are often generated using ZA or second order LPT (2LPT) \cite{Scoccimarro98,CroccePeublasetal2006}. 
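As a purely illustrative sketch (not code from the references above), the ZA displacement, $\mb{ \Psi} ^{(1)} ( \mathbf{k} ) = i D \, ( \mathbf{k} \/ k^2 ) \, \delta_0 ( \mathbf{k} )$, can be realized with FFTs on a periodic grid; the grid size, box length, and the choice $D=1$ below are arbitrary assumptions:

```python
import numpy as np

# Illustrative sketch of the ZA displacement Psi(k) = i D (k / k^2) delta0(k)
# on a periodic grid, with the growth factor D set to 1.  Grid size and box
# length are arbitrary choices for demonstration only.
def zeldovich_displacement(delta, box=2 * np.pi):
    n = delta.shape[0]
    k1 = 2 * np.pi * np.fft.fftfreq(n, d=box / n)   # wavenumbers along one axis
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0            # avoid 0/0; the k=0 mode is removed below
    dk = np.fft.fftn(delta)
    dk[0, 0, 0] = 0.0            # the mean density does not displace anything
    return [np.fft.ifftn(1j * ki / k2 * dk).real for ki in (kx, ky, kz)]
```

At linear order $\nabla_{ \mathbf{q} } \cdot \mb{ \Psi} ^{(1)} = - D \delta_0 $, which provides a quick consistency check of the kernel.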
The validity of LPT in computing the power spectrum has been improved by resummation \\cite{Matsubara08a}, and it can be easily extended to include redshift space distortion and local Lagrangian bias \\cite{Matsubara08b,Matsubara11}. \n\n\n LPT is very successful at high redshifts but it yields poor results at late times due to severe shell crossing. Shell crossing occurs when particles from different Lagrangian patches meet to form caustics and multiple streams pass through the same Eulerian position. The standard perturbation theory and LPT are based on the single stream approximation \\cite{PTreview}. Before shell crossing, the system can be described by a velocity field. However, shell crossing generates the velocity dispersion tensor in small scales, which also sources the vorticity \\cite{PueblasScoccimarro2009}. In LPT, the velocity field is also specified by the position $\\mathbf{x} $, so it is not valid after shell crossing. Indeed the Eulerian density obtained from LPT becomes singular after shell crossing \\cite{ShandarinZeldovich1989}. After shell crossing, the particles keep on escaping from each other, resulting in low power in small scales. For example, the ZA dark matter density power spectrum is even lower than the linear one at $z=0$. There are many attempts to extend the validity of LPT after shell crossing \\cite{MelottPellmanetal1994, KitauraSreffen2012, Leclerceetal2013}. One of the concrete models that takes shell crossing into account is the adhesion model \\cite{Gurbatovetal1989}, in which a viscous term is added to the ZA model to stick the particles together after shell crossing, and the equation is transformed to Burgers' equation \\cite{Gurbatovetal1989,Vergassolaetal1994,Valageas2011,ShandarinZeldovich1989}. Note that the solution to the Burgers' equation is a velocity field. In practice, to get the position of the particles, one still needs to integrate the velocity field numerically \\cite{WeinbergGunn}. 
Even in the limit of zero viscosity, the geometrical construction is not so straightforward \\cite{Gurbatovetal1989,Vergassolaetal1994}. \n\n\n\n\n There are few studies on $\\mb{ \\Psi} $ directly, if any. In this paper, we shall extract $\\mb{ \\Psi} $ from $N$-body simulation directly, and this will enable us to probe $\\mb{ \\Psi} $ even after shell crossing. To study $ \\mb{ \\Psi} $ directly is interesting because $ \\mb{ \\Psi} $ is the fundamental object in LPT and there are few analytical tools available to study $\\mb{ \\Psi} $ after shell crossing. The goal of this paper is to better understand the physics of shell crossing on $\\mb{ \\Psi} $ by examining $\\mb{ \\Psi} $ obtained from simulation. This may potentially lead to better modeling of LPT at late time. We will also examine some modifications of LPT. In particular, we attempt to incorporate the power suppression in small scales due to shell crossing with an effective potential. It turns out that, as shell crossing is a highly nonlinear process, this phenomenological approach is of limited success. In LPT, $\\mb{ \\Psi} $ is often taken to be potential. Another question that we want to address in this paper is whether the potential assumption is still valid at low redshifts. In particular, we will quantify how important the vector part of $ \\mb{ \\Psi} $ is at late time, when LPT is known to break down. We shall decompose the numerical $\\mb{ \\Psi} $ into scalar and vector parts. Very often in the studies related to LPT, when compared with simulation, only the density power spectrum is considered. This is justifiable as the density field is the final observable. However, as $\\mb{ \\Psi} $ plays a central role in LPT, we believe that studying it in its own right is worthwhile. \n\n\nThe paper is organized as follows. We will describe the decomposition method in Sec.~\\ref{sec:HelmholtzPsiGeneral}. 
LPT is reviewed, and the loop corrections to the power spectrum of $\mb{ \Psi} $ are written down in Sec.~\ref{sec:LPTreview}. The numerical results for the decomposition of $\mb{ \Psi} $ are presented in Sec.~\ref{sec:NumericalResults}. We will show the scalar and vector power spectra of $\mb{ \Psi} $ in detail in Sec.~\ref{sec:PkPsiNumerical}. In Sec.~\ref{sec:ModificationLPT}, we examine LPT and a couple of variants of LPT using the density power spectrum. In particular, we include a suppression factor in the displacement potential to modify LPT. We explore the scatter plot of $\nabla \cdot \mb{ \Psi} $ in Sec.~\ref{sec:ScatterDivPsi}. We conclude in Sec.~\ref{sec:Conclusions}. The general structure of the power spectrum of $\mb{ \Psi} $ is written down in Appendix \ref{sec:GeneralPsiPk}. In Appendix \ref{sec:TestCases}, we test the decomposition algorithm with some test cases. \n\n\n\n\section{Helmholtz decomposition of $ \mb{ \Psi} $ and its power spectra}\n\n\subsection{Helmholtz decomposition of $ \mb{ \Psi} $ }\n\label{sec:HelmholtzPsiGeneral}\n\n\nAny smooth vector field $\mb{ \Psi} $ can be decomposed into the form \footnote{The uniqueness of the decomposition generally depends on the boundary conditions. If it is not unique, the difference is due to a harmonic part, which is both divergence-less and curl-free, or equivalently can be written in terms of a potential that is harmonic. In electromagnetism, if we require that the field vanishes at infinity, then, because a harmonic function cannot have a local extremum, it must vanish. Here we impose the periodic boundary condition of the simulation; the only smooth function that satisfies the periodic boundary condition without a local extremum in each dimension is a constant function. If we further require the field to have zero mean, then it must vanish everywhere. 
}\n\begin{equation}\n\mb{ \Psi} = \nabla \Phi + \nabla \times \mathbf{A} ,\n\end{equation}\nwhere $\Phi$ is the scalar potential and $\mathbf{A}$ is the vector potential. We stress that the derivatives are with respect to the Lagrangian coordinates. This kind of decomposition has been widely used in physics, for example, in the decomposition of the electric field in electromagnetism \cite{Jackson} and in cosmological perturbation theory \cite{Bertschinger1995}. Recently, it has been applied to redshift space distortion, as different components correspond to different physical origins \cite{ Zhangetal2013,Zhengetal2013}. The scalar and vector potentials can be solved for through the Poisson equations \n\begin{eqnarray}\n\label{eq:Poisson_Phi}\n\nabla^2 \Phi &= &\nabla \cdot \mb{ \Psi} , \\\n\label{eq:Poisson_A}\n\nabla^2 \mathbf{A} &=& - \nabla \times \mb{ \Psi} . \n\end{eqnarray}\n\n\nIn Fourier space, the helicity basis is convenient for decomposing $ \mb{ \Psi} $ into scalar and vector parts. The helicity basis vectors are defined as \n\begin{eqnarray}\n\label{eq:Helicity0}\n\hat{ \mathbf{k}}_0 & = & \hat{ \mathbf{k} }, \\\n\label{eq:HelicityPlus}\n\hat{ \mathbf{k}}_{+} &=& \frac{1 }{ \sqrt{2} }( \hat{\mathbf{k}}_{\theta } + i \hat{\mathbf{k}}_{\phi} ), \\\n\label{eq:HelicityMinus}\n\hat{ \mathbf{k}}_{-} &=& \frac{1 }{ \sqrt{2} }( \hat{\mathbf{k}}_{\theta } - i \hat{\mathbf{k}}_{\phi} ) ,\n\end{eqnarray}\nwhere $ \hat{ \mathbf{k} }$, $ \hat{\mathbf{k}}_{\theta } $ and $\hat{\mathbf{k}}_{\phi} $ are the basis vectors in spherical coordinates. The scalar part is then given by the helicity-0 mode (the $\hat{ \mathbf{k}}_0$ component), and the vector part is decomposed into the helicity-$\pm $ modes (the $\hat{ \mathbf{k}}_{+}$ and $\hat{ \mathbf{k}}_{-}$ components). We shall make use of this basis in measuring the power spectrum. 
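On a periodic grid, this split amounts to projecting $\mb{ \Psi} ( \mathbf{k} )$ onto $\hat{ \mathbf{k}}$ mode by mode; a minimal numpy sketch (not the measurement code used later in the paper, and with illustrative grid parameters) is:

```python
import numpy as np

# Minimal FFT sketch of the Helmholtz split: the longitudinal (scalar) part
# is the projection of Psi(k) onto k_hat, the transverse (vector) part is
# the remainder.  Grid size and box length are illustrative assumptions.
def helmholtz_split(psi, box=2 * np.pi):
    n = psi[0].shape[0]
    k1 = 2 * np.pi * np.fft.fftfreq(n, d=box / n)
    kv = np.meshgrid(k1, k1, k1, indexing="ij")
    k2 = kv[0]**2 + kv[1]**2 + kv[2]**2
    k2[0, 0, 0] = 1.0                      # avoid 0/0; k=0 has no direction
    pk = [np.fft.fftn(p) for p in psi]
    kdotp = sum(ki * pi for ki, pi in zip(kv, pk))
    longi = [np.fft.ifftn(ki * kdotp / k2).real for ki in kv]
    trans = [p - l for p, l in zip(psi, longi)]
    return longi, trans
```

A curl-free input field should return an essentially vanishing transverse part, which is a convenient sanity check; by construction the two parts always sum back to the input.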
We use the terms scalar and vector parts, longitudinal and transverse parts, potential and curl parts, and helicity-0 and helicity-$ \pm $ modes interchangeably in this paper. \n\nWe stress that in standard LPT, the displacement field is almost fully potential. At late times, LPT is known to break down due to severe shell crossing. Thus the generation of the vector part of $\mb{ \Psi} $ can help us understand shell crossing and shed light on the breakdown of LPT at late times. \n\n\n\subsection{$ \mb{ \Psi} $ from Lagrangian Perturbation Theory}\n\label{sec:LPTreview}\n\nWe review LPT in this section. To facilitate the comparison with the numerical power spectrum of $\mb{ \Psi} $, we shall write down the loop corrections to the power spectrum of $\mb{ \Psi} $ from LPT. We will also describe the recipes to generate LPT catalogs numerically. We emphasize that the review of LPT here serves as a check and comparison with the numerical results shown later on; in this paper, we are more interested in exploring the effects that are not captured by the LPT discussed here. \n\n\subsubsection{Power spectrum of $\mb{ \Psi} $ from LPT }\n\n\nIn Appendix \ref{sec:GeneralPsiPk}, we show the general structure of the power spectrum of the scalar and vector components. Here we will write down the 1-loop power spectrum of $\mb{ \Psi} $ from LPT. \n\nThe displacement field $\mb{ \Psi} $ can be expanded in terms of the linear dark matter density contrast in LPT. 
Up to third order, it is given by \n\\begin{equation}\n\\label{eq:PsiExpansion}\n\\mb{ \\Psi} = \\mb{ \\Psi} ^{ (1) } + \\mb{ \\Psi} ^{ (2) } + \\mb{ \\Psi} ^{ (3a) } + \\mb{ \\Psi} ^{ (3b) } + \\mb{ \\Psi} ^{ (3c) } ,\n\\end{equation} \nwhere\n\\begin{eqnarray}\n\\mb{ \\Psi} ^{( n )} (\\mathbf{k}, t) & = & i D^n(t) \\int d^3 p_1 \\dots d^3 p_n \\delta_{\\rm D} ( \\mathbf{k} - \\mathbf{p}_{1 \\dots n } ) \\\\\n&\\times & \\mathbf{L}^{ (n) } ( \\mathbf{p}_1, \\dots , \\mathbf{p}_n ) \\delta_0 ( \\mathbf{p}_1 ) \\dots \\delta_0 ( \\mathbf{p}_n), \\nonumber\n\\end{eqnarray}\nwhere $D$ is the linear growth factor and $ \\mathbf{p}_{1 \\dots n } $ denotes $\\mathbf{p}_1 + \\dots + \\mathbf{p}_n $, $\\delta_0 $ is the initial linear dark matter density contrast, and $ \\delta_{\\rm D} $ is the Dirac delta function. The Lagrangian displacement kernels are given by \\cite{Catelan95, CatelanTheuns96, Matsubara08a, RampfBuchert12}\n\\begin{eqnarray}\n\\label{eq:1LPTZA}\n\\mathbf{L}^{(1)} ( \\mathbf{p}_1 ) &=& \\frac{ \\mathbf{p}_1 } { p_1^2 } ,\\\\\n\\label{eq:2LPT}\n\\mathbf{L}^{(2)} ( \\mathbf{p}_1,\\mathbf{p}_2) &=& \\frac{ 3 }{ 14 } \\frac{ \\mathbf{p}_{12} } { p_{12}^2 } \\Big[ 1 - \\frac{ ( \\mathbf{p}_1 \\cdot \\mathbf{p}_2 )^2 } { p_1^2 p_2^2 } \\Big] , \\\\\n\\mathbf{L}^{(3a)}_{\\rm a} ( \\mathbf{p}_1, \\mathbf{p}_2, \\mathbf{p}_3 ) & = & - \\frac{ 1 }{ 18 } \\frac{ \\mathbf{p}_{123} } { p_{123}^2 } \\Big[ 1 - 3 \\frac{ ( \\mathbf{p}_1 \\cdot \\mathbf{p}_2 )^2 }{ p_1^2 p_2^2 } \\\\\n& +& 2 \\frac{ (\\mathbf{p}_1 \\cdot \\mathbf{p}_2)( \\mathbf{p}_2 \\cdot \\mathbf{p}_3)( \\mathbf{p}_3 \\cdot \\mathbf{p}_1 ) } { p_1^2 p_2^2 p_3^2 } \\Big] , \\nonumber \\\\\n\\mathbf{L}_{\\rm a}^{(3b) } ( \\mathbf{p}_1,\\mathbf{p}_2,\\mathbf{p}_3 ) &=& \\frac{ 5 }{ 42 } \\frac{ \\mathbf{p}_{123} } { p_{123}^2 } \\Big[ 1 - \\frac{ ( \\mathbf{p}_1 \\cdot \\mathbf{p}_2 )^2 }{ p_1^2 p_2^2 } \\Big] \\\\\n& \\times & \\Big[ 1 - \\Big( \\frac{ \\mathbf{p}_{12} \\cdot \\mathbf{p}_3 }{ p_{12} 
p_3 } \\Big)^2 \\Big] , \\nonumber \\\\\n\\mathbf{L}^{(3c)}_{\\rm a} ( \\mathbf{p}_1, \\mathbf{p}_2, \\mathbf{p}_3 ) & = & \\frac{ 1 }{ 14 } \\frac{ \\mathbf{p}_1 \\cdot \\mathbf{p}_{23} }{ p_1^2 p_{23}^2 p_{123}^2 } \\Big[ 1 - \\frac{ ( \\mathbf{p}_2 \\cdot \\mathbf{p}_3 )^2 }{ p_2^2 p_3^2 } \\Big] \\\\ \n& \\times & [ \\mathbf{p}_1 ( \\mathbf{p}_{123} \\cdot \\mathbf{ p }_{23} ) - \\mathbf{p}_{23} ( \\mathbf{p}_{123} \\cdot \\mathbf{ p }_{1} ) ] . \\nonumber \n\\end{eqnarray}\nThe kernels $\\mathbf{L}^{(3)}_{\\rm a} $ are asymmetric with respect to the arguments, and we will symmetrize them as \n\\begin{equation}\n\\mathbf{L}^{(3)} (\\mathbf{p}_1, \\mathbf{p}_2 , \\mathbf{p}_3 ) = \\frac{ 1 }{ 3 } [ \\mathbf{L}^{(3) }_{\\rm a} ( \\mathbf{p}_1, \\mathbf{p}_2 , \\mathbf{p}_3 ) + 2 \\, \\rm{cyc.} ] . \n\\end{equation}\nThe first order kernel Eq.~\\ref{eq:1LPTZA} corresponds to the 1LPT, \\textit{i.e.}~the ZA \\cite{Zeldovich1970}, and Eq.~\\ref{eq:2LPT} is the 2LPT. Except $\\mathbf{L}^{(3c)}$, all the other kernels are proportional to $\\mathbf{p}_{1,\\dots n} $, where $n$ is the order, and so they are potential. Note that in LPT, it is still a potential flow in Eulerian space, and the appearance of the curl part kernel $\\mathbf{L}^{(3c)}$ is due to the coordinate transformation from the Lagrangian space to the Eulerian space \\cite{Catelan95}. \n\n\nThe power spectrum of $\\mb{ \\Psi} $ is defined as \n\\begin{eqnarray}\n\\langle \\mb{ \\Psi} _i (\\mathbf{k}_1 ) \\mb{ \\Psi} _j (\\mathbf{k}_2) \\rangle = P_{ij}(k_1) \\delta_{\\rm D} ( \\mathbf{k}_{12} ).\n\\end{eqnarray}\nUsing the expansion of $\\mb{ \\Psi} $, Eq.~\\ref{eq:PsiExpansion}, we can compute the power spectrum. 
Up to 1-loop, they are given by \n\\begin{eqnarray}\n\\label{eq:PPsi11}\nP_{ij}^{11}(k) &=& D^2 \\mathbf{L}_i^{(1)}( \\mathbf{k} ) \\mathbf{L}_j^{(1)}( \\mathbf{k} ) P_0(k) ,\\\\\n\\label{eq:PPsi22}\nP_{ij}^{22}(k) &=& 2 D^4 \\int d^3 q \\mathbf{L}_i^{(2)}( \\mathbf{q}, \\mathbf{k} - \\mathbf{q} ) \\\\\n& \\times & \\mathbf{L}_j^{(2)}( \\mathbf{q}, \\mathbf{k} - \\mathbf{q} ) P_0 (q) P_0(|\\mathbf{k} - \\mathbf{q} |) , \\nonumber \\\\ \n\\label{eq:PPsi13}\nP_{ij}^{13}(k) &=& 6 D^4 P_0(k) \\mathbf{L}_i^{(1)}(\\mathbf{k}) \\int d^3 q \\\\\n &\\times & \\mathbf{L}_j^{(3)} ( \\mathbf{k} , - \\mathbf{q} , \\mathbf{q} ) P_0(q) , \\nonumber \n\\end{eqnarray}\nwhere $P_0 $ is the initial power spectrum. The integral in Eq.~\\ref{eq:PPsi13} can be further simplified. For the longitudinal part, we have\n\\begin{eqnarray}\n&& \\int d^3 q \\mathbf{L}_j^{(3L)} ( \\mathbf{k} , - \\mathbf{q} , \\mathbf{q} ) P_0(q) = \\frac{5 \\pi }{3024 } \\frac{ \\mathbf{k} }{ k^5 } \\int dq \\\\ \n& \\times & \\frac{ P_0(q) }{ q^3 } \\Big[ -12 k^7 q + 44 k^5 q^3 + 44 k^3 q^5 \\nonumber \\\\\n& -& 12 k q^7 + 3 (k^2 - q^2 )^4 \\ln \\frac{(k+q)^2 }{(k-q)^2} \\Big] . \\nonumber \n\\end{eqnarray}\nFor the transverse part, the integral is given by\n\\begin{eqnarray}\n&& \\int d^3 q P_0( q ) \\frac{ \\mathbf{k} \\cdot \\mathbf{q} }{ 21 k^2 q^2 | \\mathbf{k} + \\mathbf{q } |^2 } \\\\\n& \\times & \\Big( 1 - \\frac{ ( \\mathbf{q} \\cdot \\mathbf{k})^2 }{q^2 k^2 } \\Big) \\mathbf{k} \\times ( \\mathbf{q} \\times \\mathbf{k} ), \\nonumber\n\\end{eqnarray}\nwhich vanishes upon integration. In fact, this follows from the argument given in Appendix ~\\ref{sec:GeneralPsiPk} that the cross power spectrum between the scalar and vector part vanishes. Therefore, the lowest order vector contribution to the power spectrum of $ \\mb{ \\Psi} $ arises from the auto power spectrum of $\\mb{ \\Psi} ^{ 3 c} $, and it is a 2-loop contribution. 
The lowest order vector contribution reads \n\\begin{eqnarray}\n\\label{eq:PPsi33v}\nP^{33 \\rm v}_{ij} (k) &= &6 D^6 \\int d^3 q_1 \\int d^3 q_2 P_0 (q_1) P_0 (q_2) P_0 ( | \\mathbf{k} - \\mathbf{ q}_{12} | ) \\nonumber \\\\\n & \\times & \\mathbf{L}^{3 c}_i (\\mathbf{q}_1, \\mathbf{q}_2, \\mathbf{k} - \\mathbf{ q}_{12} ) \\mathbf{ L}^{3c}_j (\\mathbf{q}_1, \\mathbf{q}_2, \\mathbf{k} - \\mathbf{ q}_{12} ) . \n\\end{eqnarray}\n\n\n\nIn Fig.~\\ref{fig:Pkratio_1LoopNumerical}, we show the 1-loop power spectrum of $\\mb{ \\Psi} $, normalized by the tree level ZA power spectrum. As $\\mb{ \\Psi} $ is a vector, there are numerous ways to present its power spectrum. Here we show\n\\begin{equation}\nP(k) = \\sum_{i} P_{ii} (k) ,\n\\end{equation}\nbecause it is rotationally invariant and coordinate-independent. \n\n\nAt high redshift the loop correction terms are negligible, they however become important at low redshifts. In particular we note that the contribution of $P^{13} $, which arises from 3LPT, is much more significant than $P^{22}$, which appears in 2LPT. At $z=0$, $P^{13} $ is of 10\\% of the ZA power spectrum at $k=0.1 \\, \\mathrm{Mpc}^{-1} \\, h $, while $P^{22}$ is only 1\\% at this scale. As we will see later on, including $P^{13} $, the agreement with the numerical $\\mb{ \\Psi} $ is much improved at the weakly nonlinear regime, although it quickly causes more rapid deviation from the numerical results due to the onset of shell crossing in the weakly nonlinear regime. \n\n\n\\begin{figure*}[!htb]\n\\centering\n\\includegraphics[ width=\\linewidth]{Pkratio_1LoopNumerical.pdf}\n\\caption{ The 1-loop corrections to the power spectrum of $\\mb{ \\Psi} $ at different scale factors $a=0.01, \\, 0.33, \\, 0.5$ and 1 (from left to right). The 2LPT power spectrum (solid, blue) include $P^{22}$, while the 3LPT power spectrum (solid, green) further includes $P^{13}$. 
Numerical power spectra of $\mb{ \Psi} $ generated by 1LPT (dashed, red), 2LPT (dashed, cyan), and 3LPT (dashed, violet) are also shown. They are normalized with respect to the ZA power spectrum $P^{11}$. }\n\label{fig:Pkratio_1LoopNumerical}\n\end{figure*} \n\n\nWe plot the vector power spectrum $P^{33 \rm v} $ and $P^{11}$ for comparison in Fig.~\ref{fig:PkPsiV_2Loop}. Although the two-loop contribution grows much faster than the ZA power spectrum, at $z=0 $, $ P^{33 \rm v} $ is only 1\% of the magnitude of the ZA power spectrum at $k = 1 \, \, \mathrm{Mpc}^{-1} \, h $. Thus the vector contribution from LPT to the power spectrum of $\mb{ \Psi} $ is small. However, we shall see in Sec.~\ref{sec:NumericalResults} that at late times a much larger amount of the curl part is generated in small scales due to shell crossing. This non-perturbative effect is not captured by LPT.\n\n\begin{figure}[!htb]\n\centering\n\includegraphics[ width=\linewidth]{PkPsiV_2Loop.pdf}\n\caption{ In the upper panel, the lowest order vector contribution to the power spectrum of $\mb{ \Psi} $, $P^{33 \rm v}$ (solid), and the ZA power spectrum (dashed) are plotted. In the lower panel, the ratio $P^{33 \rm v} \/ P^{11} $ is shown. Two redshifts are shown: $z=0$ (blue) and $z=1$ (red). }\n\label{fig:PkPsiV_2Loop}\n\end{figure} \n\n\n\n\subsubsection{Generating Lagrangian displacement in simulations }\nWe will also generate the dark matter density field using LPT. Here we briefly review the procedures to generate the displacement field using LPT \cite{ Bouchetetal1995, Buchertetal1994, Scoccimarro98}. Up to second order, the displacement field in LPT is potential. At third order, it acquires a curl part. However, we saw in the previous section that it does not contribute to the power spectrum of $\mb{ \Psi} $ at the 1-loop order. 
Thus we shall neglect the curl part, and the 3LPT displacement can be written in terms of the displacement potentials as \n\\begin{equation}\n\\mathbf{ \\mb{ \\Psi} }_{\\rm 3LPT} = \\nabla ( D_1 \\phi^{(1)} + D_2 \\phi^{(2)} + D_{\\rm 3a} \\phi^{(3 \\rm a)} + D_{\\rm 3b} \\phi^{(\\rm 3 b)} ). \n\\end{equation}\nThe LPT growth factors can be written in terms of the linear growth factor $D$ as\n\\begin{eqnarray}\nD_1 &=& - D , \\\\\nD_2 &=& - \\frac{3}{7} D^2, \\\\\nD_{\\rm 3a} &=& - \\frac{1}{3} D^3, \\\\\nD_{\\rm 3b} &=& - \\frac{10}{21} D^3. \n\\end{eqnarray}\nThe displacement potentials are obtained by solving the following Poisson equations:\n\\begin{eqnarray}\n\\nabla^2 \\phi^{(1)} & =& \\delta_0 , \\\\\n\\nabla^2 \\phi^{(2)} & =& -\\frac{1}{2} \\mathcal{G}_2 ( \\phi^{(1)} , \\phi^{(1)} ) , \\\\\n\\nabla^2 \\phi^{(3a)} & =& \\det ( \\nabla_{ij} \\phi^{(1)} ), \\\\\n\\nabla^2 \\phi^{(3b)} & =& -\\frac{1}{2} \\mathcal{G}_2 ( \\phi^{(1)} , \\phi^{(2)} ) , \n\\end{eqnarray}\nwhere $\\mathcal{G }_2 (\\phi^{(a)}, \\phi^{(b)} ) $ denotes\n\\begin{equation}\n\\mathcal{G }_2 (\\phi^{(a)}, \\phi^{(b)} ) \\equiv \\sum_{i,j} \\big(\\nabla_{ij}\\phi^{(a)} \\nabla_{ij}\\phi^{(b)} \\big) - \\nabla^2 \\phi^{(a)} \\nabla^2 \\phi^{(b)} .\n\\end{equation}\n\n\nIn Fig.~\\ref{fig:Pkratio_1LoopNumerical}, we also show the power spectrum of $\\mb{ \\Psi} $ obtained using the displacement field generated using 1LPT, 2LPT, and 3LPT, and they agree with the ZA and the loop corrections pretty well. \n\nWe would like to comment that in the power spectrum from 3LPT catalogs, in addition to the 1-loop contributions, there is also the 2-loop scalar contribution $P^{33 \\rm s } $. The good agreement between the 1-loop calculations and the results from 3LPT catalogs imply that the effects of $P^{33 \\rm s } $ are negligible. 
This is in stark contrast to the standard perturbation theory, in which the individual contributions of the higher order loop terms give even more sizeable contributions than the lower order ones, although the total contribution is small due to large cancellations among the individual terms. \n\n\n\section{Numerical results} \n\label{sec:NumericalResults}\n\nIn an $N$-body simulation, since we know both the initial position $\mathbf{q}$ and the final position $\mathbf{x}$ of the particles, we can easily extract $\mb{ \Psi} $ using Eq.~\ref{eq:PsiDefinition}. After getting $\mb{ \Psi} $, we can obtain the scalar and vector potentials by solving Eq.~\ref{eq:Poisson_Phi} and \ref{eq:Poisson_A}, respectively. To compute the sources $\nabla \cdot \mb{ \Psi} $ and $\nabla \times \mb{ \Psi} $, we can either use the finite difference (FD) method in real space or the spectral derivative by means of the Fast Fourier Transform (FFT) in Fourier space. In Appendix \ref{sec:TestCases}, we test the FD and FFT methods with some test cases, and we find that the FFT method performs better than the FD one. Thus we shall use the FFT method throughout this paper. With the scalar and vector potentials, we can obtain the scalar and vector parts of $ \mb{ \Psi} $. Again we take the derivatives with the FFT method. Another way to obtain the vector part of the field is simply to subtract the scalar part from the input field. We shall use both methods as crosschecks, and abbreviate the one obtained from the vector potential as Vector and the one obtained by subtracting the scalar part from the input field as Input $-$ Scalar. We shall see that both methods yield very similar results.\n\n\nIn the literature there have been measurements of the scalar and vector parts of the velocity field \cite{BernardeauWeygaert1996,WeygaertSchaap, PueblasScoccimarro2009,Zhengetal2013}. A major difficulty in velocity measurement is that the velocity field is sampled by discrete point particles. 
If the velocity field is obtained by interpolating the velocity of the particles to a grid, one would get a mass-weighted field rather than a volume-weighted one, \textit{i.e.}, one obtains momentum instead of velocity. In the void region, the velocity is not necessarily small although there are few particles available for interpolation. Various methods have been developed to cope with this problem, such as the Delaunay tessellation method \cite{BernardeauWeygaert1996,WeygaertSchaap, PueblasScoccimarro2009}. However, for the measurement of $\mb{ \Psi} $, since it is defined at all grid points, we do not have this sparse sampling problem. \n\n\n\nAs mentioned in Sec.~\ref{sec:HelmholtzPsiGeneral}, using the basis vectors Eq.~\ref{eq:Helicity0}--\ref{eq:HelicityMinus}, the fields are decomposed into the scalar (helicity-0 component) and the vector (helicity-$+$ and helicity-$-$ components) automatically. We will also use this method as a crosscheck. \n\n\n\nBefore presenting the numerical results, we shall first outline the details of the simulation used in this paper. In the simulation, there are $1024^3$ particles. Two box sizes are used, 1500 $ \, \mathrm{Mpc} \, h^{-1} $ and 250 $ \, \mathrm{Mpc} \, h^{-1} $. We use one realization for the 1500 $ \, \mathrm{Mpc} \, h^{-1} $ box and three realizations for the 250 $ \, \mathrm{Mpc} \, h^{-1} $ one. The cosmology is a flat $\Lambda$CDM model, with the WMAP 7 cosmological parameters adopted \cite{WMAP7}, \textit{i.e.}, $\Omega_{\rm m} = 0.272$, $\Omega_{\Lambda}=0.728$, $\Omega_{\rm b} = 0.0455$, and $\sigma_8=0.81$. Thus each particle carries a mass of $2.37 \times 10^{11} \, M_{\odot} h^{-1} $ for the large box and $1.10 \times 10^{9} \, M_{\odot} h^{-1} $ for the small box. The large box enables us to probe the large-scale modes. 
On the other hand, as shell crossing is a small scale phenomenon, the small box simulation, with better mass and spatial resolution, will enable us to capture its effect more accurately. The initial condition is Gaussian with a spectral index of 0.967. The transfer function is output from CAMB \cite{CAMB} at redshift 99. The initial particle displacements are implemented using 2LPT \cite{CroccePeublasetal2006}. The simulation is done using Gadget2 \cite{Gadget2}. See \cite{Biagetietal2013} for more details. \n\n\n\subsection{Numerical helicity power spectrum of $ \mb{ \Psi} $ }\n\label{sec:PkPsiNumerical}\nWe show in Fig.~\ref{fig:vec_field} the sections of the vector fields projected onto the $x-y$ plane for the original $\mb{ \Psi} $, its scalar component, and the vector components obtained by solving the Poisson equation (Vector) and by subtracting the scalar components from the original field (Input $-$ Scalar). We also show the Eulerian positions of the particles. First, the original input field is almost visually identical to its scalar component at all redshifts shown. The vector component is much smaller and does not have a large-scale coherent component. We note that the vector component fluctuates in sign at small scales; this qualitatively agrees with \cite{PichonBernardeau,PueblasScoccimarro2009}. The vector components obtained with the two different methods result in almost identical field patterns. In Appendix \ref{sec:TestCases}, we also find that these methods give very similar reconstruction results. By comparing the Eulerian positions of the particles with the plot of the vector part of $\mb{ \Psi} $, it is clear that the vector parts are generated in the high density regions where caustics form. \n\n\begin{figure*}[!htb]\n\centering\n\includegraphics[ width=\linewidth]{vec_field_EulPos_250.png}\n\caption{ Sections of the vector fields. The fields are obtained from the 250 $ \, \mathrm{Mpc} \, h^{-1} $ box simulation. 
In each section, we show the projection of $\\mb{ \\Psi} $ onto the $x$-$y$ plane. The foot of each arrow is located at the initial position $\\mathbf{q} $. The size of each section is 200 by 200 $( \\, \\mathrm{Mpc} \\, h^{-1} )^2$. The columns correspond to the original $\\mb{ \\Psi} $ measured from the numerical simulation, the scalar component of $\\mb{ \\Psi} $, the vector component obtained by solving for the vector potential, the vector field obtained by subtracting the scalar part from the original field, and the Eulerian positions of the particles (from left to right). Different rows are for $z=2$, 1 and 0, respectively (from top to bottom). The displacement fields are to scale for Input and Scalar, but blown up by a factor of 5 for Vector and Input $-$ Scalar. }\n\\label{fig:vec_field}\n\\end{figure*} \n\n\nWe now turn to the power spectrum to study the decomposition more quantitatively. In Fig.~\\ref{fig:PkPsi_SRatio}, we compare the power spectrum of the original field $\\mb{ \\Psi} $, the power spectrum of its scalar part, and also the 2LPT and 3LPT loop power spectra. At $z=99$, the initial conditions are set by 2LPT, which is completely potential, and indeed the vector component is consistent with zero. \n\nAt large scales, the results from the two boxes agree; however, at low redshifts, the small box yields higher power than the large one at small scales. At $k \\sim 1 \\, \\mathrm{Mpc}^{-1} \\, h $, the small box results give more than 10\\% higher power than the large box ones. Having better mass and spatial resolution, the small box measures the effects of shell crossing more accurately, so we trust its results in the large-$k$ regime. We add an arrow to indicate the scale below which the large box results agree with the small box ones, and hence are reliable. This scale is about 0.3 $ \\, \\mathrm{Mpc}^{-1} \\, h $. For the small box, the increase in power at small scales is due to aliasing. 
We also add an arrow as a rough guide to indicate the scale above which aliasing could be significant. \n\n\n The input field and its scalar component have the same power spectrum at large scales, and deviations between them occur only at large $k$ and low redshift. At redshift 0, we find that the original field has about 10\\% higher power at $k\\sim 1 \\, \\, \\mathrm{Mpc}^{-1} \\, h $ than the scalar component. Thus the scalar component is still the dominant contribution at large scales even after shell crossing. It is interesting to note that although the overall magnitude of the power spectrum of the original field and its scalar mode differs at this scale for the two box sizes, the ratio between the original field and its scalar mode agrees quite well. \n\n\n At low redshifts, 3LPT gives higher power than both ZA and 2LPT, and it agrees with the numerical power spectrum better in the weakly nonlinear regime. However, the 3LPT power spectrum keeps rising, while the numerical one turns over due to shell crossing. In Fig.~\\ref{fig:PkPsi_SRatio}, we first see a bump and then a sharp drop in power, which indicates that the scale at which the nonlinear higher-order corrections become important is larger than the shell-crossing scale. We also note that the turn-over wavenumber decreases with time; at $z=0 $, it is around $0.1 \\, \\, \\mathrm{Mpc}^{-1} \\, h $. As the shell-crossing scale increases, higher-order LPT corrections only cause more rapid deviation from the numerical results in the weakly nonlinear regime. \n\n\n\\begin{figure*}[!htb]\n\\centering\n\\includegraphics[ width=\\linewidth]{PkPsi_SRatio.pdf}\n\\caption{ The power spectrum of $\\mb{ \\Psi} $ from simulation, its scalar part, and the 2LPT (solid, violet) and 3LPT (solid, yellow) results. The power spectrum of $\\mb{ \\Psi} $ from simulation (Inp) is shown for the two box sizes, 1500 (dashed, blue) and 250 $ \\, \\mathrm{Mpc} \\, h^{-1} $ (dashed, green). 
The scalar component (S) of $\\mb{ \\Psi} $ from the 1500 $ \\, \\mathrm{Mpc} \\, h^{-1} $ (dotted-dashed, red) and 250 $ \\, \\mathrm{Mpc} \\, h^{-1} $ (dotted-dashed, cyan) boxes is also shown. The 250 $ \\, \\mathrm{Mpc} \\, h^{-1} $ box is averaged over three realizations, but the error bars are not shown for clarity. The subplots from left to right are for $a=0.01$, 0.33, 0.5 and 1. At low redshifts, around $k\\sim 1 \\, \\mathrm{Mpc}^{-1} \\, h $, the 250 $ \\, \\mathrm{Mpc} \\, h^{-1} $ box, which has better resolution, yields about 10\\% higher power than the larger box. For each set of simulations, an arrow is added to suggest the scale above which there could be numerical artifacts. }\n\\label{fig:PkPsi_SRatio}\n\\end{figure*} \n\nAs mentioned earlier, we can check the accuracy of the decomposition by further breaking down the field in the helicity basis. For the scalar part, there should be no helicity-$\\pm$ components, and the amount of residual indicates the accuracy of the algorithm. In Fig.~\\ref{fig:PkPsi_S_SErr}, we plot the components of the numerical scalar part in the helicity basis. The results are indeed dominated by the helicity-$0$ part. There is a small amount of helicity-$+$ power, but it is six orders of magnitude smaller than the signal. We do not show the helicity-$-$ part because it is identical to the helicity-$+$ one, as expected from symmetry. Also note that the helicity-$+$ residuals from the two boxes do not overlap, suggesting that they arise from numerical artifacts. \n \n\n\\begin{figure*}[!htb]\n\\centering\n\\includegraphics[ width=\\linewidth]{PkPsi_S_SErr.pdf}\n\\caption{ The numerical scalar components of $\\mb{ \\Psi} $ are further broken down into helicity components, the helicity-0 component (solid line, S0) and the helicity-$+$ component (dashed line, S+). Results from two boxes are shown: 1500 $ \\, \\mathrm{Mpc} \\, h^{-1} $ (blue) and 250 $ \\, \\mathrm{Mpc} \\, h^{-1} $ (red). 
For the scalar components, there should be no helicity-$+$ parts, and the amount of helicity-$+$ component present indicates the accuracy of the numerical algorithm. The error, \\textit{i.e.},~the helicity-$+$ component, is six orders of magnitude smaller than the signal. Note that the helicity-$-$ components are not shown as they are identical to the helicity-$+$ ones. }\n\\label{fig:PkPsi_S_SErr}\n\\end{figure*} \n\n\n\nWe now go on to look at the vector part of $\\mb{ \\Psi} $ more carefully. As it is a small quantity, we first try to identify possible spurious numerical artifacts. As for the scalar component of $\\mb{ \\Psi} $, we will further break it down into the helicity components as a sanity check. In Fig.~\\ref{fig:PkPsi_V_VErr}, we display the various helicity components of the vector part of $\\mb{ \\Psi} $. At $z=99$, the initial conditions are set by 2LPT, so there should be no vector component of $\\mb{ \\Psi} $ at all. Thus the powers we see there are errors. We note that there is some constant residual vector power spectrum. For the 1500 $ \\, \\mathrm{Mpc} \\, h^{-1} $ box, its magnitude is about $10^{-11} ( \\, \\mathrm{Mpc} \\, h^{-1} )^5$, while for the 250 $ \\, \\mathrm{Mpc} \\, h^{-1} $ box, it is much smaller, about $10^{-15} ( \\, \\mathrm{Mpc} \\, h^{-1} )^5 $. Recall that for the scalar components of $\\mb{ \\Psi} $ in Fig.~\\ref{fig:PkPsi_S_SErr}, the results are much less sensitive to the box size. For the vector part, the small box yields much more accurate results than the large one. \n\n\nAs redshift decreases, the helicity-$+$ part develops a bump at small scales, around $k \\sim 0.5 \\, \\mathrm{Mpc}^{-1} \\, h $. For this bump, the results from both boxes agree with each other. However, the results from the large box suggest that the power spectrum goes up as $k$ decreases for $k < 0.02 \\, \\mathrm{Mpc}^{-1} \\, h $. This part of the spectrum is in fact time independent. 
Because we expect the vector power spectrum to be generated by shell crossing at small scales, this feature cannot be physical. Furthermore, at $z=2$, we note that the helicity-$+$ component from the small box differs from the large box one and keeps decreasing as $k$ decreases. The vector components depend sensitively on the mass resolution, \\textit{i.e.}, the particle mass in the simulation. The small box has much better mass resolution than the large one. When the mass resolution is poor, the vector components are spuriously enhanced. Similar results are also found in the context of vorticity \\cite{PueblasScoccimarro2009}. Therefore we will not consider this spurious large-scale feature from the large box any further. \n\nThe two different methods of obtaining the vector components yield very similar results, except for the large box at the largest scales. Also, the error of the decomposition, \\textit{i.e.}, the helicity-$0$ component, is generally small, about six orders of magnitude below the signal. However, the error increases rapidly for the large box for $k \\lesssim 0.02 \\, \\mathrm{Mpc}^{-1} \\, h $. All of this suggests that the results from the large box at the largest scales are not reliable. \n\n\\begin{figure*}[!htb]\n\\centering\n\\includegraphics[ width=\\linewidth]{PkPsi_V_VErr.pdf}\n\\caption{ The vector components of $\\mb{ \\Psi} $ obtained from the vector potential (V, blue and green) and from subtracting the scalar components from the input field (ImS, red and yellow) are further decomposed into helicity components as a sanity check. The signal is the helicity-$+$ part (dashed), while the amount of helicity-$0$ (solid) component indicates the accuracy of the algorithm. The small box (250 $ \\, \\mathrm{Mpc} \\, h^{-1} $) yields more accurate results than the large one (1500 $ \\, \\mathrm{Mpc} \\, h^{-1} $) due to its much better mass resolution. 
\n }\n\\label{fig:PkPsi_V_VErr}\n\\end{figure*} \n\n\n\n\n\n\n\n\n\n\nWe display the helicity-$+$ power spectrum of $\\mb{ \\Psi} $ for various redshifts in Fig.~\\ref{fig:PkV_P33v_scaling}. We show results from both the 1500 and 250 $ \\, \\mathrm{Mpc} \\, h^{-1} $ boxes. However, as we discussed above, there are potentially spurious artifacts in the vector power spectrum at the largest scales in the large box simulation. For the sake of clarity, we have removed the data points at the largest scales in the 1500 $ \\, \\mathrm{Mpc} \\, h^{-1} $ box. Those removed data points show an increasing trend as $k$ decreases, and they collapse onto the same line at the largest scales. At intermediate scales, $k \\sim 0.06 -0.4 \\, \\, \\mathrm{Mpc}^{-1} \\, h $, where both boxes have good resolution, they agree with each other. Around $k\\sim 2 \\, \\, \\mathrm{Mpc}^{-1} \\, h $, the large box gives higher power than the small box, and this is due to aliasing in the power spectrum of the large box simulation. \n\n\nThe helicity-$+$ contribution from LPT, $P^{33 \\rm v} /2 $, is shown in the upper panel of Fig.~\\ref{fig:PkV_P33v_scaling} for comparison. At $z=2 $, the signal detected is more than an order of magnitude greater than the LPT contribution for $k > 0.4 \\, \\mathrm{Mpc}^{-1} \\, h $, while at large scales, the signal is closer to the LPT results. This is consistent with the picture that a significant amount of vector contribution is generated by shell crossing at small scales. As redshift decreases, the power at large scales grows more rapidly than the LPT results. At $z=0$, the vector contribution from LPT is an order of magnitude smaller than the signal detected for $k\\lesssim 1 \\, \\mathrm{Mpc}^{-1} \\, h $. \n\n\nWe measure the scaling with time of the part of the vector power spectrum before the turn-over. As the large box suffers from numerical issues at large scales, we only fit to the small box results, up to $k=0.1 \\, \\, \\mathrm{Mpc}^{-1} \\, h $. 
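The time scaling of this large-scale vector power can be extracted by a linear regression of $\ln P$ on $\ln D$ across snapshots. A minimal sketch of the procedure (the growth factors and amplitude below are synthetic, purely to illustrate the fit; they are not our measured data):

```python
import numpy as np

# Synthetic large-scale vector power, generated from an assumed P ∝ D^9.5
# purely to illustrate the fitting procedure.
D = np.array([0.42, 0.51, 0.61, 0.77, 1.0])  # illustrative growth factors
n_true = 9.5
P = 1e-6 * D**n_true                         # illustrative amplitude

# Fit P = A D^n as a straight line in log-log space.
n_fit, lnA = np.polyfit(np.log(D), np.log(P), 1)
assert np.isclose(n_fit, n_true)
```

In practice one would propagate the measurement errors of $P(k)$ into the regression to obtain the quoted error bar on $n$.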
We find that the large-scale vector power spectrum can be fitted by a power law $ D^n$, where the best fit is $n=9.5$, with 1 $\\sigma $ error bar [9.2, 10.0]. In the lower panel of Fig.~\\ref{fig:PkV_P33v_scaling}, we also show the vector power spectrum obtained by scaling the one from the 250 $ \\, \\mathrm{Mpc} \\, h^{-1} $ box at $z=0$ using the best fit $n$. It is clear that the turn-over moves to larger scales as redshift decreases.\n\n\n\\begin{figure}[!htb]\n\\centering\n\\includegraphics[ width=\\linewidth]{PkV_P33v_scaling.pdf}\n\\caption{ The helicity-$+$ power spectrum of $\\mb{ \\Psi} $. The sets of curves are for $z=2$ (blue), 1.5 (red), 1 (green), 0.5 (cyan) and 0 (yellow), respectively (from bottom to top). The solid curves are from the 1500 $ \\, \\mathrm{Mpc} \\, h^{-1} $ box and the dashed ones are from the 250 $ \\, \\mathrm{Mpc} \\, h^{-1} $ one. In the upper panel, we show the power spectrum of the vector contribution in LPT, $P^{33 \\rm v} / 2 $ (dotted-dashed). In the lower panel, the dotted-dashed curves are obtained by scaling the measurement from the 250 $ \\, \\mathrm{Mpc} \\, h^{-1} $ box at $z=0$ by the factor $D^{9.5}$. The data points at the largest scales from the 1500 $ \\, \\mathrm{Mpc} \\, h^{-1} $ simulation with potentially spurious artifacts have been removed for clarity. }\n\\label{fig:PkV_P33v_scaling}\n\\end{figure} \n\n\nIn \\cite{PueblasScoccimarro2009}, the vorticity power spectrum of the velocity field is measured (Fig.~3 in \\cite{PueblasScoccimarro2009}). Vorticity in the velocity field is also generated by shell crossing. To compare the vector power spectrum of $\\mb{ \\Psi} $ with the vorticity power spectrum in \\cite{PueblasScoccimarro2009}, it is useful to first clarify the relation and the differences between them. 
The vorticity $\\mathbf{w} $ is defined as\n\\begin{equation}\n\\mathbf{w} = \\frac{ \\nabla_{\\mathbf{x}} \\times \\mathbf{u} }{f \\mathcal{H} }, \n\\end{equation}\nwhere $f=d \\ln D / d \\ln a $, $\\mathcal{H} = d \\ln a / d \\tau $ and $\\mathbf{u} $ is the comoving velocity $ d \\mathbf{x} / d \\tau$. Note that the derivative is with respect to the Eulerian coordinate $\\mathbf{x}$. The quantity analogous to $\\mathbf{w} $ is $\\nabla \\times \\mb{ \\Psi} $. The power spectrum of $\\nabla \\times \\mb{ \\Psi} $ is given by \n\\begin{equation}\n\\sum_{i=j} \\langle [ \\nabla \\times \\mb{ \\Psi} (\\mathbf{k}_1 ) ]_i [ \\nabla \\times \\mb{ \\Psi} (\\mathbf{k}_2 ) ]_j \\rangle = k_1^2 \\sum_{i=j} \\langle \n \\mb{ \\Psi} _i(\\mathbf{k}_1 ) \\mb{ \\Psi} _j (\\mathbf{k}_2 ) \\rangle. \n\\end{equation}\nThus we should multiply the vector power spectrum of $\\mb{ \\Psi} $ by $k^2$ when comparing with the vorticity power spectrum. One key difference between $\\mb{ \\Psi} $ and velocity is that $\\mb{ \\Psi} $ is always defined at the Lagrangian position $\\mathbf{q}$, while velocity is defined at the Eulerian position $ \\mathbf{x} $. Another key difference is that the velocity field is the time derivative of $\\mb{ \\Psi} $ at one instant, while $\\mb{ \\Psi} $ gives the cumulative effect over time. \n\n\nIn \\cite{PueblasScoccimarro2009}, it was found that when the particle mass is large, the vorticity is artificially enhanced. Convergence in the vorticity power spectrum is achieved when the particle mass is less than about $10^9 \\, M_{\\odot} h^{-1}$. This is similar to our finding that in the large box the vector power spectrum suffers from numerical artifacts at large scales, while the small box, with particle mass $1.1 \\times 10^9 \\, M_{\\odot} h^{-1}$, seems to be free of them. Thus mass resolution plays an important role in the measurement of vorticity. 
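The $k^2$ relation above between the curl power and the vector power of $\mb{ \Psi} $ can be verified mode by mode. A minimal numerical check for a single transverse Fourier mode (the wavevector and amplitudes are arbitrary illustrative values):

```python
import numpy as np

def curl_mode(psi_k, kvec):
    """Fourier-space curl of a single mode: (curl Psi)(k) = i k x Psi(k)."""
    return 1j * np.cross(kvec, psi_k)

# A transverse (divergence-free) mode, k . Psi = 0.
k = np.array([0.3, 0.0, 0.0])            # wavevector, Mpc^-1 h
psi = np.array([0.0, 1.0 + 0.5j, 2.0])   # transverse amplitudes
w = curl_mode(psi, k)

P_psi = np.sum(np.abs(psi)**2)   # vector power of Psi at this mode
P_curl = np.sum(np.abs(w)**2)    # power of curl Psi

# For a transverse mode, |curl Psi|^2 = k^2 |Psi|^2.
assert np.isclose(P_curl, np.dot(k, k) * P_psi)
```

The identity holds only for the transverse (vector) part, since the curl annihilates the scalar part; this is why it is the vector power spectrum of $\mb{ \Psi} $ that should be multiplied by $k^2$.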
\n\n\nAt $z=0$, we find that the vector power spectrum of $\\mb{ \\Psi} $ turns over at $k \\sim 0.2 \\, \\mathrm{Mpc}^{-1} \\, h $ in Fig.~\\ref{fig:PkV_P33v_scaling}. After multiplying by the factor of $k^2$, the turn-over occurs at $k \\sim 0.3 \\, \\mathrm{Mpc}^{-1} \\, h $, while the vorticity power spectrum turns over at $k \\sim 1 \\, \\mathrm{Mpc}^{-1} \\, h $. Another difference from \\cite{PueblasScoccimarro2009} is that the growth of the vorticity power spectrum at the largest scales was found to scale as $D^7 $. \n\n\nIn this section, we have measured the scalar and the vector components of $\\mb{ \\Psi} $. We find that the scalar power spectrum of $\\mb{ \\Psi} $ is suppressed due to shell crossing. Shell crossing also generates a vector part of the power spectrum. However, the generated vector part is still very subdominant compared to the scalar one. Thus the scalar assumption remains valid even after the onset of shell crossing.\n\n\n\n\n\\subsection{Modifications of LPT} \n\\label{sec:ModificationLPT}\nIn this subsection, and partly in the next one, we examine two modifications of LPT. In the first approach, we incorporate the information that shell crossing suppresses the power spectrum of $\\mb{ \\Psi} $ by modifying the displacement potential. The other approach combines LPT with the spherical collapse model \\cite{KitauraSreffen2012}. We test how well these phenomenological models perform by checking the density power spectrum at the end of this section. We will see that these approaches yield limited improvements after the onset of shell crossing. \n\n\nSince we know the power spectrum of $ \\mb{ \\Psi} $ after shell crossing, we may use this extra information to improve the LPT used to construct halo catalogs. To do so, we fit the numerical power spectrum by multiplying the LPT $P(k)$ by a suppression factor. 
The functional form we use is \n\\begin{equation}\n\\label{eq:LPTsPkfit}\nP_{\\rm LPTs} = \\frac{ 1 }{ 1 + \\alpha k^n } P_{\\rm LPT}, \n\\end{equation}\nwhere $P_{\\rm LPT} $ is the LPT power spectrum, and $\\alpha $ and $n$ are free parameters. We call this LPTs, where s denotes suppression. We propose to modify $\\mb{ \\Psi} _{\\rm LPT} $ to \n\\begin{equation}\n\\label{eq:LPTsPsi}\n\\mb{ \\Psi} _{\\rm LPTs} (\\mathbf{k} ) = \\frac{ 1 }{ \\sqrt{ 1 + \\alpha k^n } } \\mb{ \\Psi} _{\\rm LPT}. \n\\end{equation}\nIn practice, we generate the LPTs catalogs by multiplying the LPT potential by the factor $ 1 / \\sqrt{ 1 + \\alpha k^n } $. This factor suppresses the power at small scales. To some extent, the idea is similar to the truncated ZA \\cite{MelottPellmanetal1994}, in which the ZA displacement field is computed using the power spectrum with the power beyond the nonlinear scale removed. \n\nThe functional form Eq.~\\ref{eq:LPTsPkfit} does not fit the numerical power spectrum of $\\mb{ \\Psi} $ well over the whole range. Our goal is to fit the large-scale part as well as possible, for example up to $k\\sim 0.5 \\, \\mathrm{Mpc}^{-1} \\, h $, and we often find that the resulting fit underestimates the power in the high-$k$ regime. We have tried a few other simple functional forms; they show qualitatively similar behavior to Eq.~\\ref{eq:LPTsPkfit}. Worse still, even when Eq.~\\ref{eq:LPTsPkfit} provides a good fit to the numerical power spectrum, for example within 5\\% up to $k=0.5 \\, \\, \\mathrm{Mpc}^{-1} \\, h $, and the fitting formula is fed into Eq.~\\ref{eq:LPTsPsi} to generate the catalog numerically, we find that the power spectrum of $\\mb{ \\Psi} $ from the resulting catalog deviates much more than the fitting formula Eq.~\\ref{eq:LPTsPkfit} would suggest. 
This is not surprising given that shell crossing is a highly nonlinear process.\n\n\n We carry out the fitting using the 2LPT and 3LPT power spectra.\nWe fit to the numerical results from the 250 $ \\, \\mathrm{Mpc} \\, h^{-1} $ box. For 2LPT, we find that for the redshifts available the data can be fitted by \n\\begin{equation}\n\\label{eq:2LPTs}\nn=1.8, \\quad \\ln \\alpha(z) = 1.3 + 4.6 \\ln D(z),\n\\end{equation}\nwhere $D(z)$ is the linear growth factor, normalized such that it reduces to the scale factor in the matter-dominated era. The fitting power spectrum agrees with the numerical one within 10\\% up to $k=1 \\, \\, \\mathrm{Mpc}^{-1} \\, h $. The fit is not very good because the 2LPT power spectrum deviates from the numerical one at quite large scales. For the 3LPT power spectrum we find that across different redshifts, the numerical power spectrum can be fitted by \n\\begin{equation}\n\\label{eq:3LPTs}\nn=1.5, \\quad \\ln \\alpha(z) = 2.1 + 3.5 \\ln D(z). \n\\end{equation}\n\n \n\n\nWe use the best fits Eqs.~\\ref{eq:2LPTs} and \\ref{eq:3LPTs} to generate the LPTs catalogs. The power spectrum of the displacement field from the LPTs catalogs is shown in Fig.~\\ref{fig:Pkratio_LPTs_ALPT}. The suppression factor indeed brings the LPT power spectrum much closer to the simulation results. However, we note that the fitting formulas Eqs.~\\ref{eq:2LPTs} and \\ref{eq:3LPTs} match the simulation results better than the catalogs shown in Fig.~\\ref{fig:Pkratio_LPTs_ALPT} do, and the deviation gets bigger as the redshift decreases.\n\n\nThe fitting formulas Eqs.~\\ref{eq:2LPTs} and \\ref{eq:3LPTs} are obtained for the standard cosmological parameters. The dependence on the cosmological parameters has not been checked, although it may be expected to be weak, as the fits are parametrized in terms of the linear growth factor. 
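As a concrete sketch (not the production code used for the catalogs), applying the LPTs modification of Eq.~\ref{eq:LPTsPsi} with the 2LPT best fit of Eq.~\ref{eq:2LPTs} amounts to a mode-by-mode rescaling of the displacement:

```python
import numpy as np

def alpha_2lpt(D):
    """Best-fit alpha(z) for 2LPT (Eq. eq:2LPTs): ln(alpha) = 1.3 + 4.6 ln D."""
    return np.exp(1.3 + 4.6 * np.log(D))

def lpts_rescale(psi_lpt_k, kmag, D, n=1.8):
    """Eq. eq:LPTsPsi: suppress the Fourier-space 2LPT displacement."""
    return psi_lpt_k / np.sqrt(1.0 + alpha_2lpt(D) * kmag**n)

# The factor tends to 1 at small k (no large-scale suppression); as an
# illustration, evaluate it at k = 1 Mpc^-1 h and z = 0 (D = 1).
fac = lpts_rescale(1.0, 1.0, 1.0)
assert np.isclose(lpts_rescale(1.0, 0.0, 1.0), 1.0)
```

In the actual catalogs the same factor is applied to the displacement potential, from which the displacement field follows by differentiation.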
Also, it may be useful to stress that these formulas are obtained for the $\\Lambda $CDM model only; they should be established again for other cosmological models. However, for our purpose here, we shall use these effective potentials to generate the mock catalogs and see how much we can improve upon the standard LPT. We shall use the density power spectrum as the diagnostic. \n\n\n\n\n\\begin{figure*}[!htb]\n\\centering\n\\includegraphics[ width=\\linewidth]{Pkratio_LPTs_ALPT.pdf}\n\\caption{ The power spectrum of $\\mb{ \\Psi} $ obtained from the simulations with box sizes 1500 (green, solid) and 250 $ \\, \\mathrm{Mpc} \\, h^{-1} $ (blue, solid), from 2LPT (red, dashed) and 3LPT (cyan, dashed), and from the 2LPTs (violet, solid) and 3LPTs (yellow, solid) catalogs. We also plot the power spectrum of $\\mb{ \\Psi} $ from ALPT (black, solid) obtained from the prescription Eq.~\\ref{eq:ALPT}. All are normalized with respect to the ZA power spectrum. }\n\\label{fig:Pkratio_LPTs_ALPT}\n\\end{figure*} \n\n\nThe second model that we examine in detail is a hybrid of LPT and the spherical collapse model \\cite{KitauraSreffen2012}. Recently, there have been suggestions to improve LPT by reducing shell crossing using the spherical collapse approximation. References~\\cite{Berbardeau1994, Mohayaeeetal2006} derived a simple evolution equation for the Lagrangian volume based on the spherical collapse approximation, and \\cite{Neyrinck2012} found that it agrees well with simulations. However, the spherical approximation underestimates the power at large scales. Ref.~\\cite{KitauraSreffen2012} then proposed to combine the LPT displacement with the spherical collapse displacement by splitting the displacement vector into large-scale and small-scale parts. The two regimes are separated by a filtering scale. 
On scales larger than the filtering scale, the displacement field is given by LPT, while on smaller scales it is given by the spherical collapse displacement. The authors called this Augmented LPT (ALPT). Mathematically, it reads \\cite{KitauraSreffen2012}\n\\begin{equation}\n\\label{eq:ALPT}\n\\mb{ \\Psi} ( \\mathbf{k} ) = W(k,r_s) \\mb{ \\Psi} _{\\rm LPT}( \\mathbf{k} ) + [ 1 - W(k,r_s ) ] \\mb{ \\Psi} _{\\rm SC }( \\mathbf{k}). \n\\end{equation}\nIn \\cite{KitauraSreffen2012}, $W$ is chosen to be a Gaussian window \n\\begin{equation}\nW(k,r_s) = e^{- ( k r_s)^2 / 2 }, \n\\end{equation}\nand $r_s=3 \\, \\, \\mathrm{Mpc} \\, h^{-1} $ is found to give the best result. The Lagrangian displacement field $ \\mb{ \\Psi} _{\\rm LPT} $ is given by 2LPT and $ \\mb{ \\Psi} _{\\rm SC } $ is obtained from \n\\begin{equation}\n\\nabla \\cdot \\mb{ \\Psi} _{\\rm SC} = 3 \\Big[ \\Big( 1 - \\frac{2}{3} D \\nabla^2 \\phi^{(1)} \\Big)^{1/2} - 1 \\Big], \n\\end{equation}\nwhere the square root is set to zero if $1 - \\frac{2}{3} D \\nabla^2 \\phi^{(1)} $ is negative. \n\n\n\n We now proceed to compare the density power spectrum obtained using the various recipes with that from the simulation. The LPT catalogs are produced and the particles are interpolated to the grid by the Cloud-in-Cell algorithm to compute the power spectrum. In Fig.~\\ref{fig:Pkdelta}, we compare the density power spectrum obtained from the various approaches against the simulation results. Note that the random seed used in the simulation is different from the one in the LPT catalogs, which is why their power differs appreciably at large scales. \n\n\nAt high redshift, $z=2$, higher-order LPT performs better than lower-order LPT. 3LPT tracks the $N$-body results well, and its power is within 1\\% of the $N$-body one up to the $k = 0.4 \\, \\mathrm{Mpc}^{-1} \\, h $ shown. LPTs does not give better results than the standard LPT. 
3LPTs in fact yields slightly lower power than 3LPT, while 2LPTs gives almost the same results as 2LPT. The performance of ALPT is similar to 3LPT, although it gives slightly higher power than the $N$-body results for $ k > 0.5 \\, \\mathrm{Mpc}^{-1} \\, h $.\n\n\nAs redshift decreases, the differences between the LPT results and the simulation widen. At the intermediate redshift, $z=1$, higher-order LPT still outperforms lower-order LPT. 3LPT is still the best in the mildly nonlinear regime: its power is only 4\\% lower than the $N$-body results up to $k \\sim 0.25 \\, \\mathrm{Mpc}^{-1} \\, h $. As at $z=2$, 3LPTs yields slightly lower power than 3LPT, and 2LPTs performs very similarly to 2LPT. We also note that all the LPT recipes cluster within a small strip for $ k > 0.6 \\, \\mathrm{Mpc}^{-1} \\, h $. ALPT yields slightly lower power than 3LPT in the intermediate regime, as it is based on 2LPT; however, it gives higher power than 3LPT for $ k > 0.5 \\, \\mathrm{Mpc}^{-1} \\, h $. \n\n\n At $z=0$, almost all the LPT recipe results fall below the linear theory one. In the mildly nonlinear regime, for standard LPT, the higher the order of perturbation, the lower the power. ZA gives higher power than 2LPT and 3LPT for $ k > 0.3 \\, \\mathrm{Mpc}^{-1} \\, h $. In the weakly nonlinear regime, $ k\\sim 0.1 \\, \\mathrm{Mpc}^{-1} \\, h $, and for $k > 0.3 \\, \\mathrm{Mpc}^{-1} \\, h $, LPTs yields slightly higher power than LPT. ALPT results in the highest power among all the LPT recipes for $ k > 0.1 \\, \\mathrm{Mpc}^{-1} \\, h $. \n\n\nFinally, we note that the scales at which the LPT results deviate substantially from the simulation results in the density power spectrum are quite similar to those in the power spectrum of $ \\mb{ \\Psi} $. For example, a 5\\% deviation of the 3LPT results from the numerical ones occurs roughly around 0.3 (at $z=1$) and 0.1 $ \\, \\mathrm{Mpc}^{-1} \\, h $ (at $z=0$) in both cases. 
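The density estimation step used above (CIC assignment of the catalog particles followed by an FFT) can be sketched in one dimension; this is a simplified stand-in for the 3D estimator (in 3D the weight is a product of three such linear kernels), and the function names are ours:

```python
import numpy as np

def cic_density_1d(x, ngrid, boxsize):
    """Cloud-in-Cell assignment in 1D; returns the density contrast delta."""
    dx = boxsize / ngrid
    s = x / dx - 0.5                 # position in cell-centred grid units
    i = np.floor(s).astype(int)
    w = s - i                        # linear weight to the right-hand cell
    rho = np.zeros(ngrid)
    np.add.at(rho, i % ngrid, 1.0 - w)
    np.add.at(rho, (i + 1) % ngrid, w)
    rho /= rho.mean()
    return rho - 1.0

def power_1d(delta, boxsize):
    """|delta_k|^2 with a simple box normalisation."""
    dk = np.fft.rfft(delta) / delta.size
    k = 2 * np.pi * np.arange(dk.size) / boxsize
    return k, boxsize * np.abs(dk)**2

# Sanity check: uniformly spaced particles give delta = 0 and zero power.
x = (np.arange(64) + 0.5) * (250.0 / 64)
delta = cic_density_1d(x, 64, 250.0)
assert np.allclose(delta, 0.0)
```

A production measurement would additionally deconvolve the CIC window and correct for aliasing and shot noise, which are omitted here.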
\n\n\n\n\\begin{figure*}[!htb]\n\\centering\n\\includegraphics[ width=\\linewidth]{Pkdelta.pdf}\n\\caption{ The density power spectrum from simulations (dotted-starred, black) and various LPT recipes: ZA (solid, blue), 2LPT (solid, red), 3LPT (solid, green), 2LPTs (dashed, cyan), 3LPTs (dashed, yellow) and ALPT (dotted-dashed, violet). The subplots are for $z=2$, 1, and 0, respectively (from left to right). All are normalized with respect to the linear density power spectrum. }\n\\label{fig:Pkdelta}\n\\end{figure*} \n\nAs far as the density power spectrum in the mildly nonlinear regime is concerned, the various LPT recipes still fall short of the simulation results. The two variants examined here do not give any significant improvement in the mildly nonlinear regime, where standard LPT is known to break down due to severe shell crossing. This is notable because, for both variants, information from the $N$-body simulation has already been used: in LPTs, the effective potential is derived from the fitting formulas Eqs.~\\ref{eq:2LPTs} and \\ref{eq:3LPTs}, while in ALPT, the primary motivation is that the scatter plot of $ \\nabla \\cdot \\mb{ \\Psi} $ (see also the next subsection) from the spherical collapse model agrees well with simulations \\cite{Neyrinck2012}. Since this information comes from statistics in which some averaging has been performed, and shell crossing is a highly nonlinear process, it may not be surprising that these effective approaches fail for other statistics, such as the density power spectrum. This suggests that detailed modeling of the small-scale physics is required in order to improve the standard LPT. 
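For reference, the two ingredients of the ALPT recipe examined above, the Gaussian blending of Eq.~\ref{eq:ALPT} and the clipped spherical-collapse divergence, can be sketched as follows (scalar stand-ins for the Fourier- and real-space fields; illustrative only):

```python
import numpy as np

def alpt_blend(psi_lpt_k, psi_sc_k, kmag, r_s=3.0):
    """Eq. eq:ALPT: the Gaussian low-pass W keeps the LPT displacement
    on large scales and the spherical-collapse one on small scales."""
    W = np.exp(-0.5 * (kmag * r_s)**2)
    return W * psi_lpt_k + (1.0 - W) * psi_sc_k

def div_psi_sc(D, lap_phi1):
    """Spherical-collapse divergence 3[(1 - (2/3) D lap(phi1))^(1/2) - 1],
    with the square-root argument clipped at zero where negative."""
    arg = np.maximum(1.0 - (2.0 / 3.0) * D * np.asarray(lap_phi1, dtype=float), 0.0)
    return 3.0 * (np.sqrt(arg) - 1.0)

# Limiting behaviour: k -> 0 gives pure LPT, k r_s >> 1 gives pure SC,
# and fully collapsed regions saturate at div(Psi_SC) = -3.
assert np.isclose(alpt_blend(1.0, 5.0, 0.0), 1.0)
assert np.isclose(alpt_blend(1.0, 5.0, 10.0), 5.0)
assert np.isclose(div_psi_sc(1.0, 2.0), -3.0)
```

The clipping of the square root is what saturates the spherical-collapse $\nabla \cdot \mb{ \Psi} $ at $-3$ in fully collapsed regions.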
\n\n\n\n\\subsection{ Scatter plot of $ \\nabla \\cdot \\mb{ \\Psi} $ }\n\\label{sec:ScatterDivPsi}\n\nAs we saw previously, the vector part of the displacement field is small, so in this section we focus on $ \\nabla \\cdot \\mb{ \\Psi} $, which captures all the information if $\\mb{ \\Psi} $ is potential. For both $ \\mb{ \\Psi} _{\\rm fin } $ and $\\mb{ \\Psi} _{\\rm ini } $, the divergence is taken with respect to the Lagrangian coordinate $\\mathbf{q}$. In this section, we shall explore the information that can be obtained from the scatter plot between $ \\nabla \\cdot \\mb{ \\Psi} _{\\rm fin } $ and $ \\nabla \\cdot \\mb{ \\Psi} _{\\rm ini } $.\n\nWe first examine the various LPT recipes using the scatter plot of $ \\nabla \\cdot \\mb{ \\Psi} $. In Fig.~\\ref{fig:DivPsi_scatter_theory}, we plot $\\nabla \\cdot \\mb{ \\Psi} _{\\rm ini } $ at the initial time against $\\nabla \\cdot \\mb{ \\Psi} _{\\rm fin } $ at the final time, as in \\cite{Neyrinck2012}. We have multiplied $\\nabla \\cdot \\mb{ \\Psi} $ by the linear growth factor $D$ to bring both to $z=0$. \n\nWe compare the scatter plot of $\\nabla \\cdot \\mb{ \\Psi} $ obtained from the simulation against those from the LPT recipes. For $\\nabla \\cdot \\mb{ \\Psi} $ from the simulation, the scatter increases as redshift decreases. The relation between $\\nabla \\cdot \\mb{ \\Psi} _{\\rm ini} $ and $\\nabla \\cdot \\mb{ \\Psi} _{\\rm fin} $ is linear in ZA. Higher-order LPT, such as 2LPT and 3LPT, depends on other invariants of the deformation tensor as well, not just its trace, and so there is scatter in the relation. There is less scatter in all the LPT recipes than in the simulation. The mean relation is roughly quadratic for 2LPT and cubic for 3LPT \\cite{Neyrinck2012}. At low redshifts, these behaviors at the positive and negative ends of $\\nabla \\cdot \\mb{ \\Psi} _{\\rm ini} $ deviate from the simulation markedly. 
In LPTs, thanks to the suppression factor, the deviations from the simulation results at the ends are reduced, and so the agreement with simulations is improved. We also show the scatter obtained from ALPT. The scatter in ALPT follows the mean of the simulations closely. We also note that the scatter in ALPT is much reduced, as in the spherical approximation only the trace of the deformation tensor appears. The fact that the spherical collapse model tracks the mean of the scatter plot well was the original motivation for ALPT \\cite{Neyrinck2012,KitauraSreffen2012}. \n\n\n\nRef.~\\cite{Neyrinck2012} reported some differences between the scatter plots constructed using the FFT and FD methods. In \\cite{Neyrinck2012}, the derivatives were computed using the FD method, and an accumulation of points was found around $\\nabla \\cdot \\Psi_{\\rm fin} = -3$, where $\\Psi_{\\rm fin} $ is the physical displacement field without the linear extrapolation factor. Ref.~\\cite{Neyrinck2012} also pointed out that, when spectral derivatives are used, there is no saturation around $ -3 $. Given the better precision in reconstruction for the FFT method described in Appendix \\ref{sec:TestCases}, we use spectral derivatives here. \n\n\n\\begin{figure*}[!htb]\n\\centering\n\\includegraphics[ width=\\linewidth]{DivPsi_scatter_theory.png}\n\\caption{The scatter plot between the initial $\\nabla \\cdot \\mb{ \\Psi} _{\\rm ini } $ and the final $\\nabla \\cdot \\mb{ \\Psi} _{\\rm fin } $. The simulation results are shown as yellow circles. The three columns correspond to $z=2$, 1, and 0, respectively (from left to right). On top of the simulation results, we show the corresponding scatter obtained from ZA, 2LPT, 3LPT, 2LPTs, 3LPTs, and ALPT (green dots, from top to bottom). Both $\\nabla \\cdot \\mb{ \\Psi} _{\\rm ini } $ and $\\nabla \\cdot \\mb{ \\Psi} _{\\rm fin } $ have been multiplied by the appropriate linear growth factor $D$ to bring them to $z$=0. 
}\n\label{fig:DivPsi_scatter_theory}\n\end{figure*} \n\nIn the rest of the section, we shall explore the information about the various kinds of objects in the scatter plot. In LPT, the Eulerian density is obtained from the mass conservation equation as \n\begin{equation}\n1 + \delta(\mathbf{x} ) = \frac{ 1 }{J } , \n\end{equation}\nwhere $J$ is the Jacobian determinant \n\begin{equation}\nJ = \det \Big( \frac{ \partial \mathbf{x} }{ \partial \mathbf{q} } \Big) .\n\end{equation}\nIn ZA, $J$ is given by \n\begin{equation}\nJ = ( 1 - D \lambda_1 ) ( 1 - D \lambda_2 ) ( 1 - D \lambda_3 ), \n\end{equation}\nwhere the $\lambda_i $ are the eigenvalues of the deformation tensor $\nabla_{ij} \phi^{(1)} $, ordered such that $\lambda_1 \geq \lambda_2 \geq \lambda_3 $. By examining the eigenvalues of the deformation tensor, one can classify the collapsed structures. The vanishing of a factor in $J$ implies that the axis associated with that eigenvalue has collapsed. We assume that all the cosmic structures can be classified into knots (3 collapsed axes), filaments (2 collapsed axes), sheets (1 collapsed axis) and voids (no collapsed axis). We can set cuts on the eigenvalues of the deformation tensor to select these objects, and explore how they are distributed in the scatter plot. \n\n\nIt is important to point out that this kind of classification is based on the analysis at one scale only. As pointed out in \cite{LeeShandarin1998} for the case of collapsed objects, this analysis suffers from the cloud-in-cloud problem in the Press-Schechter argument. That is, the identified local structure may be hosted within a structure of another kind. Thus objects identified here may not agree with those from the more sophisticated identification algorithms (see for example \cite{Cautunetal2013,TempelStoicaetal2013} and references therein). 
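The counting rule just described (an axis is deemed collapsed where its factor in $J$ vanishes, i.e. where $D\lambda_i \geq 1$) is easy to prototype. The Python sketch below is our own illustration, not the paper's code: it classifies cells of a synthetic random tensor field, standing in for the deformation tensor of a real realization, into knots, filaments, sheets and voids by counting eigenvalues at or above $t = 1/D$:

```python
import numpy as np

def classify_cells(deform, D):
    """Classify Lagrangian cells by the number of collapsed axes.

    deform : (N, 3, 3) symmetric tensors standing in for the deformation
             tensor at N grid points (synthetic here, not a real phi^(1)).
    D      : linear growth factor; an axis counts as collapsed when its
             eigenvalue satisfies D * lambda >= 1, i.e. lambda >= 1/D.
    """
    t = 1.0 / D
    lam = np.linalg.eigvalsh(deform)          # (N, 3), ascending per cell
    n_collapsed = (lam >= t).sum(axis=1)      # 0, 1, 2 or 3 per cell
    names = np.array(["void", "sheet", "filament", "knot"])
    return names[n_collapsed]

rng = np.random.default_rng(0)
A = rng.normal(scale=0.8, size=(10000, 3, 3))
deform = 0.5 * (A + np.swapaxes(A, 1, 2))     # symmetrize

labels = classify_cells(deform, D=1.0)
kinds, counts = np.unique(labels, return_counts=True)
print(dict(zip(kinds, counts / labels.size)))  # volume fractions
```

With a genuine Gaussian $\phi^{(1)}$ these fractions would follow the Doroshkevich distribution discussed below; here the numbers are meaningless and only the bookkeeping is shown.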
Indeed, to get the right abundance of collapsed objects, a large factor is required \cite{LeeShandarin1998}. With this caveat in mind, we shall use this simple classification here. \n\n\n In Table \ref{tab:CollaspedStructure}, we show the classification of the collapsed structures using the eigenvalues of the deformation tensor. This is based on the analysis of the field at the scale of the grid $\sim 1.5 \, \mathrm{Mpc} \, h^{-1} $. For example, the condition that $\lambda_1 \geq t = 1\/D $ means that at least 1 axis has not collapsed. These kinds of objects include filaments, sheets and voids. The probability distribution of the ordered $\lambda_i$ for a Gaussian field has been calculated \cite{Doroshkevich1970,LeeShandarin1998}. By integrating the probability distribution (Eqs.~13, 14, and 15 in \cite{LeeShandarin1998}) from the threshold $t$ to infinity, we get the fractions in Table \ref{tab:CollaspedStructure}. In Eqs.~13, 14, and 15 of \cite{LeeShandarin1998}, there is a free parameter $\sigma$, the rms variance of the density. Using the $\sigma$ obtained by computing the variance of $\nabla^2 \phi^{(1)} $ at the grid scale, we find that the computed fractions agree with the direct measurements very well, within 0.5\%. We note that more than 99\% of the Lagrangian volume belongs to the group containing sheets, filaments and voids. That is, less than 1\% of the Lagrangian volume collapses to form halos. As redshift decreases, the fraction of cosmic voids decreases, while that of the sheets and filaments increases. From Table \ref{tab:CollaspedStructure}, we deduce that at $z=0$, 0.8\% of the Lagrangian volume forms knots, 15\% forms sheets, 51\% forms filaments and 33\% forms voids. This is in line with the visual expectation that the large scale structure of the cosmic web is dominated by filaments and voids. \n\n\n\n\n\begin{table*}\n\caption{ Fractions of the Lagrangian volume that form various large scale structures. 
Classification of collapsed objects based on the eigenvalues, $\lambda_i $, of the deformation tensor. $t$ is the threshold $1\/ D$. }\n\label{tab:CollaspedStructure}\n\begin{ruledtabular}\n\begin{tabular}{ |l|l|l|l| }\n & sheets, filaments and voids & filaments and voids & voids \\\\ \n & $ \lambda_1 \geq t $ & $ \lambda_1 \geq \lambda_2 \geq t$ & $ \lambda_1 \geq \lambda_2 \geq \lambda_3 \geq t$ \\\\\n\hline \n $z=2$ & 0.9999 & 0.990 & 0.79 \\\\\n $z=1$ & 0.9986 & 0.946 & 0.56 \\\\\n $z=0$ & 0.9917 & 0.842 & 0.33 \\\\\n\end{tabular}\n\end{ruledtabular}\n\end{table*}\n\n\n\n\n In Fig.~\ref{fig:DivPsi_scatter_V}, we show the scatter plot of $\mb{ \Psi} $ for voids on top of the full simulation results. The voids are mostly distributed on the positive side of $\nabla \cdot \mb{ \Psi} _{\rm ini } $. As the redshift decreases, the fraction decreases and they move towards the more positive end of $\nabla \cdot \mb{ \Psi} _{\rm ini } $. Since voids are regions that have not yet collapsed, one may think that they undergo less shell crossing than some arbitrary region. This idea is borne out in Fig.~\ref{fig:DivPsi_scatter_V}. The scatter in the full simulation decreases as $\nabla \cdot \mb{ \Psi} _{\rm ini } $ increases, and the void region corresponds to the positive end of $\nabla \cdot \mb{ \Psi} _{\rm ini } $. \n\n \n\begin{figure}[!htb]\n\centering\n\includegraphics[ width=\linewidth]{DivPsi_scatter_V.png}\n\caption{ The scatter plot of voids (red) on top of the simulation results (blue). The cyan line corresponds to the 1LPT result. \n }\n\label{fig:DivPsi_scatter_V}\n\end{figure} \n\nIn Fig.~\ref{fig:DivPsi_scatter_HF}, we show a similar plot for knots and filaments. These collapsed structures are mainly distributed around the negative end of $\nabla \cdot \mb{ \Psi} _{\rm ini } $. As redshift decreases, the region expands from the negative $\nabla \cdot \mb{ \Psi} _{\rm ini } $ end to the positive side. 
These collapsed objects are virialized and have undergone more shell crossing. They show less dependence on the initial $\nabla \cdot \mb{ \Psi} _{\rm ini } $, as manifested by the larger scatter.\n\n \n\begin{figure}[!htb]\n\centering\n\includegraphics[ width=\linewidth]{DivPsi_scatter_HF.png}\n\caption{ The scatter plot of knots and filaments (red) on top of the simulation results (blue). The cyan line corresponds to the 1LPT result.\n }\n\label{fig:DivPsi_scatter_HF}\n\end{figure} \n\n\section{Conclusions}\n\label{sec:Conclusions}\nThe Lagrangian displacement field $\mb{ \Psi} $ is the central object in LPT. LPT is very successful at high redshifts, but it performs poorly at low redshifts due to severe shell crossing. After shell crossing, the standard LPT breaks down. \n\nIn order to gain insight into $\mb{ \Psi} $ when shell crossing is not negligible, we measure $\mb{ \Psi} $ directly from an $N$-body simulation in this paper. As $\mb{ \Psi} $ is potential in LPT to a very good approximation, and shell crossing can generate a non-negligible amount of vorticity, we decompose $\mb{ \Psi} $ into scalar and vector parts. We use the power spectrum of $\mb{ \Psi} $ to quantify the effect of shell crossing. We find that at large scales, the numerical results agree well with 1-loop LPT calculations. However, shell crossing becomes important at low redshifts, and the agreement deteriorates quickly. At $z=1$, the 1-loop power spectrum of $\mb{ \Psi} $ is about 10\% higher than the results from the numerical $\mb{ \Psi} $ at around $k\sim 0.3\, \, \mathrm{Mpc}^{-1} \, h $, and this occurs at $k\sim 0.1 \, \, \mathrm{Mpc}^{-1} \, h $ at $z=0$. This is consistent with the well-known result that the LPT density power spectrum at low redshifts yields much lower power than the $N$-body results due to serious shell crossing. 
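Since the scalar/vector split of $\mb{ \Psi}$ is central to these conclusions, a minimal sketch of such a decomposition with spectral derivatives may be useful. The code below is our own illustration (periodic box, grid units, no cosmology), not the paper's pipeline; the longitudinal part is obtained by projecting $\tilde{\mb{\Psi}}(\mathbf{k})$ onto $\hat{\mathbf{k}}$:

```python
import numpy as np

def helmholtz_split(psi):
    """Split a periodic displacement field psi (shape (3, n, n, n), grid
    units) into scalar (longitudinal) and vector (transverse) parts,
    psi = psi_s + psi_v, using spectral derivatives."""
    n = psi.shape[1]
    k = np.fft.fftfreq(n) * 2 * np.pi
    kvec = np.stack(np.meshgrid(k, k, k, indexing="ij"))
    k2 = (kvec ** 2).sum(axis=0)
    k2[0, 0, 0] = 1.0                         # avoid 0/0 at the k = 0 mode
    psi_k = np.fft.fftn(psi, axes=(1, 2, 3))
    kdotpsi = (kvec * psi_k).sum(axis=0)      # k . Psi(k)
    long_k = kvec * kdotpsi / k2              # projection onto k-hat
    long_k[:, 0, 0, 0] = 0.0
    psi_s = np.fft.ifftn(long_k, axes=(1, 2, 3)).real
    return psi_s, psi - psi_s

# Demo: the spectral gradient of a random periodic potential is purely
# scalar, so the recovered vector part should vanish to machine precision.
n = 15                                        # odd n avoids Nyquist subtleties
rng = np.random.default_rng(0)
phi = rng.normal(size=(n, n, n))
k = np.fft.fftfreq(n) * 2 * np.pi
kvec = np.stack(np.meshgrid(k, k, k, indexing="ij"))
psi = np.fft.ifftn(1j * kvec * np.fft.fftn(phi), axes=(1, 2, 3)).real
psi_s, psi_v = helmholtz_split(psi)
print(np.abs(psi_v).max())                    # vanishes to machine precision
```

An odd grid size is used so that no Nyquist plane exists and the gradient field is exactly longitudinal; on production grids one would handle the Nyquist modes explicitly.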
\n\n\n We also detect the generation of the vector mode due to shell crossing, although its magnitude is still much smaller than that of the scalar mode in the mildly nonlinear regime. Our results show that the potential approximation is still good even when shell crossing is non-negligible in the mildly nonlinear regime. Note that there is a vector contribution in third-order LPT. The leading contribution from the vector part of LPT to the power spectrum of $\mb{ \Psi} $ is of 2-loop order. We find that this 2-loop contribution is much smaller than the signal we detected. For example, at $z=1$, the vector power spectrum of $\mb{ \Psi} $ contributes 10\% of the total power spectrum at $k \sim 2.5 \, \, \mathrm{Mpc}^{-1} \, h $, and this happens at $ k \sim 1 \, \, \mathrm{Mpc}^{-1} \, h $ at $z=0$. The LPT contribution at these scales is about an order of magnitude smaller than the signal detected. Also, the large scale vector power spectrum is found to scale as $ D^{9.5} $, while the LPT vector contribution is expected to scale as $D^6$. \n\n\nWe examined the standard LPT recipes and two of their variants. In one of the variants, we incorporate the information of the power spectrum of $\mb{ \Psi} $ to improve the generation of catalogs with LPT. We apply a power suppression factor to the displacement potential, and the functional form of the suppression factor is obtained by fitting to the numerical power spectrum of $\mb{ \Psi} $. The suppression factor can reduce the deviation from simulations in the void and overdense regions, as can be seen from the scatter plot between $\nabla \cdot \Psi_{\rm ini} $ and $\nabla \cdot \Psi_{\rm fin} $. We used the density power spectrum, which is one of the most important physical observables, to gauge the performance of LPT and its variants. However, various LPT recipes still yield power much lower than simulations at redshifts close to 0. 
The LPT variants yield only limited improvement over the standard ones after the onset of shell crossing, even though some information from the $N$-body simulation has been incorporated in the variants. This is not very surprising given that shell crossing is a highly nonlinear process. Since the information is obtained by averaging certain statistics, it is not guaranteed that other statistics, such as the density power spectrum, will be reproduced correctly. Our exercises indeed suggest that they are not. To improve upon the standard LPT, this points to the need for more detailed modeling beyond the simple phenomenological approach.\n\n\n\n\section*{Acknowledgment} \nI thank Vincent Desjacques, Cornelius Rampf, Roman Scoccimarro and Xin Wang and the anonymous referee for commenting on the draft of the paper. I also thank Vincent Desjacques for providing the simulation data used in this work. This work is supported by the Swiss National Science Foundation. \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe notion of the Bach tensor was first introduced by Rudolf Bach in 1921 (see \cite{Bach}) when studying the so-called \emph{conformal gravity}. That is, instead of using the Hilbert-Einstein functional, one considers the functional\n$$\mathcal{W}(g) = \int_{M^4} |W (g)|^2 dv_g$$\non $4$-dimensional manifolds. The critical points of this functional are characterized by the vanishing of a certain symmetric $2$-tensor $B_g$. The tensor $B_g$ is usually referred to as the Bach tensor, and the metric is called \emph{Bach-flat} if $B_g$ vanishes. \\\\\n\nLet $(M^n,g)$ be an $n$-dimensional Riemannian manifold ($n\geq 4$). 
The \emph{Bach tensor} is defined to be\n\begin{align}\nB_{jk} = \frac{1}{n-3} \nabla^i \nabla^l W_{ijkl} + W_{ijkl}S^{il},\n\end{align}\nwhere \n\begin{align}\nS_{jk} = \frac{1}{n-2} \left(R_{jk} - \frac{1}{2(n-1)} R g_{jk}\right)\n\end{align}\nis the Schouten tensor.\\\\\n\nUsing the \emph{Cotton tensor} \n\begin{align}\nC_{ijk} = \nabla_i S_{jk} - \nabla_j S_{ik}\n\end{align}\nand the relation\n\begin{align}\n\nabla^l W_{ijkl} = (n-3) C_{ijk},\n\end{align}\nwe can extend the definition of the Bach tensor so that it also makes sense on $3$-dimensional manifolds:\n\begin{definition}\nFor any $n\geq 3$, the Bach tensor is defined to be\n\begin{align}\nB_{jk} = \nabla^i C_{ijk} + W_{ijkl}S^{il}.\n\end{align}\nWe say a metric is \emph{Bach-flat} if its Bach tensor vanishes.\\\\\n\end{definition}\n\nTypical examples of Bach-flat metrics are Einstein metrics and locally conformally flat metrics. Due to the conformal invariance of Bach-flatness on $4$-manifolds, metrics conformal to Einstein metrics are also Bach-flat. For $4$-dimensional manifolds, the class also includes half-locally conformally flat metrics. In general, Tian and Viaclovsky studied the moduli space of $4$-dimensional Bach-flat manifolds (cf. \cite{T-V_1, T-V_2}). Besides these known \"trivial\" examples, not many examples of generic Bach-flat manifolds are known so far. In fact, in some particular situations, one would expect rigidity phenomena to occur.\\\\\n\nIn \cite{Kim}, Kim shows that a complete non-compact $4$-dimensional Bach-flat manifold $(M, g)$ with zero scalar curvature and positive Yamabe constant has to be flat, if the $L^2(M,g)$-norm of its Riemann curvature tensor is sufficiently small. This result can be easily extended to any dimension $n \geq 3$. 
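As a sanity check of the first family of examples mentioned above, one can verify directly from the definition that Einstein metrics are Bach-flat in any dimension $n \geq 3$ (recall that $R$ is constant for an Einstein metric by Schur's lemma): with $R_{jk} = \frac{R}{n} g_{jk}$,

```latex
\begin{align*}
S_{jk} &= \frac{1}{n-2}\left(\frac{R}{n} - \frac{R}{2(n-1)}\right) g_{jk}
        = \frac{R}{2n(n-1)}\, g_{jk}, \\
C_{ijk} &= \nabla_i S_{jk} - \nabla_j S_{ik} = 0, \\
W_{ijkl}S^{il} &= \frac{R}{2n(n-1)}\, W_{ijkl}\, g^{il} = 0,
\end{align*}
```

the last line by the tracelessness of the Weyl tensor, so $B_{jk} = \nabla^i C_{ijk} + W_{ijkl}S^{il} = 0$.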
\\\n\nKim's proof is based on the classic idea that one can get global rigidity from local estimates: applying the ellipticity of the Bach-flat equation, the Sobolev inequality, and the smallness of $||Rm||_{L^2(M, g)}$, one can get the estimate\n\begin{align*}\n||Rm||_{L^4(B_r(p), g)} \leq \frac{C}{r}||Rm||_{L^2(M, g)}\n\end{align*}\nfor any fixed $p\in M$ and $r > 0$. Now the conclusion follows by letting $r \rightarrow \infty$.\\\\\n\nThis method can also be used in various problems; for example, see \cite{Chen}. However, note that the assumption of non-compactness is essential here. One cannot get the rigidity by simply letting $r \rightarrow \infty$ when the manifold is compact without boundary, for instance.\\\\\n\nIs it possible for us to have a result similar to Kim's but on closed manifolds? Here by \emph{closed manifolds}, we mean compact manifolds without boundary. In fact, Singer proved that even dimensional closed positive Einstein manifolds with non-vanishing Euler characteristic have to be locally spherical, provided the $L^{\frac{n}{2}}$-norm of the Weyl tensor is small (cf. \cite{Singer}). Since Einstein metrics are a special case of Bach-flat metrics, this result suggests that this phenomenon might occur in a larger class. \\\\\n\nApplying a global estimate for symmetric $2$-tensors (see Proposition \ref{prop:ineq_symmetric_2_tensor_est}), we can prove the following result:\n\newtheorem*{thm_A}{\bf Theorem A}\n\begin{thm_A}\label{thm:sphere_thm_Bach_flat_L^infty}\nSuppose $(M^n, g)$ is a closed Bach-flat Riemannian manifold with constant scalar curvature\n$$R_g = n(n-1).$$\nIf\n\begin{align}\n||W||_{L^\infty(M, g)} + ||E||_{L^\infty(M, g)} < \varepsilon_0 (n):=\frac{n-1}{4}, \n\end{align}\nthen $(M, g)$ is isometric to a quotient of the round sphere $\mathbb{S}^n$.\n\end{thm_A}\n\n\begin{remark}\nNote that in Theorem A we do not assume that the Yamabe constant is uniformly bounded below by a positive constant. 
This assumption will be needed in Theorem B. It is equivalent to the existence of a uniform Sobolev inequality (see section 4), which is applied frequently in the proof of Theorem B.\n\end{remark}\n\nAnother result assumes integral conditions instead:\n\newtheorem*{thm_B}{\bf Theorem B}\n\begin{thm_B}\label{thm:sphere_thm_Bach_flat_L^{n\/2}}\nSuppose $(M^n, g)$ is a closed Bach-flat Riemannian manifold with constant scalar curvature\n$$R_g = n(n-1).$$ Assume that there is a constant $\alpha_0$ such that its Yamabe constant satisfies\n\begin{align}\nY(M, [g]) \geq \alpha_0 > 0.\n\end{align}\nThen $(M,g)$ is isometric to a quotient of the round sphere $\mathbb{S}^n$, if\n\begin{align}\label{presumption:tau_0}\n||W||_{L^{\frac{n}{2}}(M,g)} + ||E||_{L^{\frac{n}{2}}(M,g)} < \tau_0 (n, \alpha_0):= \frac{3\alpha_0}{32n(n-1)}.\n\end{align}\n\end{thm_B}\n\n\begin{remark}\nBach-flat metrics are among the typical examples of the so-called \emph{critical metrics} (cf. \cite{T-V_1}). By replacing the presumption of \emph{Bach-flatness} with \emph{harmonic curvature}, which refers to the vanishing of the Cotton tensor when the scalar curvature is a constant, the corresponding versions of Theorems A and B are still valid without any essential difficulty.\n\end{remark}\n\nIn particular, applying Theorem B to $4$-dimensional manifolds, we can partially recover the well-known $4$-dimensional conformal sphere theorem of Chang-Gursky-Yang (cf. \cite{C-G-Y}; for a generalization see \cite{C-Z}):\n\newtheorem*{thm_C}{\bf Theorem C}\n\begin{thm_C}\label{thm:sphere_thm_Bach_flat_4dim}\nSuppose $(M^4, g)$ is a closed Bach-flat Riemannian manifold. 
Assume that there is a constant $\alpha_0$ such that its Yamabe constant satisfies\n\begin{align}\nY(M, [g]) \geq \alpha_0 > 0.\n\end{align}\nThen $(M,g)$ is conformal to the round sphere $\mathbb{S}^4$ or its canonical quotient $\mathbb{R}P^4$, if\n\begin{align}\n\int_{M^4} |W_g|^2 dv_g < \frac{32}{3} \pi^2 (\chi(M^4) - 2) + \frac{\alpha_0}{192}.\n\end{align}\n\end{thm_C}\n\n\begin{remark}\nIt was shown in \cite{C-G-Y} that $(M^4, g)$ is conformal to $(\mathbb{C}P^2, g_{FS})$ or a manifold covered isometrically by $S^1\times S^3$ endowed with the canonical product metric, if we assume\n\begin{align}\n\int_{M^4} |W_g|^2 dv_g = 16 \pi^2 \chi(M^4) \n\end{align}\ninstead.\n\end{remark}\n\n\paragraph{\textbf{Acknowledgement}}\n\nThe author would like to express their appreciation to Professor Huang Xian-Tao for his interest in this problem and for inspiring discussions.\\\\\n \n \n\n\section{$\theta$-Codazzi tensor and related inequality}\nWe define a concept which generalizes the classic \emph{Codazzi tensor}:\n\begin{definition}\nFor any $\theta \in \mathbb{R}$, we say a symmetric $2$-tensor $h \in S_2(M)$ is a $\theta$-Codazzi tensor if\n\begin{align}\nC_\theta (h)_{ijk} := \nabla_i h_{jk} - \theta \nabla_j h_{ik} = 0.\n\end{align}\nIn particular, $h$ is referred to as a \emph{Codazzi tensor} or an \emph{anti-Codazzi tensor} if $\theta = 1$ or $\theta = -1$, respectively.\n\end{definition}\n\nThe motivation for us to define this notion is the following identity associated to it:\n\begin{lemma}\label{lem:theta_Codazzi_identiy}\nSuppose $(M, g)$ is a closed Riemannian manifold with constant scalar curvature\n$$R_g = n(n-1)\lambda.$$\nThen for any $h \in S_2(M)$ and $\theta \in \mathbb{R}$,\n\begin{align}\n &\int_M \left(|\nabla h|^2 - \frac{1} {1 + \theta^2} |C_\theta (h)|^2 \right) dv_{g} \n \\\n =& \frac{2 \theta} {1 + \theta^2}\int_M \left[ |\delta h|^2 + W(\overset{\circ}{h},
\\overset{\\circ}{h}) + \\frac{2}{n-2}(tr h) E\\cdot h - \\frac{n}{n-2} tr (E \\times h^2) - n \\lambda |\\overset{\\circ}{h}|^2 \\right] dv_{g} ,\\notag\n\\end{align}\nwhere $\\overset{\\circ}{h} := h - \\frac{1}{n}(tr h) g$ is the traceless part of the tensor $h$.\n\\end{lemma}\n\n\n\\begin{proof}\nWe have \n\\begin{align*}\n&\\int_M \\nabla_i h_{jk} \\nabla^j h^{ik} dv_{g}\\\\\n=& -\\int_M \\nabla_j\\nabla_i h_k^{\\ j} h^{ik} dv_{g}\\\\\n=&-\\int_M ( \\nabla_i \\nabla_j h_k^{\\ j} + R_{j i l}^j h^{\\ l}_k - R_{j i k}^l h_l^{\\ j}) h^{i k} dv_{g}\\\\\n=&-\\int_M ( - \\nabla_i (\\delta h)_k + R_{i l} h^{\\ l}_k - R_{j i k l} h^{j l}) h^{ik} dv_{g}\\\\\n=&\\int_M \\left[|\\delta h|^2 - \\left( E_{i l} h^{\\ l}_k + (n-1) \\lambda g_{il} h^{\\ l}_k \\right) h^{ik}\\right] dv_{g} \\\\\n&+ \\int_M \\left( W_{jikl} + \\frac{2}{n-2}(E_{jl}g_{ik} - E_{jk}g_{il}) + \\lambda ( g_{jl} g_{ik} - g_{jk } g_{il} ) \\right) h^{jl}h^{ik} dv_{g}\\\\\n=& \\int_M \\left[|\\delta h|^2 + W (h, h) + \\lambda ( (tr h)^2 - n |h|^2 ) + \\frac{2}{n-2}(tr h) E\\cdot h - \\frac{n}{n-2} tr (E \\times h^2)\\right] dv_{g}\\\\\n=& \\int_M \\left[|\\delta h|^2 + W(\\overset{\\circ}{h}, \\overset{\\circ}{h}) + \\frac{2}{n-2}(tr h) E\\cdot h - \\frac{n}{n-2} tr (E \\times h^2) - n \\lambda |\\overset{\\circ}{h}|^2 \\right] dv_{g}.\n\\end{align*}\nThus for any $\\theta \\in \\mathbb{R}$,\n\\begin{align*}\n& \\int_M |C_\\theta (h)|^2 dv_g\\\\\n=& \\int_M |\\nabla_i h_{jk} - \\theta \\nabla_j h_{ik}|^2 dv_{g}\\\\\n=& \\int_M \\left[ (1 + \\theta^2)|\\nabla h|^2 - 2 \\theta \\nabla_i h_{jk}\\nabla^j h^{ik}\\right] dv_{g}\\\\\n=& \\int_M \\left[(1 + \\theta^2)|\\nabla h|^2 - 2 \\theta \\left(|\\delta h|^2 + W(\\overset{\\circ}{h}, \\overset{\\circ}{h}) + \\frac{2}{n-2}(tr h) E\\cdot h - \\frac{n}{n-2} tr (E \\times h^2) - n \\lambda |\\overset{\\circ}{h}|^2 \\right) \\right] dv_{g}.\n\\end{align*}\nThat is,\n\\begin{align*}\n &\\int_M \\left(|\\nabla h|^2 - \\frac{1} {1 + \\theta^2} |C_\\theta (h)|^2 
\right) dv_{g} \n \\\n =& \frac{2 \theta} {1 + \theta^2}\int_M \left[ |\delta h|^2 + W(\overset{\circ}{h}, \overset{\circ}{h}) + \frac{2}{n-2}(tr h) E\cdot h - \frac{n}{n-2} tr (E \times h^2) - n \lambda |\overset{\circ}{h}|^2 \right] dv_{g} .\n\end{align*}\n\end{proof}\n\nFrom this, we get the following inequality:\n\begin{proposition}\label{prop:ineq_symmetric_2_tensor_est}\nSuppose $(M, g)$ is a closed Riemannian manifold with constant scalar curvature\n$$R_g = n(n-1)\lambda.$$\nThen for any $h \in S_2(M)$ and $\theta \in \mathbb{R}$,\n\begin{align}\n \int_M |\nabla h|^2 dv_{g} \n \geq & \frac{2 \theta} {1 + \theta^2}\int_M \left[ |\delta h|^2 + W(\overset{\circ}{h}, \overset{\circ}{h} ) + \frac{2}{n-2}(tr h) E\cdot h- \frac{n}{n-2} tr (E \times h^2) - n\lambda|\overset{\circ}{h}|^2 \right]dv_{g},\n\end{align}\nwhere equality holds if and only if $h$ is a $\theta$-Codazzi tensor.\n\end{proposition}\n\nIn particular, we have\n\begin{corollary}\label{cor:sym_tensor_est}\nSuppose $(M, g)$ is a closed Riemannian manifold with constant scalar curvature\n$$R_g = n(n-1)\lambda.$$\nThen the traceless part of the Ricci tensor satisfies \n\begin{align}\n \int_M |\nabla E|^2 dv_{g} \n \geq & \frac{2 \theta} {1 + \theta^2}\int_M \left[ W(E, E ) - \frac{n}{n-2} tr E^3 - n\lambda|E|^2 \right]dv_{g}.\n\end{align}\nIn particular, when $\theta = 1$, the equality holds if and only if $g$ is of harmonic curvature.\n\end{corollary}\n\n\begin{proof}\nBy the second Bianchi identity, we can easily see that \n$$\delta E = - \frac{n-2}{2n} d R_g = 0.$$\nNote that $tr E = 0$; thus the conclusion follows.\n\nWhen $\theta = 1$, $E$ is a Codazzi tensor if and only if the Cotton tensor vanishes:\n$$C_{ijk} = \frac{1}{n-2}\underset{i,j}{Alt}\left( \nabla_i R_{jk} - \frac{1}{2(n-1)} g_{jk}\nabla_i R\right) = 0.$$\n\end{proof}\n\n\n\section{$L^\infty$-sphere theorem}\n\nWe can rewrite the Bach tensor in terms of the
traceless Ricci tensor:\n\begin{lemma}\label{lem:Bach_expression}\nThe Bach tensor can be expressed as follows:\n\begin{align}\nB_g =& \frac{1}{n-2}\Delta_g E - \frac{1}{2(n-1)} \left( \nabla^2_g R - \frac{1}{n} g \Delta_g R\right) +\frac{ 2}{n-2} \overset{\circ}{W} \cdot E \\\n\notag &- \frac{n}{ (n-2)^2} \left( E\times E - \frac{1}{n}|E|^2 g \right) - \frac{R}{(n-1)(n-2)} E,\n\end{align}\nwhere $ (\overset{\circ}{W} \cdot E)_{jk} := W_{ijkl}E^{il}$.\n\end{lemma}\n\n\begin{proof}\nBy definition,\n\begin{align*}\n\nabla^i C_{ijk} &= \nabla^i ( \nabla_i S_{jk} - \nabla_j S_{ik} )\\\n&= \Delta_g S_{jk} - ( \nabla_j \nabla_i S^i_k + R_{ijp}^i S_k^p - R_{ijk}^p S_p^i )\\\n&= \Delta_g S_{jk} - \nabla_j \nabla_k tr S - (Ric \times S)_{jk} + (\overset{\circ}{Rm} \cdot S)_{jk},\n\end{align*}\nwhere we used the fact that\n$$\nabla_i S^i_k = \nabla_k tr S$$\nby the contracted second Bianchi identity.\n\nSince\n$$S = \frac{1}{n-2}E + \frac{R}{2n(n-1)} g$$\nand\n$$Rm = W + \frac{1}{n-2} E \owedge g + \frac{R}{2n(n-1)}g \owedge g,$$\nthe conclusion follows by substituting them into \n$$B_{jk} = \nabla^i C_{ijk} + W_{ijkl}S^{il}.$$\n\end{proof}\n\nAs the first step, we show that the metric has to be Einstein under the given assumptions:\n\begin{proposition}\label{prop:Bach_flat_Einstein}\nFor $n\geq 3$, there exists a constant $\Lambda_n > 0$ depending only on $n$, such that any closed Bach-flat Riemannian manifold $(M^n, g)$ with constant scalar curvature\n$$R_g = n(n-1)$$ and \n$$||W_g||_{L^\infty(M, g)}+ ||E_g||_{L^\infty(M, g)} < \Lambda_n:= \frac{n}{3}$$\nhas to be Einstein.\n\end{proposition}\n\n\begin{proof}\nSince the scalar curvature $R_g$ is a constant, by Lemma \ref{lem:Bach_expression},\n\begin{align*}\nB_g = \frac{1}{n-2}\Delta_g E +\frac{ 2}{n-2} \overset{\circ}{W} \cdot E - \frac{n}{ (n-2)^2} \left( E\times E - \frac{1}{n}|E|^2 g \right) - \frac{n}{n-2} E = 0.\n\end{align*}\nThat
is,\n\begin{align*}\n\Delta_g E +2\overset{\circ}{W} \cdot E - \frac{n}{ n-2} \left( E\times E - \frac{1}{n}|E|^2 g \right) - n E = 0.\n\end{align*}\nThus,\n\begin{align*}\n-E\Delta_g E = 2 W (E, E) - \frac{n}{ n-2} tr( E^3 ) - n|E|^2\n\end{align*}\nand hence\n\begin{align}\label{eqn:int_nabla_E}\n\int_M |\nabla E|^2 dv_g = -\int_M E\Delta_g E dv_g = \int_M \left(2 W (E, E) - \frac{n}{ n-2} tr( E^3 ) - n |E|^2 \right) dv_g.\n\end{align}\n\nOn the other hand, from Corollary \ref{cor:sym_tensor_est},\n\begin{align*}\n \int_M |\nabla E|^2 dv_{g} \n \geq & \frac{2 \theta} {1 + \theta^2}\int_M \left( W(E, E ) - \frac{n}{n-2} tr E^3 - n|E|^2 \right)dv_{g},\n\end{align*}\nfor any $\theta \in \mathbb{R}$.\n\nTherefore,\n\begin{align}\label{ineq:W(E,E)_|E|^2}\n\frac{2(1 - \theta + \theta^2 )}{(1- \theta)^2} \int_M W(E, E ) dv_g \geq \frac{n }{n-2}\int_M \left( tr E^3 + (n-2)|E|^2 \right)dv_{g}.\n\end{align}\n\nSince\n$$\n\int_M W(E, E ) dv_g \leq ||W||_{L^\infty(M, g)}\int_M |E|^2 dv_g, \n$$\nby taking $\theta = -1$, we get\n$$\n\frac{n }{n-2}\int_M \left( tr E^3 + (n-2)|E|^2 \right)dv_{g} \leq \frac{3}{2}||W||_{L^\infty(M, g)}\int_M |E|^2 dv_g.\n$$\nThat is,\n$$\n\frac{n }{n-2}\int_M tr E^3 dv_{g} \leq \left(\frac{3}{2}||W||_{L^\infty(M, g)} - n \right)\int_M |E|^2 dv_g.\n$$\n\nFrom the inequality \n$$\int_M tr E^3 dv_{g} \geq - \int_M |E|^3 dv_g \geq - ||E||_{L^\infty(M, g)}\int_M |E|^2 dv_g ,$$\nwe have\n$$\left(\frac{3}{2}||W||_{L^\infty(M, g)} + \frac{n}{n-2}||E||_{L^\infty(M, g)} - n\right)\int_M |E|^2 dv_g \geq 0.$$\nTherefore, for any metric $g$ satisfying\n$$||W||_{L^\infty(M, g)} + ||E||_{L^\infty(M, g)} < \Lambda_n:= \frac{n}{3},$$\nwe have $E = 0$.\n\end{proof}\n\nIt is well-known that the Weyl tensor satisfies an elliptic equation on Einstein manifolds (cf. 
\cite{Singer}):\n\begin{lemma}\label{lem:Weyl_eqn_Einstein}\nLet $(M^n,g)$ be an Einstein manifold with scalar curvature\n$$R_g = n(n-1) \lambda.$$\nThen its Weyl tensor satisfies \n\begin{align}\label{Weyl}\n\Delta_g W - 2(n-1)\lambda W - 2 \mathcal{Q} (W) = 0,\n\end{align}\nwhere $\mathcal{Q} (W) := B_{ijkl} - B_{jikl} + \nB_{ikjl} - B_{jkil}$ is a quadratic combination of Weyl\ntensors with $B_{ijkl} := g^{pq}g^{rs} W_{pijr} W_{qkls}$.\\\\\n\end{lemma}\n\nNow we finish this section by proving one of our main theorems:\n\begin{proof}[Proof of Theorem A]\nWe take\n$$\varepsilon_0 :=\min \{ \Lambda_n, \frac{n-1}{4} \} = \frac{n-1}{4}.$$\nFrom Proposition \ref{prop:Bach_flat_Einstein}, we conclude that $g$ is an Einstein metric. Applying Lemma \ref{lem:Weyl_eqn_Einstein}, we have\n\begin{align*}\n- \int_M \langle \Delta_g W - 2(n-1) W, W \rangle dv_g = -2 \int_M \langle \mathcal{Q}(W), W \rangle dv_g \leq 8 \int_M |W|^3 dv_g.\n\end{align*}\nThat is,\n\begin{align}\label{ineq:int_Weyl}\n\int_M \left( |\nabla W|^2 + 2(n-1) |W|^2 \right) dv_g \leq 8 \int_M |W|^3 dv_g.\n\end{align}\n\nNow we have\n\begin{align*}\n2(n-1)\int_M |W|^2 dv_g \leq 8 \int_M |W|^3 dv_g \leq 8 ||W||_{L^\infty(M, g)} \int_M |W|^2 dv_g.\n\end{align*}\nThus the Weyl tensor vanishes, since $$||W||_{L^\infty(M, g)} < \varepsilon_0 =\frac{n-1}{4}.$$\nTherefore, the metric $g$ is locally spherical.\n\end{proof}\n\n\section{$L^{\frac{n}{2}}$-sphere theorem}\n\nLet $(M, g)$ be a Riemannian manifold. 
Suppose the Yamabe constant associated to it satisfies that\n$$\nY(M, [g]) := \\inf_{0 \\not\\equiv u \\in C^\\infty(M)} \\frac{\\int_M \\left(\\frac{4(n-1)}{n-2}|\\nabla u|^2 + R_g u^2 \\right)dv_g}{\\left(\\int_M u^{\\frac{2n}{n-2}} dv_g \\right)^{\\frac{n-2}{n}}} \\geq \\alpha_0 > 0.\n$$\nBy normalizing the scalar curvature such that $R_g = n(n-1)$, we get\n\\begin{align*}\n\\left(\\int_M u^{\\frac{2n}{n-2}} dv_g \\right)^{\\frac{n-2}{n}} \\leq& \\frac{1}{Y(M, [g])} \\int_M \\left(\\frac{4(n-1)}{n-2}|\\nabla u|^2 + R_g u^2 \\right)dv_g \\\\\n=& \\frac{n(n-1)}{Y(M, [g])} \\int_M \\left(\\frac{4}{n(n-2)}|\\nabla u|^2 + u^2 \\right)dv_g \\\\\n\\leq& \\frac{4n(n-1)}{3Y(M, [g])} \\int_M \\left(|\\nabla u|^2 + u^2 \\right)dv_g \\\\\n\\leq& \\frac{4n(n-1)}{3\\alpha_0} \\int_M \\left(|\\nabla u|^2 + u^2 \\right)dv_g \\\\\n\\end{align*}\nDenote $C_S:= \\frac{4n(n-1)}{3\\alpha_0} > 0$, we get the \\emph{Sobolev's inequality}\n\\begin{align}\\label{ineq:Sobolev}\n\\left(\\int_M u^{\\frac{2n}{n-2}} dv_g \\right)^{\\frac{n-2}{n}} \\leq C_S \\int_M \\left(|\\nabla u|^2 + u^2 \\right)dv_g\n\\end{align}\nNote that, the constant $C_S > 0$ only depends on $n$ and $\\alpha_0$ and is independent of the metric $g$.\n\n\n\\begin{lemma}\\label{lem:Bach_flat_L^{n\/2}_Einstein}\nLet $(M^n, g)$ be a Bach flat Riemannian manifold with constant scalar curvature\n$$R_g = n(n-1).$$ Suppose there is a constant $\\alpha_0$ such that its Yamabe constant satisfies that\n\\begin{align}\nY(M, [g]) \\geq \\alpha_0 > 0.\n\\end{align}\nThen $(M^n, g)$ is Einstein, if\n\\begin{align}\n||W||_{L^{\\frac{n}{2}}(M,g)} + ||E||_{L^{\\frac{n}{2}}(M,g)} < \\delta_0:= \\frac{\\alpha_0}{4n(n-1)} = \\frac{1}{3C_S}.\n\\end{align}\n\\end{lemma}\n\n\\begin{proof}\nFrom equation (\\ref{eqn:int_nabla_E}) and \\emph{H\\\"older's inequality},\n\\begin{align*}\n\\int_M |\\nabla E|^2 dv_g &= \\int_M \\left(2 W (E, E) - \\frac{n}{ n-2} tr( E^3 ) - n |E|^2 \\right) dv_g\\\\\n&\\leq \\left( 2||W||_{L^{\\frac{n}{2}}(M,g)} + 
\\frac{n}{ n-2}||E||_{L^{\\frac{n}{2}}(M,g)}\\right) ||E||^2_{L^{\\frac{2n}{n-2}}(M,g)} - n||E||^2_{L^2(M,g)} \\\\\n&\\leq 3 \\delta_0 ||E||^2_{L^{\\frac{2n}{n-2}}(M,g)} - n||E||^2_{L^2(M,g)}.\n\\end{align*}\nBy \\emph{Sobolev's inequality} (\\ref{ineq:Sobolev}) and the \\emph{Kato's inequality},\n$$||E||^2_{L^{\\frac{2n}{n-2}}(M,g)} \\leq C_S \\left(||\\nabla |E| ||^2_{L^2(M,g) }+ ||E||^2_{L^2(M,g)} \\right) \\leq C_S \\left(||\\nabla E ||^2_{L^2(M,g)} + ||E||^2_{L^2(M,g)} \\right).\n$$\nThus, we have\n\\begin{align*}\n||\\nabla E||^2_{L^2(M,g)}\\leq& 3 \\delta_0 C_S \\left(||\\nabla E ||^2_{L^2(M,g)} + ||E||^2_{L^2(M,g)} \\right) - n||E||^2_{L^2(M,g)} \\\\\n=& ||\\nabla E ||^2_{L^2(M,g)} - (n - 1) ||E||^2_{L^2(M,g)} .\n\\end{align*}\nTherefore, $E$ vanishes identically on $M$ and hence $(M,g)$ is Einstein.\n\\end{proof}\n\n\nNow we can show\n\\begin{proof}[Proof of Theorem B]\nFrom Lemma \\ref{lem:Bach_flat_L^{n\/2}_Einstein}, $(M,g)$ has to be Einstein. Now from \\emph{Sobolev's inequality} (\\ref{ineq:Sobolev}), \\emph{Kato's inequality} and inequality (\\ref{ineq:int_Weyl}), we have\n\\begin{align*}\n||W||^2_{L^{\\frac{2n}{n-2}}(M,g)} &\\leq C_S \\int_M \\left( |\\nabla |W||^2 + |W|^2 \\right) dv_g\n\\leq C_S \\int_M \\left( |\\nabla W|^2 + |W|^2 \\right) dv_g \n\\leq 8 C_S \\int_M |W|^3 dv_g .\n\\end{align*} \nApplying \\emph{H\\\"older's inequality},\n\\begin{align*}\n\\int_M |W|^3 dv_g \\leq ||W||_{L^{\\frac{n}{2}}(M,g)}||W||^2_{L^{\\frac{2n}{n-2}}(M,g)} \n\\end{align*}\nand hence\n$$\\left( 1 - 8C_S ||W||_{L^{\\frac{n}{2}}(M,g)}\\right)||W||^2_{L^{\\frac{2n}{n-2}}(M,g)} \\leq 0,$$\nwhich implies that $W$ vanishes identically on $M$ since\n$$||W||_{L^{\\frac{n}{2}}(M,g)} < \\tau_0:=\\frac{3\\alpha_0}{32n(n-1)} = \\frac{1}{8C_S}.$$\nTherefore, $(M, g)$ is isometric to a quotient of $\\mathbb{S}^n$.\n\\end{proof}\n\nAs for $n = 4$, we have\n\\begin{proof}[Proof of Theorem C]\nLet $\\hat g \\in [g]$ be the Yamabe metric, which means\n$$R_{\\hat g} 
\\left(Vol(M^4, \\hat g)\\right)^{\\frac{1}{2}} = Y(M^4, [g]).$$\nWe can also normalize it such that \n$$R_{\\hat g} = 12.$$\nAccording to the solution of the \\emph{Yamabe problem}, \n$$Y(M^4, [g]) \\leq Y(\\mathbb{S}^4, g_{\\mathbb{S}^4}) = 12 \\cdot \\left( \\frac{8}{3} \\pi^2\\right)^{\\frac{1}{2}} = 8 \\sqrt{6} \\pi$$\nand hence\n$$Vol(M^4, \\hat g )\\leq Vol(\\mathbb{S}^4, g_{\\mathbb{S}^4}) = \\frac{8}{3} \\pi^2.$$\n\nFrom the \\emph{Gauss-Bonnet-Chern formula},\n\\begin{align}\n\\int_{M^4} \\left( Q_{\\hat g} + \\frac{1}{4} |W_{\\hat g}|^2 \\right) dv_{\\hat g} = 8\\pi^2 \\chi(M^4),\n\\end{align}\nwhere \n\\begin{align}\nQ_{\\hat g}:= - \\frac{1}{6} \\Delta_{\\hat g} R_{\\hat g} - \\frac{1}{2}|E_{\\hat g}|^2 + \\frac{1}{24} R_{\\hat g}^2\n\\end{align}\nis the \\emph{Q-curvature} of the metric $\\hat g$. Since $R_{\\hat g} = 12$ is constant, $\\int_{M^4} Q_{\\hat g} \\, dv_{\\hat g} = - \\frac{1}{2}||E_{\\hat g}||^2_{L^2(M, \\hat g)} + 6 Vol(M^4, \\hat g)$. Thus,\n$$||E_{\\hat g}||^2_{L^2(M, \\hat g)} = \\frac{1}{2} ||W_{\\hat g}||^2_{L^2(M, \\hat g)} + 12 Vol(M^4, \\hat g) - 16 \\pi^2 \\chi(M^4) \\leq \\frac{1}{2} ||W_{\\hat g}||^2_{L^2(M, \\hat g)}+ 16 \\pi^2 ( 2 -\\chi(M^4))$$\nand hence\n\\begin{align*}\n||W_{\\hat g}||^2_{L^2(M, \\hat g)} + ||E_{\\hat g}||^2_{L^2(M, \\hat g)} &\\leq \\frac{3}{2} ||W_{\\hat g}||^2_{L^2(M, \\hat g)} + 16 \\pi^2 ( 2 -\\chi(M^4)) \\\\\n&= \\frac{3}{2} ||W_g||^2_{L^2(M,g)} + 16 \\pi^2 ( 2 -\\chi(M^4))\\\\\n& < \\frac{\\alpha_0}{128},\n\\end{align*}\nwhere we used the fact that $||W_g||_{L^2(M,g)}$ is conformally invariant for $4$-dimensional manifolds.\n\nOn the other hand, the metric $\\hat g$ is also Bach-flat, since Bach-flatness is conformally invariant for $4$-dimensional manifolds. Applying Theorem B to the Yamabe metric $\\hat g$, we conclude that $(M^4, \\hat g)$ is isometric to a quotient of the round sphere $\\mathbb{S}^4$. \n\nAmong quotients of an even-dimensional sphere, only the identity and $\\mathbb{Z}_2$ actions yield a smooth manifold. 
Therefore, $(M^4, g)$ is conformal to $\\mathbb{S}^4$ or $\\mathbb{R}P^4$ with canonical metrics.\n\\end{proof}\n\n\n\n\n\\bibliographystyle{amsplain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nMonte Carlo configuration interaction (MCCI)\\cite{MCCIGreer95,MCCIcodeGreer} offers the prospect of capturing many of the aspects of the full configuration interaction (FCI) wavefunction but using only a very small fraction of the configurations. The method repeatedly performs a configuration interaction calculation with a set of determinants which is enlarged by the addition of random single and double substitutions and reduced by the removal of states which have a coefficient in the wavefunction of magnitude less than a specified cut-off ($c_{\\text{min}}$). In principle the method can be applied to ground and excited states of single-reference or multi-reference systems.\n\nSingle-point energies have previously been calculated using MCCI,\\cite{MCCIGreer95} as have the bond dissociation energies of water and HF.\\cite{dissociationGreer} The errors of electronic excitation energies for atoms computed using MCCI were found to be small when compared with experiment in Ref.~\\onlinecite{excite1Greer}. Electronic excitation energies for small molecules have also been calculated using MCCI with errors of generally circa ten meV when compared with experiment yet using only a tiny fraction of the FCI space.\\cite{GreerMCCISpectra} Potential curves have been calculated for a variety of small systems using MCCI where it was found that non-parallelity errors approaching chemical accuracy when compared with FCI results could be produced using only a very small percentage of the FCI space even when the system was multireference.\\cite{MCCIpotentials} However the calculation of the curve for the very challenging system of fifty hydrogens presented difficulties for the method. 
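The selection loop described in the opening paragraph — enlarge the configuration set by random substitutions, diagonalise in the enlarged set, then prune states whose coefficient magnitude falls below $c_{\text{min}}$ — can be sketched on a toy matrix eigenproblem. Everything below (the random diagonally dominant "Hamiltonian", the sampling of new states, all names) is our own illustration, not the MCCI code itself:

```python
import numpy as np

# Toy "Hamiltonian" in a determinant basis: diagonally dominant, so
# determinant 0 plays the role of the Hartree-Fock reference.
rng = np.random.default_rng(0)
n = 60
H = 0.1 * rng.standard_normal((n, n))
H = 0.5 * (H + H.T)
np.fill_diagonal(H, np.arange(n, dtype=float))

c_min = 0.05          # coefficient cut-off, analogous to MCCI pruning
space = {0}           # start from the reference "determinant"

def subspace_ground(space):
    """Lowest eigenpair of H restricted to the selected subspace."""
    idx = sorted(space)
    w, v = np.linalg.eigh(H[np.ix_(idx, idx)])
    return w[0], dict(zip(idx, v[:, 0]))

for it in range(30):
    # "branch": add a few randomly chosen new determinants
    trial = space | set(rng.integers(0, n, size=8).tolist())
    energy, coeffs = subspace_ground(trial)
    # "prune": discard states below the coefficient cut-off, re-diagonalise
    space = {i for i, c in coeffs.items() if abs(c) >= c_min}
    energy, coeffs = subspace_ground(space)

exact = np.linalg.eigvalsh(H)[0]
print(len(space), energy, exact)
```

The subspace energy stays variational (never below the exact lowest eigenvalue) while the retained set remains a small fraction of the full basis, which is the behaviour MCCI exploits.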
Multipole moments, a non-variational quantity, have also been demonstrated to be calculated satisfactorily by MCCI for ground and excited states using only a very small fraction of the FCI space.\\cite{MCCImultipoleandIons} MCCI ionisation energies for atoms have been shown to compare favourably with FCIQMC and exact results, while electron affinities were more challenging with larger percentage errors but the absolute difference was not so poor.\\cite{MCCImultipoleandIons} Again the size of the MCCI space tended to be very small compared to the almost always computationally intractable size of the FCI space.\n\n\nIn this work we consider improving a MCCI calculation by using approximate natural orbitals or second-order perturbation theory. Although the Hartree-Fock molecular orbitals give the lowest energy single Slater determinant they may not be the most efficient choice for a CI calculation. The natural orbitals (NOs) are the eigenfunctions of the first-order reduced density matrix or one matrix \\cite{Lowdin55} which are considered to give better convergence than Hartree-Fock molecular orbitals. One possible benefit is that some natural orbitals may have eigenvalues (occupations) which are essentially zero so can be discarded and hence the size of the FCI space is reduced. For methods where a wavefunction is not easily available or defined, derivatives may be used to calculate the response or relaxed density matrix which can then be used to give an approximation to the natural orbitals. It has been demonstrated that the NOs are indeed optimal for a system of two electrons \\cite{LowdinShull56} but it is not clear if they are always the best choice for larger numbers of electrons and Ref.~\\onlinecite{bytautas:8217} suggests that split-localised orbitals may offer better convergence in larger systems. 
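Concretely, the natural orbitals and their occupations come from diagonalising the one matrix. The sketch below builds the spin-orbital one matrix for a toy two-determinant wavefunction using the standard rules for matrix elements between determinants (determinants are assumed stored as ascending tuples of occupied spin orbitals; the function and example are our own illustration):

```python
import numpy as np

def one_matrix(dets, coeffs, n_orb):
    """gamma_pq = <Psi|a_p^+ a_q|Psi> for Psi = sum_i c_i |D_i>,
    with each D_i an ascending tuple of occupied spin orbitals."""
    g = np.zeros((n_orb, n_orb))
    for di, ci in zip(dets, coeffs):
        for dj, cj in zip(dets, coeffs):
            si, sj = set(di), set(dj)
            if si == sj:                    # no difference: diagonal terms
                for m in di:
                    g[m, m] += ci * cj
            elif len(si - sj) == 1:         # single difference p <-> q
                p, = si - sj
                q, = sj - si
                # phase from bringing the determinants into coincidence
                sign = (-1) ** (di.index(p) + dj.index(q))
                g[p, q] += sign * ci * cj
    return g

# Psi = a|01> + b|02> : two electrons in three spin orbitals
a, b = 0.8, 0.6                     # normalised, a**2 + b**2 = 1
g = one_matrix([(0, 1), (0, 2)], [a, b], 3)
occ, nat_orbs = np.linalg.eigh(g)   # occupations and natural orbitals
print(np.round(occ, 6))             # two fully occupied NOs, one empty
```

The trace of the one matrix equals the electron number, and for this two-electron example the natural-orbital occupations collapse to $\{1, 1, 0\}$, illustrating why a CI expansion in NOs can be more compact.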
Approximate natural orbitals have been investigated as a possibly more efficient alternative to variationally optimising orbitals in CASSCF in Ref.~\\onlinecite{Abrams04} for a CASCI calculation. There it was found that potential curves, including the dissociation of ethylene, produced using CASCI with natural orbitals tended to usually have a non-parallelity error of only a few kcal\/mol compared with the CASSCF curve. The exception was the approximate natural orbitals from restricted MP2 which tended to perform poorly while the best results were achieved with CCSD approximate natural orbitals. Natural orbitals from CISD calculation for the ground-state of higher spin states have also been used for excited state MRCI calculations in Ref.~\\onlinecite{doi:10.1021\/ct200832u}. There excitation energies were found to have a difference of around 0.1 eV when compared with results using CASSCF\/MRCI.\n\n\nWe investigate the ability of approximate natural orbitals to improve the efficiency of an MCCI calculation. We use approximate natural orbitals from Quadratic CI with single and double substitutions (QCISD)\\cite{QCISD} or second-order M{\\o}ller-Plesset perturbation theory (MP2).\\cite{MP2} For a multireference system these methods may perform poorly or even fail to give sensible results with regards to the energy so we also consider approximate natural orbitals from an MCCI calculation.\nAs a fairer comparison than just a single energy calculation, we consider if MCCI with natural orbitals can offer improvements in accuracy and calculation time compared with standard MCCI for potential curves of water, carbon monoxide and the nitrogen molecule when using FCI as a benchmark.\n\n\nWe saw in Ref.~\\onlinecite{MCCIpotentials} that potential curves for small systems for which full configuration interaction results were available could generally be calculated to relatively high accuracy using MCCI. 
This was achieved with a very small fraction of the states and it is interesting to consider whether results can be improved by using the MCCI wavefunction as the starting point for a second-order perturbation calculation. To this end we adapt a second-order multireference perturbation method\\cite{HarrisonFCIperturbation} to work with MCCI and improve the efficiency of the removal of duplicate states. This method estimates the energy contribution from the neglected states in a MCCI wavefunction at the expense of the final energy not being variational nor being easily associated with a wavefunction. We test the assumption that the MCCI wavefunction will be a very good starting point for this perturbation so that MCCIPT2 should be able to produce a more accurate potential curve than MCCI, when both are compared with FCI, by accounting for more of the neglected dynamic correlation from a MCCI calculation.\n\\section{Methods}\n\n\\subsection{MCCI}\n\nThe algorithm\\cite{MCCIGreer95,MCCIcodeGreer} for MCCI is that the current MCCI wavefunction (usually initially comprising the occupied Hartree-Fock orbitals) has configuration state functions (CSFs) consisting of random single and double substitutions added to it. These substitutions are definitely attempted in CSFs with a coefficient of magnitude greater than a certain value while other CSFs have a $50\\%$ chance of a substitution occurring. The Hamiltonian matrix and overlap matrix are then constructed and the new wavefunction is found. Newly added states whose absolute value of coefficient is less than $c_{\\text{min}}$ are discarded and the process continues. Every ten iterations all states are considered for removal, not just newly added ones. 
This also occurs on the second last step and no states are added or removed on the final iteration.\n\nIn this work we also consider a version of MCCI with a modified behaviour for the removal\/addition of states, a convergence criterion and we also use Slater determinants (DETs) instead of CSFs for some computations, including the calculation of the MCCI NOs. When using DETs the MCCI wavefunction is not necessarily an eigenfunction of $\\hat{S}^{2}$ and more states may be required but the construction of the Hamiltonian matrix and first-order reduced density matrix is much simpler. We calculate the Hartree-Fock molecular orbital integrals using MOLPRO.\\cite{MOLPRO} In this work the initial MCCI wavefunction is the CSF or DET formed from the occupied restricted Hartree-Fock orbitals.\n\\subsection{Natural orbitals}\n\nThe first-order reduced density matrix or one matrix is defined as\n\\begin{eqnarray}\n\\nonumber \\gamma (\\vec{x}_{A},\\vec{x}_{B})=N\\int \\Psi^{*}(\\vec{x}_{A},\\vec{x}_{2},\\cdots,\\vec{x}_{N}) \\\\\n\\Psi(\\vec{x}_{B},\\vec{x}_{2},\\cdots,\\vec{x}_{N})d\\vec{x}_{2}\\cdots\\vec{x}_{N}\n\\end{eqnarray}\nwhich can be written in terms of the $M$ one-particle molecular orbitals as\n\\begin{equation}\n\\gamma (\\vec{x}_{A},\\vec{x}_{B})=\\sum_{i=1}^{M}\\sum_{j=1}^{M}\\phi_{i}^{*}(\\vec{x}_{A})\\bm{\\gamma}_{ij}\\phi_{j}(\\vec{x}_B).\n\\end{equation}\n We use MCCI with Slater determinants and a not too onerous number of iterations with the same $c_{\\text{min}}$ as the full MCCI calculation to construct approximate natural orbitals. We create the one matrix in the one-particle representation using the following method, beginning with $\\bm{\\gamma}=0$. We consider all the DETs forming the wavefunction. 
DETs $i$ and $j$ in maximum coincidence only contribute to $\\bm{\\gamma}$ if they either have no differences to give\n\\begin{equation}\n\\bm{\\gamma}_{mm}\\rightarrow\\bm{\\gamma}_{mm}+e_{p}c_{i}^{*}c_{j},\n\\end{equation}\nwhere $m$ runs over all orbitals in the DET, or one difference due to orbitals $k$ and $l$ which results in \n\\begin{equation}\n\\bm{\\gamma}_{kl}\\rightarrow\\bm{\\gamma}_{kl}+e_{p}c_{i}^{*}c_{j}.\n\\end{equation}\nHere $e_{p}$ is the sign due to putting the Slater determinants in maximum coincidence. We average over spins and then diagonalise the one matrix to give the MCCI natural orbitals. As we cannot be sure that a very small occupation is due to the approximation rather than being something that would occur in the FCI natural orbitals, then we include all the approximate natural orbitals. We recalculate the one and two-electron integrals using these approximate natural orbitals in MOLPRO\\cite{MOLPRO} then use them in a longer MCCI calculation using either DETs or CSFs.\n\nWe also consider the approximate natural orbitals from QCISD and MP2 calculations. QCISD can perhaps be thought of as a less complex approximation to coupled cluster singles and doubles (CCSD).\\cite{CCSD} In QCISD size consistency is introduced into a configuration interaction method but at the expense of the energy no longer being variational. MP2 uses the Hartree-Fock Hamiltonian as the zeroth-order approximation in a second-order perturbation to give an efficient way to account for some of the correlation. 
It is also size consistent but not variational.\nUsing MP2 or QCISD the one matrix may be approximated by the response one matrix to give approximate natural orbitals which we generate with MOLPRO.\\cite{MOLPRO} Although the natural orbitals we consider are approximate we shall refer to them as the natural orbitals from a certain method, e.g., MCCI natural orbitals.\n\n\n\n\\section{Single-point calculation using Natural orbitals}\nWe first consider carbon monoxide at its experimental equilibrium geometry\\cite{COdipoleExperiment} with a cc-pVDZ basis, $c_{\\text{min}}=5\\times10^{-4}$, and the two lowest energy MOs or two most occupied NOs frozen. We use 500 iterations of MCCI with Slater determinants, but we see in Fig.~\\ref{fig:NatorbCOiterations} that the calculations have essentially converged in much fewer iterations. The MCCI natural orbitals are calculated using a fifty iteration MCCI run.\n\n\\begin{figure}[ht]\\centering\n\\includegraphics[width=.45\\textwidth]{Fig1.eps}\n\\caption{MCCI ($c_{\\text{min}}=5\\times10^{-4}$) energy against number of iterations for CO at $R=2.1316$ Bohr with a cc-pVDZ basis set and with two frozen core orbitals when using either MOs, MP2 NOs, QCISD NOs or MCCI NOs.}\\label{fig:NatorbCOiterations}\n\\end{figure}\n\nThere is substantially faster convergence per iteration when using approximate natural orbitals here as can be seen in Fig.~\\ref{fig:NatorbCOiterations}. It appears that MP2 natural orbitals followed by those of QCISD and then MCCI all offer superior convergence to MOs here. However neither the time cost per iteration nor the overhead from calculating the natural orbitals is taken into account. One approach is to consider, to three decimal places, the highest final iteration energy and check how long it takes a calculation to first reach this energy or lower on the step after all states have been considered for removal. 
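The comparison criterion just described — the first time at which the energy reaches the target on a step following a full pruning of the state list — can be phrased as a small helper. The log format (per-iteration energies, cumulative wall times and a flag marking full-pruning steps) is our own assumption, not MCCI output:

```python
def time_to_target(energies, times, full_prune, target):
    """Return the first cumulative time at which the energy is at or
    below `target` on a step following a full pruning; None if never."""
    for e, t, pruned in zip(energies, times, full_prune):
        if pruned and e <= target:
            return t
    return None

# hypothetical iteration log (Hartree, seconds)
t = time_to_target([-112.90, -113.01, -113.03, -113.05],
                   [10, 20, 30, 40],
                   [False, True, False, True],
                   target=-113.036)
print(t)  # -> 40
```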
Here the final lowest energy ($-113.042$ Hartree) was from using QCISD NOs followed by MP2 NOs while the highest was for MOs ($-113.036$ Hartree). In Table \\ref{tbl:COeqTime} we display the time and number of DETs required to reach this energy. The time for the initial creation of the one and two electron integrals is not included as it is the same for each method. For MCCI NOs the integrals need to be recalculated and the time cost of this is included as is the QCISD and MP2 calculation time. We see that MP2 NOs took the least time while QCISD NOs used the fewest number of DETs. MCCI NOs were an improvement of the time and number of DETs compared with MOs but performed less well than the other two types of approximate natural orbitals we considered. As a system moves away from equilibrium it may be that the most efficient NOs are not from the same method. \n\n\\begin{table}[h]\n\\centering\n\\caption{Total time and number of DETs for CO to reach $E<-113.036$ Hartree on the step following when all states have been considered for removal.} \\label{tbl:COeqTime}\n\\begin{tabular*}{8.5cm}{@{\\extracolsep{\\fill}}lcc}\n\\hline\n\\hline\nOrbitals & DETs & Time (seconds) \\\\\n\\hline\n MOs & 9,919 & 248 \\\\\n MCCI NOs & 9,019 & 173 \\\\\n MP2 NOs & 7,453 & 115 \\\\\n QCISD NOs & 7,272 & 130 \\\\\n\\hline\n\\hline\n\\end{tabular*}\n\\end{table}\n\n\n\n\n\nWe now consider the carbon dimer with no frozen orbitals and a bond length of $R=1.6$ angstroms. Here the system is moving away from the equilibrium geometry of $R\\approx 1.25$ angstroms. We use the 6-31G* basis set and $200$ iterations with a cut-off value of $5\\times10^{-4}$. In this system we see in Fig.~\\ref{fig:NatorbC2iterations} that the fastest improvement per iteration is when using MP2 NOs, then it appears that MCCI NOs followed by QCISD NOs improve on the convergence per iteration compared with MOs. 
However we find that the MP2 natural orbitals actually give the highest energy on the final step of $-75.628$ Hartree. \n\\begin{figure}[ht]\\centering \n\\includegraphics[width=.45\\textwidth]{Fig2.eps}\n\\caption{MCCI ($c_{\\text{min}}=5\\times10^{-4}$) energy against number of iterations for C\\subscript{2} at $R=1.6$ angstroms with a 6-31G* basis set when using either MOs, MP2 NOs, QCISD NOs or MCCI NOs.}\\label{fig:NatorbC2iterations}\n\\end{figure}\n\nWe see in Table \\ref{tbl:C2eqTime} that now MP2 NOs produce the longest time to reach the specified energy but they still use fewer DETs than the canonical molecular orbitals. MCCI NOs now perform the best, with regards to time and number of states to reach this energy, but there is not much difference between the results using MCCI NOs and those using QCISD NOs.\n\n\n\n\\begin{table}[h]\n\\centering\n\\caption{Total time and number of DETs for C\\subscript{2} to first reach $E<-75.628$ Hartree on a step following the consideration of all states for removal.} \\label{tbl:C2eqTime}\n\\begin{tabular*}{8.5cm}{@{\\extracolsep{\\fill}}lcc}\n\\hline\n\\hline\nOrbitals & DETs & Time (seconds) \\\\\n\\hline\n MOs & 15,045 & 628 \\\\\n MCCI NOs & 11,755 & 377 \\\\\n MP2 NOs & 13,795 & 1134 \\\\\n QCISD NOs & 11,953 & 381 \\\\\n\\hline\n\\hline\n\\end{tabular*}\n\\end{table}\n\n\nWe note that if we pick a high enough energy then MP2 natural orbitals may give the fastest convergence and, in addition, for a single calculation the stochastic nature of the algorithm could affect the order of the methods. So for a fairer comparison we will now consider potential curves. As this requires numerous single-point calculation then any random improvements or deteriorations in the speed of the calculation should average out and rather than considering an arbitrary energy as the target we will use a convergence criterion for each single-point calculation. 
In addition to possible faster calculations and fewer states this should enable us to see what, if any, improvement the use of natural orbitals produce in the accuracy of the potential curves.\n\n\n\n\\section{Potential energy curve comparison}\n\nWe now use CSFs in the main MCCI calculation, but we still approximate the MCCI natural orbitals using a DET MCCI calculation. We introduce a convergence check in that the calculation for each point is run until the maximum difference in the last three energies following steps where all states are considered for deletion is $10^{-3}$ Hartree. Furthermore the MCCI method here is such that no new states are added on any iteration following a step where all states have been considered for removal; previously this only occurred on the last iteration. This ensures that only the energies of wavefunctions where all states satisfy the cut-off requirement are being compared. We use twelve processors for the MCCI calculations except for the construction and diagonalization of the one matrix which is carried out in serial.\n\n\nWe quantify the accuracy of the potential curves when FCI results are available using the non-parallelity error (NPE)\\cite{li:1024NPE} and $\\sigma_{\\Delta E}$ (see below).\n\nThe NPE takes into account that a potential is defined only up to an additive constant so two curves differing only by a constant should have no error. This error is defined as \n\\begin{equation}\nNPE=\\max_{i} |E^{\\text{FCI}}_{i}-E^{\\text{approx}}_{i}|-\\min_{i} |E^{\\text{FCI}}_{i}-E^{\\text{approx}}_{i}|\n\\end{equation}\nwhere $i$ ranges over all $M$ considered points. \n\nOne possible problem with using the NPE is that two curves with the same maximum and minimum error will have the same NPE regardless of their accuracy for the rest of the points. We attempt to incorporate the accuracy of the other points by considering the mean squared value of the energy difference. 
Here the constant $c$, which a potential may be shifted by, is chosen to minimise the sum \n\\begin{equation}\nS=\\frac{1}{M}\\sum_{i=1}^{M}\\left( \\Delta E_{i} -c \\right)^{2} \n\\end{equation}\nwhere $\\Delta E_{i}=E^{\\text{FCI}}_{i}-E^{\\text{approx}}_{i}$. Setting $\\frac{\\partial S}{\\partial c}=0$ leads to \n\\begin{equation}\nc=\\frac{1}{M}\\sum_{i=1}^{M} \\Delta E_{i}=\\mu_{\\Delta E}\n\\end{equation}\nso\n\\begin{equation}\n\\min_{c} S=\\frac{1}{M}\\sum_{i=1}^{M}\\left( \\Delta E_{i} -\\mu_{\\Delta E} \\right)^{2} =\\sigma^{2}_{\\Delta E}. \n\\end{equation}\nThis suggests the variance of the difference in energies $\\sigma^{2}_{\\Delta E}$ as a way to quantify the fit of two potential curves that takes into account all the considered points and that the curves can be shifted by a constant without changing their physics. To give a quantity in units of energy we then use the standard deviation of $\\Delta E$: $\\sigma_{\\Delta E}$.\n\n\n\\subsection{H\\subscript{2}O}\n\n\n\nWe consider the potential curve for the double hydrogen dissociation of water at a bond angle of $104.5$ degrees with a cc-pVDZ basis and one frozen core. We generate the FCI results using MOLPRO\\cite{MolproFCI1,MolproFCI2,MOLPRO} and use a cut-off of $c_{\\text{min}}=10^{-3}$ in the MCCI calculations. $100$ iterations are used to produce the MCCI natural orbitals here. \n\n\n\\begin{figure}[ht]\\centering \n\\includegraphics[width=.45\\textwidth]{Fig3.eps}\n\\caption{Energy (Hartree) against OH bond length $R$ (Bohr) for water in a cc-pVDZ basis with one frozen core using FCI, MCCI ($c_{\\text{min}}=10^{-3}$) with either MOs or QCISD NOs.}\\label{fig:H2OpotCurve}\n\\end{figure}\n\nIn Fig.~\\ref{fig:H2OpotCurve} we see that MCCI with MOs is very close to the FCI curve while when using QCISD NOs in MCCI there is a seemingly anomalous point at $R=4$ Bohr. 
This may be linked to the most occupied QCISD natural orbital having an occupation greater than two ($2.27$ here), suggesting that the response one matrix is not a good approximation to the actual one matrix. Non-physical occupations of the natural orbitals of the response one matrix of single-reference methods have been suggested as a test for when multireference methods are required in Ref.~\\onlinecite{GordonNatOrbDiag}. Interestingly, to four decimal places the largest occupancy is physical for larger $R$ when using QCISD, and we can see that the potential curve is again very close to that of the FCI. The same feature is present when using MP2 natural orbitals. \n\n\n\n\n\n\n\n Quantifying the accuracy of these NOs over the whole curve would therefore not be useful, so we instead first display the results for the fifteen points with $R<4$ Bohr. We include the time necessary for the calculation of the natural orbitals. For all the results, the time for the recalculation of the integrals when using MCCI NOs and the QCISD and MP2 calculation times are a very small fraction of the total time (less than a second in this case) and are approximately included by using the time for one appropriate geometry ($R=2$ Bohr for the first fifteen points) multiplied by the number of points considered. \n\\begin{table}[h]\n\\centering\n\\caption{Upper part: the first fifteen points ($R<4$ Bohr); lower part: all twenty points for the double hydrogen dissociation of water in a cc-pVDZ basis. 
NPE and $\\sigma_{\\Delta E}$ in kcal\/mol.} \\label{tbl:H2OFirst15andAll}\n\\begin{tabular*}{8.5cm}{@{\\extracolsep{\\fill}}lcccc}\n\\hline\n\\hline\nOrbitals & NPE & $\\sigma_{\\Delta E}$ & Mean CSFs & Time (s) \\\\\n\\hline\n MOs & 3.22 & 0.94 & 1507 & 1200 \\\\\n MCCI NOs & 4.18 & 1.23 & 1190 & 1491 \\\\\n MP2 NOs & 1.58 & 0.38 & 1124 & 795 \\\\\n QCISD NOs & 1.35 & 0.32& 1050 & 694 \\\\\n\\hline\n MOs & 3.22 & 0.92 & 1599 & 1856 \\\\\n MCCI NOs & 4.49 & 1.14 & 1147 & 2263 \\\\\n QCISD NOs\/MOs & 2.70 & 0.67 & 1256 & 1350 \\\\\n QCISD NOs\/MCCI NOs & 1.35 & 0.37 & 1042 & 1466 \\\\\n\\hline\n\\hline\n\\end{tabular*}\n\\end{table}\n\n\nThe upper part of Table \\ref{tbl:H2OFirst15andAll} shows that QCISD NOs perform the best over the first fifteen points in terms of accuracy, time and number of CSFs followed by MP2 NOs. It appears that, for bond lengths shorter than $4$ Bohr, the MCCI NOs perform less well with only the size of the final state smaller than the standard MOs and this is accompanied by a reduced accuracy and longer calculation times. We suggest that this is because the system is essentially well described by a single reference here and that the $100$ iteration Slater determinant run does not produce natural orbitals, apart from the largest five, with occupations greater than $0.1$ until $R=3.2$ Bohr. Furthermore the calculation to find the MCCI NOs has a wavefunction consisting of only a single Slater determinant for the smallest two bond lengths. We note that with only $50$ iterations it was even less likely to produce a MCCI wavefunction consisting of more than one DET which is why we used $100$ iterations for this system. In this case it would appear that the NPE is less good as we may not do much better compared with using MOs for short bond lengths and may even do worse yet we require more time to calculate the NOs. However we perhaps do better at larger bond lengths than when using MOs. This possible imbalance may contribute to a larger NPE. 
\n\n\n\nWe now consider all of the twenty FCI points and use either MOs or MCCI NOs for the last five and do not consider MP2 NOs due to the slightly superior performance of QCISD NOs over the first $15$ points. We plot the difference between the FCI result and that of MCCI using either molecular or natural orbitals in Fig.~\\ref{fig:H2OpotCurveError}. There we see that the natural orbitals do give similar results to the molecular orbitals when bond lengths are small, but are more accurate at larger bond lengths. This shows how using QCISD NOs until they are unphysical then proceeding with MCCI NOs results in an accurate curve as the error has a much smaller range than with the other approaches.\n\n\\begin{figure}[ht]\\centering \n\\includegraphics[width=.45\\textwidth]{Fig4.eps}\n\\caption{Energy error (Hartree) against OH bond length $R$ (Bohr) for water in a cc-pVDZ basis with one frozen core using MCCI ($c_{\\text{min}}=10^{-3}$) with MOs, QCISD\/MCCI NOs or MCCI NOs.}\\label{fig:H2OpotCurveError}\n\\end{figure}\n\n\n These observations are quantified in the lower part of Table \\ref{tbl:H2OFirst15andAll} where it seems to be the case that MCCI NOs do better at longer bond lengths as the NPE is even higher using just MCCI NOs. The highest accuracy is achieved when using QCISD NOs for points with $R<4$ Bohr then MCCI NOs for larger $R$ where now the NPE is $1.35$ kcal\/mol, half the size of when using the same procedure but with MOs instead of MCCI NOs. The time required is slightly longer when using the MCCI NOs but represents an increase of less than ten percent compared with the NPE halving. The mean number of CSFs at $1042$ is a substantial reduction of the FCI space which consists of around $8\\times 10^{7}$ Slater determinants when spatial symmetries are neglected. \n\nThis suggests the approach of using QCISD natural orbitals until the natural orbital occupations become unphysical then switching to MCCI natural orbitals. 
This should mean that a good approximation to the natural orbitals is achieved by QCISD when the correlation is essentially dynamic and then by MCCI when static correlation becomes important. \n\n\n\nThe results show that $\\sigma_{\\Delta E}$ and the NPE behave in the same way, with one slight difference: QCISD NOs over 15 points and QCISD NOs\/MCCI NOs over 20 points have the same NPE to two decimal places, but $\\sigma_{\\Delta E}$ increases a little for QCISD NOs\/MCCI NOs, revealing a small decrease in accuracy that the NPE does not show. Using MCCI NOs for the last five points halves the NPE compared with using MOs, while the $\\sigma_{\\Delta E}$ value is $0.55$ of its previous value.\n\n\n\n\n\n\\subsubsection{aug-cc-pVTZ}\nWe now increase the basis size to aug-cc-pVTZ while keeping the other parameters the same. Fig.~\\ref{fig:H2OpotCurveAugVTZ} shows that the potential curves behave generally as expected and that, when using natural orbitals, the energy is noticeably lower as dissociation is approached. There are no FCI results for comparison, but the curves and the results for the cc-pVDZ basis suggest that the NO method should be more accurate.\n\\begin{figure}[ht]\\centering \n\\includegraphics[width=.45\\textwidth]{Fig5.eps}\n\\caption{Energy (Hartree) against bond length (Bohr) of both hydrogens for water with an aug-cc-pVTZ basis using MCCI ($c_{\\text{min}}=10^{-3}$) with either MOs or NOs.}\\label{fig:H2OpotCurveAugVTZ}\n\\end{figure}\n\nFor the first fifteen points, where the QCISD response one matrix is physical, the calculation is also substantially faster: $0.98$ hours versus $4.6$ hours. Furthermore the wavefunctions require fewer CSFs, with an average of 1681 CSFs when using QCISD natural orbitals compared with 5744 CSFs when using MOs for the first fifteen points. 
When using MCCI NOs for the longer bond lengths and considering all points we see in Table \\ref{tbl:H2OaugVTZallPoints} that the calculation is substantially faster, although the improvement is not as great as that seen over the first fifteen points, and the mean number of CSFs is also much smaller. \n\n\n\n\\begin{table}[h]\n\\centering\n\\caption{Results for all points for the double hydrogen dissociation of water in an aug-cc-pVTZ basis.} \\label{tbl:H2OaugVTZallPoints}\n\\begin{tabular*}{8.5cm}{@{\\extracolsep{\\fill}}lcc}\n\\hline\n\\hline\nOrbitals & Mean CSFs & Time (Hours) \\\\\n\\hline\n MOs & 5825 & 7.15 \\\\\n QCISD NOs\/MCCI NOs & 1924 & 2.77 \\\\\n\\hline\n\\hline\n\\end{tabular*}\n\\end{table}\n \n\n\n\nWe note that, when neglecting symmetry, the number of DETs in the FCI space has increased from $8\\times 10^{7}$ to around $7\\times 10^{12}$ when using this larger basis, which is approximately $9\\times 10^{4}$ as many DETs. However MCCI with QCISD NOs takes $3.4$ times as long and with $2.4$ times as many CSFs for the points which it can be applied to compared with results for the method using cc-pVDZ. For all the points, using MOs gave a time scaling of $13.9$ and a scaling of $3.6$ for CSFs compared with the cc-pVDZ MCCI MO calculations. When using QCISD NOs\/MCCI NOs the time scaling was around $6.8$ and about $1.9$ times as many CSFs were required compared with this method using a cc-pVDZ basis. \nThe scalings appear promising when compared with the growth in the size of the FCI space and can hopefully be further improved.\n\n\\subsubsection{Excited state}\n\nWe briefly return to water in a cc-pVDZ basis and as an aside we demonstrate the use of MCCI natural orbitals for an excited state. Here the other types of approximate natural orbitals considered are not available. 
We note that the current version of MCCI calculates one eigenvalue with the Davidson algorithm, so the diagonalization routine can become unstable when dealing with excited states: for example, the program may find itself in a subset of the CSF space in which the previous excited state of interest is now the ground state. Hence we only consider one geometry: the first excited state of $A_{1}$ symmetry for water in $C_{2v}$ with $R=2$ Bohr. We now use only fifty iterations to create the MCCI NOs, as many DETs are found for the first excited state. Furthermore no orbitals are frozen; however, we still employ the approximate NOs in an MCCI CSF calculation. We see in Fig.~\\ref{fig:H2Oexcite} that when using the MCCI NOs the energy is initially substantially higher than with MOs but rapidly decreases and becomes slightly lower than that from MCCI MOs at convergence. The final MCCI wavefunction uses fewer CSFs when NOs are employed here: 2308 with MOs versus 1755 with NOs. With MOs the time to convergence was 149 seconds while only 102 seconds were needed using MCCI NOs; however, the calculation of the NOs was more involved here, so the total time when using MCCI NOs was around 244 seconds. It would appear that fewer iterations for the calculation of the MCCI NOs may be useful for reducing the total calculation time.\n\n\n\\begin{figure}[ht]\\centering \n\\includegraphics[width=.45\\textwidth]{Fig6.eps}\n\\caption{Energy (Hartree) against number of iterations for water in a cc-pVDZ basis with no frozen cores and an OH bond length of $R=2$ Bohr using MCCI ($c_{\\text{min}}=10^{-3}$) with MOs or MCCI NOs compared with the FCI result.}\\label{fig:H2Oexcite}\n\\end{figure}\n\n Future work is planned using state-averaging of the MCCI wavefunction for a few states to reduce instabilities in the calculation and enable better calculation of excited potential energy curves. 
There is also the possibility of other spin states being reached in the MCCI DET calculation, and the consideration of more than one eigenvalue may allow better discrimination between these spin states. The reasonably promising result for the use of NOs in the calculation of an excited state should hopefully be improved upon using these approaches. \n\\subsection{N\\subscript{2}}\n\nWe now consider the MCCI potential energy curve for N\\subscript{2} dissociation with two frozen cores in a cc-pVDZ basis. Here fifty iterations are used to create the MCCI NOs. The fifteen FCI results are gathered from Refs.~\\onlinecite{LarsenN2FCI2000,GwaltneyN2FCI2002,chanN2FCI2002}.\n\nSimilar to our findings for water, the MCCI run at small $R$ does not result in a state beyond that comprising the occupied MOs. This occurs for both cut-offs we consider, so we continue with the use of QCISD NOs until they become unphysical, at which point we switch to MCCI NOs. In this case this does not occur until the last FCI point (2.225 angstroms). We see in Table \\ref{tbl:N2nos} that the use of approximate natural orbitals reduces the calculation time and the average number of states required. The accuracy is also improved by the use of natural orbitals here.\n\n\n\\begin{table}[h]\n\\centering\n\\caption{N\\subscript{2} results with cc-pVDZ and $c_{\\text{min}}=10^{-3}$. NPE and $\\sigma_{\\Delta E}$ in kcal\/mol.} \\label{tbl:N2nos}\n\\begin{tabular*}{8.5cm}{@{\\extracolsep{\\fill}}lcccc}\n\\hline\n\\hline\nOrbitals & NPE & $\\sigma_{\\Delta E}$ & Mean CSFs & Time (Hours) \\\\\n\\hline\n MOs & 6.37 & 1.69 & 2909 & 1.69 \\\\\n QCISD NOs\/MCCI NOs & 5.03 & 1.39 & 2478 & 1.10 \\\\ \n\\hline\n\\hline\n\\end{tabular*}\n\\end{table}\n\n\nWith a smaller cut-off, Table \\ref{tbl:N2nossmallercmin} shows that the calculation takes longer but the speedup due to the use of approximate natural orbitals is of a similar factor.
The improvement in accuracy is not quite as large a factor as for the larger cut-off, but again the approximate NOs have improved both calculation time and accuracy.\n\\begin{table}[h]\n\\centering\n\\caption{N\\subscript{2} results with cc-pVDZ and $c_{\\text{min}}=5\\times10^{-4}$. NPE and $\\sigma_{\\Delta E}$ in kcal\/mol.} \\label{tbl:N2nossmallercmin}\n\\begin{tabular*}{8.5cm}{@{\\extracolsep{\\fill}}lcccc}\n\\hline\n\\hline\nOrbitals & NPE & $\\sigma_{\\Delta E}$ & Mean CSFs & Time (Hours) \\\\\n\\hline\n MOs & 3.98 & 1.06 & 7185 & 7.55 \\\\\n QCISD NOs\/MCCI NOs & 3.49 & 0.87 & 5758 & 4.98 \\\\ \n\\hline\n\\hline\n\\end{tabular*}\n\\end{table}\n\nWe see in Fig.~\\ref{fig:N2natpotError} that the error of the MCCI results when compared with FCI decreases when approximate natural orbitals are used and when the cut-off is lowered from $c_{\\text{min}}=10^{-3}$ to $c_{\\text{min}}=5\\times10^{-4}$. The reduction due to the smaller cut-off is greater than that due to using approximate NOs.\n\\begin{figure}[ht]\\centering \n\\includegraphics[width=.45\\textwidth]{Fig7.eps}\n\\caption{Energy error (Hartree) against bond length (Bohr) for N\\subscript{2} when using MCCI with a cc-pVDZ basis, two different cut-off values and either MOs or approximate NOs. }\\label{fig:N2natpotError}\n\\end{figure}\n\n\n\\subsection{CO}\n\n\nWe use a cc-pVDZ basis set to model the dissociation of carbon monoxide and freeze two of the orbitals. For MCCI the cut-off is $c_{\\text{min}}=5\\times 10^{-4}$ and $50$ iterations are used for the generation of the MCCI NOs. The FCI space consists of $4\\times10^{9}$ Slater determinants when neglecting symmetry. We find that the QCISD NOs become unphysical at $R=3.2$ Bohr here; if we consider the $17$ points with bond lengths smaller than this, we see in Table \\ref{tbl:COfirst17Points} that accuracy, time and the size of the wavefunction are all improved by the use of NOs.
Interestingly, the most accurate curve for the first seventeen points is due to the MCCI NOs, in contrast to the results for water and N\\subscript{2}. Here the occupied MCCI NOs are never just the occupied MOs. The fastest calculation and fewest CSFs on average both belong to the calculation using QCISD NOs. \n \n\\begin{table}[h]\n\\centering\n\n\\caption{Results for the first $17$ points for CO ($R\\leq 3$ Bohr). NPE and $\\sigma_{\\Delta E}$ in kcal\/mol.} \\label{tbl:COfirst17Points}\n\\begin{tabular*}{8.5cm}{@{\\extracolsep{\\fill}}lcccc}\n\\hline\n\\hline\nOrbitals & NPE & $\\sigma_{\\Delta E}$ & Mean CSFs & Time (Hours) \\\\\n\\hline\n MOs & 3.11 & 0.89 & 7053 & 6.76 \\\\\n QCISD NOs & 2.77 & 0.80 & 5616 & 4.74 \\\\\n MCCI NOs & 1.58 & 0.41 & 6069 & 5.79 \\\\\n\\hline\n\\hline\n\\end{tabular*}\n\\end{table}\n\nThe MP2 natural orbitals become unphysical sooner than those of QCISD for this system: the largest occupation is around $2.03$ at $3$ Bohr, but some small negative higher occupations occur at shorter bond lengths.\nThe MCCI point at $3$ Bohr is then of poor accuracy. If we exclude this and compare over the first 16 points, we have an NPE of 8.28 kcal\/mol, compared with 2.75 kcal\/mol when using the QCISD natural orbitals. This poor performance seems to be due to the occurrence of negative, although small, natural orbital occupations.\n\n\nWe see in Fig.~\\ref{fig:COpotCurve} that the curves appear to be well behaved and are close to the FCI points where they are available. We note that we were unable to calculate FCI points for larger bond lengths due to disk space requirements. For the $19$ points for which we have FCI results, the NPE values in kcal\/mol are $1.58$ for MCCI NOs, $3.37$ for MOs, and $3.40$ for QCISD NOs\/MCCI NOs, while the $\\sigma_{\\Delta E}$ values in kcal\/mol are respectively $0.39$, $1.01$ and $0.91$.
It is interesting that the ordering of MOs and QCISD NOs\/MCCI NOs with regard to accuracy changes in this case depending on whether accuracy is quantified using the NPE or $\\sigma_{\\Delta E}$, although the differences are small.\n \n\\begin{figure}[ht]\\centering \n\\includegraphics[width=.45\\textwidth]{Fig8.eps}\n\\caption{Energy (Hartree) against bond length $R$ (Bohr) for CO with a cc-pVDZ basis using FCI, MCCI ($c_{\\text{min}}=5\\times 10^{-4}$) with either MOs or QCISD NOs.}\\label{fig:COpotCurve}\n\\end{figure}\n\nThe energy error when compared with the available FCI results is depicted in Fig.~\\ref{fig:COpotCurveError}. This reveals that the lowest error is achieved when using QCISD NOs; however, the error increases with bond length until it becomes similar to that found when using MCCI NOs. The smallest range of errors comes from using MCCI NOs, which results in this approach having the lowest NPE and $\\sigma_{\\Delta E}$.\n\\begin{figure}[ht]\\centering \n\\includegraphics[width=.45\\textwidth]{Fig9.eps}\n\\caption{Energy error (Hartree) against bond length $R$ (Bohr) for CO with a cc-pVDZ basis for MCCI ($c_{\\text{min}}=5\\times 10^{-4}$) with either MOs, QCISD\/MCCI NOs or MCCI NOs.}\\label{fig:COpotCurveError}\n\\end{figure}\n\nWe see in Table \\ref{tbl:COallPoints} that, when all points forming the curve are considered, the use of approximate natural orbitals accelerates the calculation and uses fewer CSFs, and the potential energy curves suggest that there should also be an improvement in accuracy.
The fastest was a combination of QCISD and MCCI NOs, but the results for the first $19$ points suggest that the most accurate results in this case would perhaps be due to MCCI NOs.\n\n\n\\begin{table}[h]\n\\centering\n\\caption{Results considering all $26$ points for CO.} \\label{tbl:COallPoints}\n\\begin{tabular*}{8.5cm}{@{\\extracolsep{\\fill}}lcc}\n\\hline\n\\hline\nOrbitals & Mean CSFs & Time (Hours) \\\\\n\\hline\n MOs & 8,208 & 20.72 \\\\\n QCISD NOs\/MCCI NOs & 6,993 & 17.88 \\\\\n MCCI NOs & 7,289 & 18.93 \\\\\n\\hline\n\\hline\n\\end{tabular*}\n\\end{table}\n\n\nWe note that the $17$ FCI points up to and including $R=3$ Bohr required around $709$ processor hours, which we can approximately equate to $59$ hours when running on $12$ processors. This is still around ten times slower than the MCCI calculation using MCCI NOs over the same number of points; furthermore, storage space issues meant that the FCI calculation could not be run to convergence for $R\\geq 3.6$ Bohr.\n\n\n\n\n\n\n\n\n\\section{Second-order perturbation theory}\n\n\n\nThe second-order perturbation scheme for configuration interaction in Ref.~\\onlinecite{HarrisonFCIperturbation} considers an energy lowering\n\\begin{equation}\n\\nonumber \\Delta E_{K}=\\sum_{I} \\frac{|\\bra{I} \\hat{H} \\ket{K} |^{2}}{E_{K}-\\bra{I} \\hat{H} \\ket{I}}.\n\\end{equation}\nHere $\\ket{K}$ is the current CI wavefunction, while the sum is over all $\\ket{I}$ which are formed by single and double substitutions from $\\ket{K}$. If the contribution from any $\\ket{I}$ is greater than a threshold then that $\\ket{I}$ is added to the reference space and a new wavefunction is found by diagonalising the Hamiltonian. The process is continued until no new states are added to the CI wavefunction; the final $\\Delta E_{K}$ then gives an estimate of the energy lowering due to the neglected states.
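The selection step described above can be sketched in a few lines. This is a schematic illustration over a dense model Hamiltonian, not the production code; in MCCIPT2 only the singles and doubles of the reference would be generated, and all names here are illustrative:

```python
import numpy as np

def pt2_step(H, c_ref, ref_idx, E_K, threshold):
    """One pass of the second-order selection scheme: sum the
    contributions of all states |I> outside the reference space and
    flag those above `threshold` for addition to the reference.
    H: (N, N) symmetric Hamiltonian over an illustrative state list;
    c_ref: CI coefficients of |K> on the states ref_idx; E_K: <K|H|K>.
    """
    outside = [i for i in range(H.shape[0]) if i not in set(ref_idx)]
    delta_E, to_add = 0.0, []
    for i in outside:
        h_ik = H[i, ref_idx] @ c_ref      # <I|H|K> = sum_k c_k <I|H|k>
        contrib = h_ik ** 2 / (E_K - H[i, i])
        delta_E += contrib
        if abs(contrib) > threshold:
            to_add.append(i)              # grow the reference space
    return delta_E, to_add
```

The pass would be repeated, rediagonalising over the enlarged reference, until `to_add` comes back empty; the final `delta_E` is then the estimate of the energy lowering due to the neglected states.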
We use this scheme with the final wavefunction from an MCCI calculation to attempt to account for more of the dynamic correlation (MCCIPT2). For this we use an MCCI version where states are again added on a step following one where all states have been considered for removal. We note that the program is run for $200$ iterations on eight processors without a convergence check here. \n\n\nIf we write $\\hat{H}=\\hat{H}_{0}+\\hat{H}'$ with $\\hat{H}_0 \\ket{\\Psi_{MCCI}}=E_{MCCI}\\ket{\\Psi_{MCCI}}$, then for Slater determinants we have $\\bra{I} \\hat{H} \\ket{K} = \\bra{I} \\hat{H}' \\ket{K}$, but for the non-orthogonal CSFs used in MCCI we need to use $\\bra{I} \\hat{H} \\ket{K}-E_{K}\\bra{I} K \\rangle = \\bra{I} \\hat{H}' \\ket{K}$ in the numerator to give\n\\begin{equation}\n\\nonumber \\Delta E_{K}=\\sum_{I} \\frac{|\\bra{I} \\hat{H} \\ket{K}-E_{K}\\bra{I} K \\rangle |^{2}}{E_{K}-\\bra{I} \\hat{H} \\ket{I}}.\n\\end{equation}\nHere all states are normalised. We use $c_{\\text{min}}$ as the threshold to consider whether a state, in the PT2 scheme, should be added to the MCCI wavefunction. We note that for CSFs we use the same procedure\\cite{mcciGreer98} of a random walk through the branching diagram as in MCCI. This, followed by the removal of duplicates, ensures that the CSFs are linearly independent, but it is conceivable that some CSFs are thereby neglected.\n\nThe slowest step in the original PT2 method was checking if a prospective state was a duplicate in the set of all single and double substitutions or if it should be added to them.\\cite{HarrisonFCIperturbation} If the size of the set $I$ is $N_{I}$, then this requires $O(N_{I}^{2})$ operations if we check each new member against all previous ones and assume that the size of the space without duplicates is approximately a constant fraction of the size of $I$.
As $N_{I}$ is expected to be very large compared with the number of states in the MCCI wavefunction, we consider sorting the list of $I$ by alpha and beta string using the quicksort algorithm.\\cite{Hoare62} This will tend to need $O(N_{I}\\log(N_{I}))$ operations, followed by one pass through the sorted list of $O(N_{I})$ to delete repeated states. We also have to delete any members of $K$ in $I$, but this is quick as $K$ is small in comparison with $I$. The set of $I$ can then be split amongst processors to calculate $\\Delta{E}$ in parallel, but this is currently implemented only in the case of Slater determinants. We note that a small test calculation with $10$ CSFs in the final MCCI wavefunction took around ten times as long when not using the new method of removing duplicates. \n\nWe test MCCIPT2 on N\\subscript{2} in a cc-pVDZ basis with two frozen cores and an MCCI cut-off of $c_{\\text{min}}=10^{-3}$. The MCCI calculations are carried out using eight processors. Two hundred iterations are used for each MCCI calculation.\n\nSlater determinants did not work so efficiently here: new states were discovered when using MCCIPT2, and to achieve a smooth potential curve we found it necessary to run another MCCI calculation each time, with the reference taken as the last MCCI wavefunction plus the added PT2 states, until no new states were found. Nevertheless the use of MCCIPT2 improved the accuracy from an NPE of 11.01 kcal\/mol for MCCI with PT2 states to an NPE of 6.53 kcal\/mol for MCCIPT2, while $\\sigma_{\\Delta E}$ reduced from 3.12 kcal\/mol to 1.86 kcal\/mol.\n\nWhen using CSFs no states were found by the PT2 procedure with a large enough contribution to be added to the MCCI wavefunction. This suggests that, with regard to our requirement for adding states using PT2, the MCCI wavefunction is, in a sense, optimal when using CSFs here.
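The sort-based duplicate removal described above can be sketched as follows; the (alpha, beta) string encoding and the function name are illustrative, not taken from the MCCI code:

```python
def remove_duplicates(dets, reference):
    """Sort-based cleanup of the singles/doubles space I.
    Each determinant is an (alpha_string, beta_string) tuple -- an
    illustrative encoding of the occupation strings.  Sorting costs
    O(N log N), replacing the O(N^2) pairwise duplicate check, and a
    single O(N) pass then drops repeats and any members of the
    (small) reference space K.
    """
    ref = set(reference)              # K is small compared with I
    unique, prev = [], None
    for d in sorted(dets):            # quicksort-style O(N log N) step
        if d != prev and d not in ref:
            unique.append(d)
        prev = d
    return unique
```

Splitting the resulting list amongst processors for the $\Delta E$ sum is then straightforward, since the per-state contributions are independent.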
We see in Fig.~\\ref{fig:MCCIPT2csfN2} that the MCCIPT2 curve appears to be of higher accuracy, as at times it is difficult to distinguish from the FCI curve on the scale of the graph. The plot of differences between the MCCI and FCI energies (Fig.~\\ref{fig:MCCIPT2csfN2Error}) shows that the errors are much smaller and a little more balanced using MCCIPT2. The NPE for MCCI here was similar to previous MCCI calculations for nitrogen at $6.18$ kcal\/mol, and this was reduced to 3.42 kcal\/mol when using MCCIPT2. The $\\sigma_{\\Delta E}$ value was lowered from 1.83 kcal\/mol to 0.92 kcal\/mol by using MCCIPT2. \n\n\n\\begin{figure}[ht]\\centering \n\\includegraphics[width=.45\\textwidth]{Fig10.eps}\n\\caption{Energy (Hartree) against bond length (angstrom) for N\\subscript{2} with a cc-pVDZ basis using MCCI and MCCIPT2 with CSFs and $c_{\\text{min}}=10^{-3}$, compared with FCI results.\\cite{LarsenN2FCI2000,GwaltneyN2FCI2002,chanN2FCI2002}}\\label{fig:MCCIPT2csfN2}\n\\end{figure}\n\n\n\\begin{figure}[ht]\\centering \n\\includegraphics[width=.45\\textwidth]{Fig11.eps}\n\\caption{Energy error (Hartree) against bond length (angstrom) for N\\subscript{2} with a cc-pVDZ basis when using MCCI or MCCIPT2 with CSFs and $c_{\\text{min}}=10^{-3}$. }\\label{fig:MCCIPT2csfN2Error}\n\\end{figure}\n\nThe time for a single-point MCCI calculation of $200$ iterations on 8 processors ranged from less than one minute to around $1.3$ hours as $R$ increased here, while the total time including the PT2 calculation on one processor, for this proof-of-concept program, ranged from less than four minutes to almost 2 hours as $R$ increased. The number of states comprising the MCCI wavefunction ranged from around $1000$ to almost $5000$ as $R$ increased, while the number of states in the PT2 energy-lowering calculation ranged from $0.4$ million to $1.6$ million.
The accuracy of MCCIPT2 at $c_{\\text{min}}=10^{-3}$ was better than that of MCCI at $c_{\\text{min}}=5\\times 10^{-4}$ (see Table \\ref{tbl:N2nossmallercmin}); however, the time was longer at around $10.7$ hours, although this was on $8$ processors and without a convergence check. If we consider the time for the PT2 step only ($4.18$ hours) and reasonably assume it would be similar if applied to the MO MCCI $c_{\\text{min}}=10^{-3}$ results of Table \\ref{tbl:N2nos}, then this suggests a time of around $5.9$ hours, which would be faster than MCCI with MOs at $c_{\\text{min}}=5\\times 10^{-4}$ but not MCCI with approximate natural orbitals at this cut-off. The results are encouraging, and the PT2 CSF code for MCCI has room for improvement, e.g., parallelisation and more efficient calculation of matrix elements.\n\n\n\n\n\\section{Summary}\n\n\nWe introduced a way to approximate natural orbitals in MCCI, and we have seen that approximate natural orbitals from an MP2, QCISD or MCCI run could reduce the time and number of states necessary for a single-point MCCI calculation when using Slater determinants. We introduced a measure of accuracy of a potential curve ($\\sigma_{\\Delta E}$) that takes into account that the curve can be shifted by a constant but, unlike the non-parallelity error, considers all points in the curve. For the curves considered in this paper the behavior of the two measures was usually similar, although there were occasions when the NPE did not change but $\\sigma_{\\Delta E}$ did, or when the ordering of accuracy under the two measures changed for small differences.\n\nFor the potential curve for double hydrogen dissociation of water in a cc-pVDZ basis, we found that if the QCISD or MP2 natural orbitals became unphysical, the accuracy of the MCCI potential curve could be severely impacted.
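The two curve-accuracy measures used throughout this work can be computed as in the following sketch; interpreting $\sigma_{\Delta E}$ as the standard deviation of the pointwise error is our reading of its description here (shift-invariant, using all points of the curve):

```python
import numpy as np

def curve_errors(E_approx, E_exact):
    """Non-parallelity error (NPE) and sigma_Delta_E for a potential
    energy curve, given approximate (e.g. MCCI) and reference (e.g.
    FCI) energies at the same set of geometries.  Both measures are
    unchanged if the whole curve is shifted by a constant."""
    dE = np.asarray(E_approx) - np.asarray(E_exact)
    npe = dE.max() - dE.min()     # spread of the error along the curve
    sigma = dE.std()              # uses every point, not just extremes
    return npe, sigma
```

A constant offset between the two curves gives zero for both measures, while a single poorly converged point raises the NPE by its full deviation but raises $\sigma_{\Delta E}$ only in proportion to its weight among all points.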
The results suggested that using QCISD natural orbitals until they had occupations greater than two or negative occupations, then switching to MCCI natural orbitals (QCISD NOs\/MCCI NOs), offered the largest improvement in accuracy and number of CSFs, and took less time than when using molecular orbitals. Similar results were seen for the potential curve for N\\subscript{2} in a cc-pVDZ basis. We noted that the MCCI natural orbitals could be unsuitable at bond lengths where a single reference was a good approximation, as here the only occupied MCCI natural orbitals were the occupied molecular orbitals when using short MCCI calculations. We used the approach of QCISD NOs\/MCCI NOs for a potential curve of water in an aug-cc-pVTZ basis and saw good improvements in calculation time and the number of CSFs required. The scaling in calculation time compared with the cc-pVDZ basis was very much smaller than the increase in the size of the FCI space.\n\nFor the potential curve for the dissociation of carbon monoxide, the MCCI potential curve was most accurate when using MCCI natural orbitals for the points for which we had FCI results. The calculation time for the entire curve was a little longer than when using QCISD then MCCI natural orbitals but was still better than when using molecular orbitals. We note that the use of approximate natural orbitals here did not always improve convergence or reduce the error. However, by using QCISD NOs\/MCCI NOs in MCCI, calculation speed and accuracy were seen to be increased when compared with results using MOs.
This small sample of molecules seems to suggest that the MCCI natural orbitals should be used, unless there are many MCCI natural orbitals with zero occupation at the start of the curve, in which case QCISD NOs\/MCCI NOs should be employed.\n\nWe saw that an adaptation of a second-order perturbation scheme\\cite{HarrisonFCIperturbation} combined with MCCI (MCCIPT2) could run faster when using a new method to remove duplicates in the space of single and double substitutions of the reference. We found that at the same level of cut-off, the MCCIPT2 calculation with Slater determinants was much less efficient than that with CSFs. MCCIPT2 gave results with higher accuracy than the MCCI calculation alone for the potential curve of the dissociation of the nitrogen molecule. \n\n\n\\acknowledgements{We thank the European Research Council (ERC) for funding under the European Union's Seventh Framework Programme (FP7\/2007-2013)\/ERC Grant No. 258990.} \n\n\n\\providecommand{\\noopsort}[1]{}\\providecommand{\\singleletter}[1]{#1}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nElectromagnetic forbidden transitions of Mo VI, both magnetic dipole (M1)\nand electric quadrupole (E2), are important for\ntemperature and density estimations of tokamak plasmas\n\\cite{feldman, herter}, especially in the collisional-radiative\nmodel \\cite{fournier, quinet}. The long lifetimes of metastable states\nare dominated by these forbidden transitions, and these states are\ngenerally difficult to observe in laboratory plasmas due to strong\ncollisions. However, these forbidden transitions of Mo VI have been\nobserved in the laboratory in an electron spin resonance experiment\n\\cite{baur}, and therefore they can serve as one of the sources of\ndensity estimation in astrophysical plasmas, where collision rates are\nvery low due to the highly dilute interstellar medium \\cite{charro}.
Accurate\nestimation of the abundance of molybdenum in the atmospheres of\nevolved stars is important for understanding stellar nucleosynthesis\n\\cite{Orlov}.\n\nHexavalent molybdenum, isoelectronic to rubidium with 4p$^6$4d as\nits ground state configuration, is generated by electron impact in the\natomic collision process. The electron-impact ionization of\nmultiply charged Mo ions, relevant to astrophysics and laboratory\nplasma research, has also been investigated \\cite{hathiramani}.\nRecently, Fisker et al. have suggested that the lightest isotope\nof molybdenum may originate in proton-rich type II\nsupernovae \\cite{fisker}. The necessity of accurately estimating allowed\ndipole transition strengths to determine their mixing effects in the\ndipole-forbidden transitions of Mo VI is explicitly discussed by T.\nYamamoto \\cite{yamamoto}. Again, the transition strength between the\nfine structure states of the 4d level can reflect the electronic structure\nof Mo VI in a crystal \\cite{szotek}.\n\nA few calculations have been carried out to study the electric dipole\n(E1) transitions in Mo VI over the last few decades using mean-field\ntheory \\cite{migdalek,zilitis}. More recently, J. Reader \\cite{reader} has\nestimated the E1 transition probabilities among low-lying states by\nevaluating transition strengths in a semiempirical approach with\nthe experimental excitation energies.\n\nFor this single reference system, Mo VI, we have performed\na relativistic coupled-cluster (RCC) calculation with single (S),\ndouble (D) and partial triple (T) excitations in the framework of\nFock space multi-reference (FSMR) theory.
Both the excitation energies and\ntransition probabilities are determined using this RCC method, from\nwhich the lifetimes of many low-lying states are estimated.\n\n\n\\section{Theory and Method of Calculations}\n\n\\subsection{Theory}\n\nThe oscillator strength for an E1 transition from $|\\Psi_f\\rangle$ to\n$|\\Psi_i\\rangle$ is given as\n\\begin{equation}\\label{eq7}\nf_{fi} ={2\\over {3g_f}}\\Delta E_{fi}\\times |D_{fi}|^2 ,\n\\end{equation}\nwhere $\\Delta E_{fi}$ is the excitation energy between the upper\nand lower states and $g_f=2J_f+1$ is the degeneracy factor of the\nupper state with total angular momentum $J_f$.\n\nThe single particle reduced matrix elements for the E1, E2 and M1\ntransition operators are given in \\citep{johnson}. The emission\ntransition probabilities (in sec$^{-1}$) for the E1, E2 and M1\nchannels from state {\\it f} to {\\it i} can be expressed as\n\\begin{equation}\nA^{E1}_{fi} =\n\\frac{2.0261\\times10^{18}}{\\lambda^{3}(2j_f+1)}S^{E1},\n\\end{equation}\n\\begin{equation} A^{E2}_{fi} =\n\\frac{1.11995\\times10^{18}}{\\lambda^{5}(2j_f+1)}S^{E2},\n\\end{equation}\n\\begin{equation}\nA^{M1}_{fi} =\n\\frac{2.69735\\times10^{13}}{\\lambda^{3}(2j_f+1)}S^{M1},\n\\end{equation}\nwhere $S^O = {|{\\langle \\Psi_f|O|\\Psi_i\\rangle}|}^2$ is the transition strength\nfor the corresponding operator $O$ (in a.u.) and $\\lambda$ (in\n\\AA ) is the corresponding transition wavelength.\n\nThe lifetime of a particular excited state $i$ can be computed as the\nreciprocal of the total transition probability, $\\sum_{j} A_{ij}$ (in\nsec$^{-1}$), arising from all possible states $j$ due to spontaneous\nelectromagnetic transitions, i.e.\n\\begin{equation}\\label{eq513}\n\\tau_{i} = \\frac {1}{\\sum_{j}{A_{ij}}}.\n\\end{equation}\n\n\\subsection{Fock Space Multi-reference RCC theory}\n\nThe FSMRCC method is one of the most powerful highly correlated\nmany-body approaches due to its all-order structure that accounts for\ncorrelation effects \\cite{lindgren}.
The FSMRCC, which is mainly meant for\nmulti-reference systems, is used here for the one-valence-electron case and\nhas been described in detail elsewhere \\cite{lindgren, mukherjee,\nHaque, Pal}. Here we present the method briefly.\n\nWe first consider the Dirac-Coulomb Hamiltonian for a closed-shell\n$N$-electron system, which is given by\n\\begin{equation}\\label{eq2.1}\n{\\mbox H}=\\sum_{i=1}^{N}\\left[c\\vec{\\alpha_{i}}\\cdot\\vec{p}_{i}+\\beta\nmc^{2}\n +V_{\\mathrm{Nuc}}(r_{i})\\right]+\\sum_{i<j}\\frac{1}{r_{ij}}.\n\\end{equation}\n\\begin{table}[h]\n\\centering\n\\begin{adjustbox}{max width=\\columnwidth}\n\\begin{tabular}{|>{\\columncolor[HTML]{EFEFEF}}l |l|}\n\\hline\n\\cellcolor[HTML]{C0C0C0}Acronym & \\cellcolor[HTML]{C0C0C0}Full name \\\\ \\hline\nNPM & Nasal Passage Model \\\\ \\hline\nTSPD & Target Site Particulate Deposition \/ Delivery \\\\ \\hline\nCT & Computed Tomography \\\\ \\hline\nCRS & Chronic Rhinosinusitis \\\\ \\hline\nOMC & Ostiomeatal Complex \\\\ \\hline\nCFD & Computational Fluid Dynamics \\\\ \\hline\nDICOM & Digital Imaging and Communications in Medicine \\\\ \\hline\nSTL & Stereolithography \\\\ \\hline\nROI & Region of Interest \\\\ \\hline\nNPD & Nozzle Positioning Device \\\\ \\hline\nCU & Current Use (\\textit{spray usage protocol}) \\\\ \\hline\nLoS & Line of Sight (\\textit{spray usage protocol}) \\\\ \\hline\n\\end{tabular}\n\\end{adjustbox}\n\\end{table}\n\n\n\\subsection{Inspiratory airflow and sprayed droplet transport simulations}\n\nLaminar steady-state models work as a reasonable approximation when modeling comfortable resting to moderate breathing\\cite{kelly2000jap, kelly2000, kimbell2019lsm, xi2008ijhmt, shanley2008it}. Furthermore, with our simulations focusing on a single cycle of inspiration, steady-state flow conditions were adopted as a feasible estimate.
Based on the principle of mass conservation (\\emph{continuity}), and assuming that the airflow density stays invariant (\\textit{incompressibility}), we have\n\\begin{equation}\\label{e:continuity}\n\\nabla \\cdot \\mathbf{u} = 0,\n\\end{equation}\nwith $\\mathbf{u}$ representing the velocity field for the inspired air. Conservation of momentum under steady state flow conditions leads to the modified Navier-Stokes equations:~\n\\begin{equation}\\label{e:NS}\n\\rho\\left(\\mathbf{u} \\cdot \\nabla \\right)\\mathbf{u} = -\\nabla p + \\mu {\\nabla}^2 \\mathbf{u}+\\rho\\mathbf{b}.\n\\end{equation}\nHere $\\rho = 1.204$ kg\/m$^3$ represents the density of air, $\\mu = 1.825\\times10^{-5}$ kg\/m.s is air's dynamic viscosity, $p$ is the pressure in the airway, and $\\mathbf{b}$ stands for accelerations induced by different body forces. To simulate the airflow, equations~(\\ref{e:continuity}) and (\\ref{e:NS}) were numerically solved\nthrough a finite volume approach, in the inspiratory direction. The computational scheme on ANSYS Fluent\\textsuperscript{TM} v14.5 employed a segregated solver, with SIMPLEC pressure-velocity coupling and second-order upwind spatial discretization. Solution convergence was obtained by minimizing the flow residuals (viz.~mass continuity$\\,\\sim\\mathcal{O}(10^{-2})$, velocity components$\\,\\sim\\mathcal{O}(10^{-4})$), and through stabilizing the mass flow rate and the static outlet pressure at the nasopharynx of the digital models. A typical simulation convergence run-time with 5000 iterations clocked approximately 10 hours, for 4-processor based parallel computations executed at 4.0 GHz speed. 
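As a quick plausibility check on the laminar-flow assumption, the bulk Reynolds number can be estimated from an inhalation rate and an airway dimension. The flow rate and hydraulic diameter in the example below are illustrative assumptions, not values taken from the subject data; only the air properties match those used in the simulations:

```python
import math

def reynolds_number(Q_Lpm, D_m,
                    rho=1.204,          # air density (kg/m^3), as in the text
                    mu=1.825e-5):       # dynamic viscosity (kg/m.s)
    """Bulk Reynolds number for an airway cross-section, treated as a
    circular duct of hydraulic diameter D_m (metres) carrying a
    volumetric flow Q_Lpm (litres per minute)."""
    Q = Q_Lpm * 1e-3 / 60.0             # L/min -> m^3/s
    A = math.pi * D_m ** 2 / 4.0        # cross-sectional area
    U = Q / A                           # mean velocity
    return rho * U * D_m / mu

# Illustrative: a 15 L/min resting inhalation through a 1 cm duct
# gives Re around 2.1e3, near the low end of transition for duct flow.
```

For resting to moderate breathing this lands at or below the usual transition range, consistent with the laminar treatment adopted here.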
\n\n\n\n\\begin{table}[b]\n\\caption{Parameters for inhalation airflow.}\\label{Table0b}\n\\vspace{-0.35cm}\n\\begin{center}\n\\includegraphics[width=0.75\\textwidth]{Table_1.pdf}\n\\end{center}\n\\vspace{-0.5cm}\n\\end{table}\n\n\nThe numerical solutions implemented the following set of boundary conditions: (1) zero velocity at the airway-tissue interface i.e.~the tissue surface lining the sinonasal airspace (commonly called \\emph{no slip} at the walls), along with ``trap'' boundary conditions for droplets whereby a droplet comes to rest after depositing on the wall; (2) zero pressure at nostril planes, which were the pressure-inlet zones in the simulations, with ``escape'' boundary condition for droplets that allowed outgoing trajectories to leave the airspace through the nostril openings; and (3) a negative pressure at the nasopharyngeal outlet plane, which was a pressure-outlet zone, also with an ``escape'' boundary condition for droplets. The negative nasopharyngeal pressure generated an inhalation airflow rate within $\\pm~5-6\\%$ of the subject-specific measurement of resting breathing, obtained using LifeShirt\\textsuperscript{\\textregistered} vests\\cite{wilhelm2003bm} that tracked chest compression\/expansion during breathing, and accordingly quantified the inhalation rates (see Table~\\ref{Table0b}).\n\nAfter simulating the airflow, sprayed droplet dynamics were tracked through discrete phase particle transport simulations in the ambient airflow, and the corresponding Lagrangian tracking estimated the localized deposition along the airway walls through numerical integration of the following transport equations\\cite{fluent14point5}:\n\\begin{equation}\n\\frac{d \\mathbf{u_d}}{dt} = \\frac{18\\mu}{d^2 \\rho_d }\\frac{C_D Re}{24}(\\mathbf{u}-\\mathbf{u_p})+\\mathbf{g}\\left(1- \\frac{\\rho}{\\rho_d}\\right) + \\mathbf{F_B}.\n\\end{equation}\nThe parameters here are $\\mathbf{u_d}$, representing the droplet velocity; along with $\\mathbf{u}$ as the 
airflow field velocity, $\\rho$ and $\\rho_d$ respectively as the air and droplet densities, $\\mathbf{g}$ as the gravitational acceleration, $\\mathbf{F_B}$ as any other additional body forces per unit droplet mass (for example, the Saffman lift force exerted by a typical flow-shear field on small particulates transverse to the airflow direction), and $18\\mu\\, C_D\\, Re\\,(\\mathbf{u}-\\mathbf{u_d})\/24(d^2 \\rho_d )$ quantifies the drag force contribution per unit droplet mass. Here, $C_D$ is the drag coefficient, $d$ is the droplet diameter, and $Re$ represents the relative Reynolds number.\n\nThe mean time step for droplet tracking was on the order of $10^{-5}$ sec., with the minimum and maximum limits for the adaptive step-size being $\\sim\\mathcal{O}(10^{-10})$ sec.~and $\\sim\\mathcal{O}(10^{-3})$ sec., respectively. Also note that the solution scheme posits the particulate droplets to be large enough to ignore Brownian motion effects on their dynamics. Post-processing of the simulated data laid out the spatial deposition trends, which were then tallied against \\textit{in vitro} observations.\n\n\n\n\\subsection{3D printing and physical experiments}\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=1.0\\textwidth]{3Dprinting_v5_compressed.pdf}\n\\caption{(a) \\textit{In silico} model: CT-based digital reconstruction of subject 1's airway. Panels (b) and (c) respectively show the sagittal and coronal views of the solid 3D-printed replica of the digital model. Note that the solid models comprise a soft outer nose (to mimic the pliability of a real nose) and a posterior hard plastic part. The anterior and posterior 3D-printed components in each model were designed to fit snugly together.
Panels (d) and (e) depict the experimental setup for \\textit{in vitro} measurement of sprayed deposits in anatomic solid models.}\\label{f:3Dprinting}\n\\end{center}\n\\vspace{-0.5cm}\n\\end{figure}\n\nTo assess the reliability of numerically predicted topical deposition vis-à-vis physical experiments, 3D-printed anatomic replicas were generated for subject 1's airway and hence included both NPM1 and NPM2. The posterior parts of the solid models were made from the stereolithography material Watershed\\textsuperscript{\\tiny\\textregistered} (DSM Somos, Elgin, Illinois). Post-digitization, the printing job of the posterior component was sub-contracted to ProtoLabs (Morrisville, North Carolina). Printing of the anterior soft plastic part on a Connex3\\textsuperscript{TM} 3D printer was done by Ola Harrysson's group at North Carolina State University (at the Edward P Fitts Department of Industrial and Systems Engineering), using a polymer inkjetting process on Tangogray FLX950 material. See Figure~\\ref{f:3Dprinting}(a)-(c) for representative pictures of a digitized model and the corresponding 3D replica.\n\n\n\\subsubsection{Recording deposits through gamma scintigraphy:}\nIntra-nasal topical delivery was tracked through \\textit{in vitro} examination of mildly radioactive spray deposits in the 3D-printed anatomic replicas. To ensure that the spray axis orientation and nozzle location aligned with the corresponding simulated spray parameters, we used specially designed nozzle positioning devices (NPD) inserted at the nostril. The spray bottle was fitted into the NPD, while administering the spray via hand-actuation. For each sample test, a bottle of the commercial nasal spray Nasacort\\textsuperscript{TM}~was labeled with a small amount of radioactive Technetium (Tc99m) in saline.
At the time of dispensing the spray shots, a vacuum line controlled by a flow-valve was used to set up inhalation airflow through the model, and the flow rate was commensurate with the subject-specific breathing data (Table~\\ref{Table0b}). Corresponding setup is in Fig.~\\ref{f:3Dprinting}(d)-(e). Four independent replicate runs of each spray experiment were conducted, followed by compilation of the means and standard deviations of the drug deposits along the inner walls of the solid models. The topical deposition was proportional to the radioactive signals emitted from the spray solution traces that deposited inside a solid model and was quantifiable through image-processing of the scintigraphy visuals, collected using a BodyScan (MieAmerica, Forest Hills, IL) 400-mm width by 610-mm height 2D gamma camera. The pixel domain was 256$\\times$256, with an image acquisition time of 3 minutes; and one pixel equated to a Cartesian distance of 2.38 mm in the digital and 3D models.\n\n\n\n\\begin{figure}[t]\n\\vspace{-0.15cm}\n\\begin{center}\n\\includegraphics[width=1.0\\textwidth]{Experiment_Gridlines_v3_compressed.pdf}\n\\caption{Panels (a), (b), and (c) depict the gridline schematic on NPM1 and NPM2, that is used to extract the deposition fractions from the gamma scintigraphy-based quantification of the sprayed deposits in the solid replicas. The models are respectively segregated into 3 sets of compartments: sagittal columns, frontal columns, and sagittal rows. Panel (d) shows the perturbation of the base gridline by 1 pixel. Representative Technetium signals are in panel (e). 
Note:~In regard to the axis system, the circle with solid dot implies out-of-plane direction from this page, the circle with cross signifies into-the-plane of this page.}\\label{f:gridlines}\n\\end{center}\n\\vspace{-0.5cm}\n\\end{figure}\n\n\n\n\\subsubsection{Model segmentation for comparison with numerical data:}\\label{s:gridlines}\nTo facilitate the comparison between the numerical predictions on droplet deposition and the physical observation of gamma scintigraphy signals in the corresponding solid replica, we segregated NPM1 and NPM2 into virtual segments oriented along three different directions. Figure~\\ref{f:gridlines} lays out the Cartesian coordinate directions for the 3D space. X was perpendicular to the sagittal plane traversing from left to right sides of the nasal models (with the model head facing forward), Y was perpendicular to the axial plane traversing from inferior to superior aspects of the models, and Z was perpendicular to the coronal plane traversing from anterior to posterior aspects of the models. The virtual segments were oriented along the XY (coronal), YZ (sagittal), and ZX (axial) planes. Parallel to the XY coronal plane, the models contained 12 segments (named, C$12$ -- C$1$ $\\Rightarrow$ sagittal columns); there were 9 compartments (C$1$ -- C$9$ $\\Rightarrow$ frontal columns) parallel to the YZ sagittal plane, and there were 12 compartments (R$1$ -- R$12$ $\\Rightarrow$ sagittal rows) parallel to the ZX axial plane (see Figure~\\ref{f:gridlines}).\n\nFor each compartment, the particulate deposition fraction predicted from the simulation was compared with the deposition fraction measured based on gamma signals of the deposited particulates in the corresponding compartment of the 3D-printed model. To achieve this, signals emitted from the solution traces, that settled along the airway walls, were subjected to image processing analysis. 
Therein, by superimposing the compartmental grid on the radio-images, the signals were extracted from each compartment. In order to align the grid on the image in a manner consistent with the virtual model, three inset discs were designed as reference points on the outer surface of the virtual and 3D-printed models. Americium sources from commercial in-home smoke detectors were inserted into the insets as reference points on the 3D-model and a radio-image was recorded. For the analysis, the scintigraphy images were processed using ImageJ\\cite{schneider2012nature} by constructing a region of interest (ROI) referenced to the fixed Americium sources. Care was taken to align the emitted visual signals with similar reference regions within the superimposed grid. This was done via manual visualization to achieve a best fit of signal intensity within reference regions. The grid compartment planes positioned using this visual best-fit technique were designated as ``reference planes''. Given the nature of the radioactive signals and the resolution of the radio-image, some signal intensity resided outside of reference regions even while using best-fit practices. A reasonable fit could be obtained by shifting the image by one pixel in either direction (positive shift \/ negative shift). In order to account for this variation, alternative plane positions (see Figure~\\ref{f:gridlines}(d)) were created by shifting the reference planes one pixel along the positive and negative axes for each set of Cartesian planes. These three sets of compartment planes were positioned in the \\textit{in silico} modeling software using the measured distances from the reference regions. The corresponding Cartesian coordinates of these planes were used to assign droplet deposition locations from the computational simulations to grid compartments, for comparison with the \\textit{in vitro} model. 
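The compartment-assignment step described above (using the Cartesian coordinates of the grid planes to bin simulated droplet deposition locations into compartments) can be sketched as follows. This is a minimal illustration with hypothetical plane positions and randomly generated deposition points, not the actual model geometry; only the 2.38 mm-per-pixel scintigraphy calibration is taken from the text.

```python
import numpy as np

PIXEL_MM = 2.38  # scintigraphy calibration: 1 pixel = 2.38 mm (from the text)

def deposition_fractions(points, plane_coords, axis):
    """Fraction of deposited droplets landing in each compartment.

    points       : (N, 3) array of droplet deposition locations (mm)
    plane_coords : sorted 1-D array of interior compartment-plane
                   positions (mm) along the chosen axis
    axis         : 0 (X), 1 (Y), or 2 (Z), normal to the plane set
    """
    # digitize bins each coordinate between consecutive plane positions
    bins = np.digitize(points[:, axis], plane_coords)
    counts = np.bincount(bins, minlength=len(plane_coords) + 1)
    return counts / len(points)

# Hypothetical example: 12 compartments along Z (anterior to posterior),
# separated by 11 equally spaced interior planes.
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 60.0, size=(1000, 3))   # fake deposition points (mm)
planes = np.arange(5.0, 60.0, 5.0)             # 11 interior planes -> 12 bins
frac_ref = deposition_fractions(pts, planes, axis=2)

# Sensitivity check: shift all planes by +/- 1 pixel, mirroring the
# perturbed gridline positions used in the image analysis.
frac_plus = deposition_fractions(pts, planes + PIXEL_MM, axis=2)
frac_minus = deposition_fractions(pts, planes - PIXEL_MM, axis=2)
```

Comparing `frac_ref` against `frac_plus` and `frac_minus` gives the one-pixel variability band used when superimposing the grid on the numerical data-space.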
In these comparisons, we left out the deposits in the anterior nose (from the CFD data as well as the physical recordings) in order to negate the bright radiation signal coming from that zone in the experimental deposits; and focused only on measurements from the posterior parts of the respective models. Note that the anterior nose in an \\textit{in silico} model is in fact the removable soft pliable anterior part in the corresponding 3D print (e.g.~see Figure~\\ref{f:3Dprinting}).\n\n\n\\begin{figure}[t]\n\\vspace{-0.15cm}\n\\begin{center}\n\\includegraphics[width=1.0\\textwidth]{ParticleSites_v3_compressed.pdf}\n\\caption{Comparison of representative trajectories for a 5$\\mu$ droplet and a 25$\\mu$ droplet in a sample sinonasal airspace. In panel (a), the smaller droplet has weaker inertial momentum and the ambient airflow streamline takes over its motion much earlier than that in case of a heavier droplet like the one in panel (b), where the inertial momentum of the 25$\\mu$ droplet persists longer. The small red circle in (a) depicts the point where the inertial momentum gets overwhelmed by the fluid streamline. Evidently, owing to smaller inertia, the droplets with smaller diameters get predominated by the airflow streamlines earlier than the bigger droplets. This results in a better penetration and spread of sprayed droplets in the nasal airspace, as shown in panel (c), for a different nasal model. On the contrary, spray shots with exclusive share of bigger droplets (e.g.~$\\ge$ 100$\\mu$ here) tend to follow their initial inertial trajectories, without much effect of the airflow streamlines on their paths, and deposit along the anterior walls of the nasal airspace, as depicted in panel (d). The red boundaries in panels (c) and (d) highlight the difference in particulate penetration into the model, in the two cases. 
Note:~These images were created using FieldView\\textsuperscript{TM}, as provided by Intelligent Light through its University Partners Program.}\\label{f:inertial}\n\\end{center}\n\\vspace{-0.5cm}\n\\end{figure}\n\n\\begin{figure}[b]\n\\vspace{-0.15cm}\n\\begin{center}\n\\includegraphics[width=0.8\\textwidth]{CU_v3.pdf}\n\\caption{(a) Sample pictorial usage instructions, available for over-the-counter nasal spray Flonase\\textsuperscript{TM}; use of the graphic is subject to copyrights\\cite{flonase2013}. Panel (b) and inset (c) depict the protocol implemented in the numerical simulations for the ``Current Use'' (CU) spray orientation. Note that $\\delta$ is the linear distance between lateral wall and septum (the cartilaginous ``mid-wall'' in the nose, separating right and left airways) at 5-mm insertion depth into the nose. The model ``head'' is tilted forward by 22.5$^{\\circ}$. The vertically upright dashed line represents the spray nozzle axis.}\\label{f:cu}\n\\end{center}\n\\vspace{-0.5cm}\n\\end{figure}\n\n\\begin{figure}[t]\n\\vspace{-0.15cm}\n\\begin{center}\n\\includegraphics[width=1\\textwidth]{LoS_v3_compressed.pdf}\n\\caption{Panels (a) and (b) show the locations of the main target sites in a representative sinonasal reconstruction, i.e.~the OMC (acting as the mucociliary drainage pathway for the sinuses) and the sinus cavities. Panels (c)-(e) demonstrate the ``Line of Sight'' (LoS; represented by the black lines) in NPM1. The anatomic zone, colored red, marks the OMC. Note that panel (d) is the 3D-printed soft nose from NPM1, exhibiting the same approximate orientation as that of the digital model in panel (c), giving a direct straight-line access to the target sites, and hence an LoS. 
The blue component in the image on panel (d) indicates the approximate location of the OMC.}\\label{f:los}\n\\end{center}\n\\vspace{-0.5cm}\n\\end{figure}\n\n\n\n\\subsection{Identification of target site and spray parameters}\label{s:StudyDesign}\n\n\\subsubsection{Effect of airflow on droplet trajectories:}\label{s:FlowPhysics}\nThe inertia of a droplet is linearly proportional to its mass, and hence scales with the cube of the droplet diameter. Consequently, for bigger droplets, the inertial motion persists longer before being taken over by the ambient airflow. Figure~\\ref{f:inertial}(a) tracks the trajectory of a representative 5$\\mu$ droplet. There, the small red circle marks the location where the inertial motion of the droplet got overwhelmed by the ambient flow, beyond which the droplet trajectory was the same as the airflow streamline on which it was embedded at the red circle's location. Note the contrasting 25$\\mu$ droplet trajectory in Figure~\\ref{f:inertial}(b), where the inertial motion persisted longer. The phenomenon has a significant impact on drug deposition trends. The bigger droplets ($\\ge$100$\\mu$) show a greater propensity to hit the anterior walls directly owing to their high initial momentum, while smaller droplets penetrate further into the airspace; see e.g.~Figure~\\ref{f:inertial}(c)-(d). To ensure that the bigger droplets also reach the target sites, we argue that it is advantageous to harness their inertial motion and direct those droplets actively toward the target as they exit the spray nozzle. This can be feasibly achieved by orienting the spray axis to pass directly through an intended anatomic target zone.\n\n\n\n\\subsubsection{Current use instructions:}\nInconsistency and ambiguity in instructions\\cite{benninger2004ohns, kundoor2011pr} indicate a lack of definitive knowledge on the best ways to use a nasal spray device.
Different commercial sprayers often offer somewhat contrasting recommendations. However, there is a common agreement (see Figure~\\ref{f:cu}(a)) that the patient should incline her\/his head slightly forward, while keeping the spray bottle upright\\cite{flonase2013, benninger2004ohns}. Furthermore, there is a clinical recommendation to avoid pointing the spray directly at the \\textit{septum} (the separating cartilaginous wall between the two sides of the nose). These suggestions were adopted in our standardization\\cite{kimbell2018rdd} of the ``Current Use'' (CU) protocol for topical sprays. The digital models were inclined forward by an angle of 22.5$^{\\circ}$, and the vertically upright\\cite{benninger2004ohns} spray axis was closer to the lateral nasal wall, at one-third of the distance between the lateral side and the septal wall. Also, the spray bottle was placed so that it penetrated into the airspace by a distance of 5 mm, inspired by the package recommendations of commercial sprayers\\cite{flonase2013} for a ``shallow'' insertion into the nose. Figure~\\ref{f:cu}(b) lays out the schematics of the CU protocol used in this study.\n\n\n\n\\subsubsection{Target site identification and proposing an alternate spray use criterion:}\nAll sinuses, except the sphenoid, drain into the ostiomeatal complex (OMC), which is the main mucociliary drainage pathway and airflow exchange corridor between the nasal airway and the adjoining sinus cavities. To ensure that as many drug particulates as possible reach the sinus chambers and their vicinity, we hypothesize that the spray axis should be directed straight toward the OMC. This is supported by our observation of the effect of airflow physics on droplet trajectories (see discussion in Section~\\ref{s:FlowPhysics}). If the spray axis hits the OMC directly, the likelihood that the larger droplets will deposit there is higher. We refer to this usage protocol as ``Line of Sight'' (LoS).
Like the CU protocol, the LoS protocol also had the sprayer inserted at a depth of 5-mm into the nasal airspace. Representative LoS orientation is shown in Figure~\\ref{f:los}.\n\nTSPD percentage at the OMC and the sinuses was evaluated as $= 100 \\times \\left(M_{\\textrm{target}}\/M_\\textrm{spray}\\right)$;\nwith $M_{\\textrm{target}}$ being the spray mass of the particulate droplets deposited at the OMC and inside the sinus cavities, and $M_\\textrm{spray}$ being the mass of one spray shot. \n\n\n\n\n\\subsubsection{Generation of varying peripheral directions around the true CU and LoS directions:}\nTo establish the robustness of the TSPD predictions for the CU and LoS protocols, we also tracked droplet transport and deposition when the spray directions were slightly perturbed. Such perturbed peripheral directions for CU initiated 1 mm away on the nostril plane and were parallel to the CU's vertically upright true direction. For LoS, the perturbed peripheral directions were obtained by connecting the base of the true LoS direction on the nostril plane with points that radially lie 1 mm away from a point on the LoS; this specific point being 10 mm away along the LoS from the base of the LoS direction on the nostril plane (e.g.~see bottom panel of Figure~\\ref{fig:CUvsLOS} for an illustrative example).\n\n\n\\subsubsection{Parameters for the simulated spray shot:}\nOver-the-counter Nasacort\\textsuperscript{TM}~(Triamcinolone Acetonide), a commonly prescribed and commercially available nasal spray, was selected for this study. Four units of Nasacort\\textsuperscript{TM}~were tested at Next Breath, LLC (Baltimore, MD, USA) to evaluate the \\textit{in vitro} spray performance. Corresponding plume geometry was analysed through a SprayVIEW\\textsuperscript{\\textregistered} NOSP, which is a non-impaction laser sheet-based instrument. 
Averaged spray half-cone angle was estimated at 27.93$^\\circ$, and the droplet sizes in a spray shot followed a log-normal distribution. With the droplet diameter as $x$, the droplet size distribution can be framed as a probability density function of the form\\cite{cheng2001jam}:\n\\begin{equation}\nf(x) = \\frac{1}{\\sqrt{2\\pi}x \\ln \\sigma_{g}} \\exp \\left[ -\\frac{(\\ln x - \\ln x_{50})^2}{2 (\\ln \\sigma_g)^2} \\right].\n\\end{equation}\nHere, $x_{50} = 43.81\\mu$ is the mass median diameter (alternatively, the geometric mean diameter \\cite{finlay2001book}) and $\\sigma_g = 1.994$ is the geometric standard deviation. The latter quantifies the span of the droplet size data. Measurements were also made with and without the saline additive in the sprayer, and the tests returned similar droplet size distribution. Note that a saline additive was used during the physical recording of the sprayed deposits. The mean spray exit velocity from the nozzle was 18.5 m\/s, based on phase doppler anemometry-based measurements\\cite{liu2011}. \n\nWhile simulating the droplet trajectories, we assumed typical solid-cone injections and tracked the transport for 1-mg spray shot while comparing the TSPD trends from the CFD predictions with the corresponding experimental drug delivery patterns.\nOn the other hand, 95.0306 mg (which is one shot of Nasacort\\textsuperscript{TM}, as quantified by Next Breath, LLC) of spray mass transport was simulated while comparing the CFD-based TSPD numbers for the LoS and CU protocols in each model. \n\n\n\n\n\n\\begin{table}[t]\n\\caption{Numerical prediction of targeted drug delivery from CU and LoS protocols. The LoS TSPD values that are significantly higher than the corresponding CU TSPD are marked by `$^*$'. 
\\textbf{Symbols:}~$\\mathbf{\\sigma} = $ standard deviation, $\\mathbf{\\mu} =$ mean.}\\label{Table1}\n\\vspace{-0.35cm}\n\\begin{center}\n\\includegraphics[width=1.0\\textwidth]{Table_2.pdf}\n\\end{center}\n\\vspace{-0.5cm}\n\\end{table}\n\n\n\n\n\\section{Results}\n\n\n\n\n\\subsection{Comparison between CU and LoS spray usage protocols}\n\nLoS was found to be consistently superior to the CU spray placement protocol for targeting the OMC and the sinus cavities for drug delivery. Table~\\ref{Table1} lists the deposition fraction percentages for each spray release condition in the five airway models (NPM1 -- NPM5). For a graphical interpretation, we have plotted the same information in Figure~\\ref{fig:CUvsLOS}. Overall, the deposition fraction for the LoS was on average 8.0-fold higher than the CU deposition fraction, with the corresponding subject-specific improvement range being 1.8 -- 15.8-fold for the five test models. The improvement does decay when the perturbed peripheral spray directions are compared, to assess the robustness of the LoS protocol's advantage over CU. Considering the varying peripheral directions around the true LoS and CU, the LoS set registered an average 3.0-fold increase in TSPD, with the corresponding subject-specific improvement range being 1.6 -- 4.3-fold.\n\n\n\\begin{figure}[t]\n\\vspace{-0.15cm}\n\\begin{center}\n\\includegraphics[width=1.0\\textwidth]{CUvsLOS_v9_Continuation_Proposal.pdf}\n\\caption{Comparison of the simulated spray deposits from the CU and LoS protocols. The yellow bars represent the TSPD for the CU spray orientations, and the blue bars quantify the TSPD recorded for the LoS spray orientations. The gray bars are the predicted deposits when the true CU and LoS directions were perturbed by 1 mm.
Panels (a)--(e) are the results for five different airway models: Nasal Passage Model 1 (NPM1), Nasal Passage Model 2 (NPM2), Nasal Passage Model 3 (NPM3), Nasal Passage Model 4 (NPM4), and Nasal Passage Model 5 (NPM5). Panel (f) compares the TSPD for peripheral directions in a 0.5-mm perturbation (on the left) with respect to a 1-mm perturbation (on the right) from the true LoS orientation, both in NPM1. As expected from the overall findings, the TSPD increased for the perturbed spray directions that were closer to the true LoS. Panel (g) depicts the spatial perturbation parameters for the LoS spray axis orientation in NPM1.}\\label{fig:CUvsLOS}\n\\end{center}\n\\vspace{-0.5cm}\n\\end{figure}\n\n\n\n\\subsubsection{Statistical tests -- on improvements achieved by the revised spray use strategy:}\nLoS was compared to CU through a paired study design on the data from five test models. Table~\\ref{Table2} lays out the computed numbers. For each model, the outcome comprised the percentage of deposition in the OMC and the sinuses for both CU and LoS spray usage. The null hypothesis for this statistical test was that the TSPD would be the same for CU and LoS in an airway model. The deposition percentages corresponding to the CU and LoS protocols in the same nostril were treated as paired observations for a paired t-test to check the null hypothesis. Owing to a relatively small study cohort, a paired Wilcoxon signed-rank test was also used as a robustness check. In order to study how spatial variation might affect the difference between CU and LoS, three different ways of calculating the percentage of deposition were implemented. The first strategy considered the average deposition from the true LoS and CU directions. The second strategy compared the TSPD averaged from the true CU and LoS directions, along with the deposition data for spray release parameters obtained by perturbing the respective true directions.
The third strategy used TSPD averaged exclusively from the deposition data corresponding to the perturbed spray release parameters. This allowed us to assess the robustness of any probable improvement from using LoS, while still accounting for slight spatial variations of the spray direction.\n\n\\begin{table}[b]\n\\caption{Statistical tests for the comparison between CU and LoS protocols.}\\label{Table2}\n\\vspace{-0.35cm}\n\\begin{center}\n\\includegraphics[width=1.0\\textwidth]{Table_3.pdf}\n\\end{center}\n\\vspace{-0.5cm}\n\\end{table}\n\n\nThe first comparison method demonstrates an average deposition increase of 5.4 percentage points for LoS (6.39\\% for LoS vis-\\`{a}-vis 0.98\\% for CU). This difference is significant at the 0.05 level, with a p-value from the paired t-test of 0.03. The paired Wilcoxon signed-rank test has a p-value of 0.06, which is the lowest possible p-value for the Wilcoxon signed-rank test given only five pairs of data. In the second comparison scheme, LoS has an increased deposition of 1.62 percentage points relative to CU (2.49\\% vis-\\`{a}-vis 0.87\\%). The p-value for this difference is 0.02 using the paired t-test and 0.06 using the Wilcoxon signed-rank test. Finally, for the third comparison method, LoS registered an increased deposition of 1.05 percentage points relative to CU (1.90\\% vis-\\`{a}-vis 0.86\\%). The p-value for this difference is 0.02 using the paired t-test and 0.06 using the Wilcoxon signed-rank test. This provides strong evidence that LoS leads to a higher percentage of deposition in the OMC and sinuses. The estimated difference is largest when using just the true directions, but the difference is still statistically significant even when using the spray release points obtained by perturbing the true directions. The p-value from the paired t-test is actually lower when the TSPD from just the perturbed points is considered, owing to the reduced variance for the estimated difference.
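The paired comparisons described in this section can be reproduced with standard statistical routines. The sketch below uses made-up TSPD percentages for five hypothetical models (not the study's data) purely to illustrate the mechanics of the paired t-test and the Wilcoxon signed-rank test:

```python
from scipy import stats

# Hypothetical TSPD percentages in five airway models (illustrative only;
# these are NOT the study's numbers)
cu  = [0.5, 1.2, 0.9, 1.1, 1.2]   # "Current Use" protocol
los = [4.0, 9.5, 2.1, 7.8, 8.6]   # "Line of Sight" protocol

# Paired t-test on the per-model (CU, LoS) observations
t_stat, p_t = stats.ttest_rel(los, cu)

# Paired Wilcoxon signed-rank test as the nonparametric robustness check
w_stat, p_w = stats.wilcoxon(los, cu)

# With n = 5 pairs and all differences of one sign, the exact two-sided
# Wilcoxon p-value is 2/2^5 = 0.0625, the smallest attainable value for
# five pairs -- consistent with the 0.06 reported in the text.
print(f"paired t-test p = {p_t:.3f}, Wilcoxon p = {p_w:.4f}")
```

This also makes explicit why the Wilcoxon p-value is pinned at 0.06 regardless of effect size: with five pairs the exact test cannot go lower.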
For all three ways of estimating the percentage of deposition, the paired Wilcoxon signed-rank test returns a p-value of 0.06. With only five pairs of data, this suggests that the use of LoS results in significantly higher deposition for all five nostril models.\n\n\n\n\\subsection{Comparison of the simulated TSPD predictions with physical experiments}\nFigure~\\ref{fig:Comparison} compares the numerical TSPD predictions with corresponding gamma scintigraphy-based experimental recordings in NPM1 and NPM2. While the compartmental deposits visibly presented a congruous trend in the sagittal columns, sagittal rows, and frontal columns, we conducted additional statistical tests to verify the homogeneity between the two sets of data, so as to establish the reliability of the computational findings.\n\n\n\\begin{figure}[t]\n\\vspace{-0.15cm}\n\\begin{center}\n\\includegraphics[width=1.0\\textwidth]{Experimental_Comparison_SD05PRE_v3.pdf}\n\\vspace{-0.25cm}\n\\caption{(a) Comparison of the numerically simulated compartmental findings in Nasal Passage Model 1, with respect to the gamma scintigraphy recordings from the corresponding 3D-printed replica. (b) Comparison of the numerically simulated compartmental findings in Nasal Passage Model 2, with respect to the gamma scintigraphy recordings from the corresponding 3D-printed replica. The blue ``reference'' lines trace the CFD predictions for TSPD in each compartment, with the light gray and dark gray lines respectively marking the variability in prediction, for +\/- 1 pixel shift while superimposing the gridlines on the numerical data-space.
The yellow lines trace the TSPD recorded from the physical experiments.\n}\\label{fig:Comparison}\n\\end{center}\n\\vspace{-0.6cm}\n\\end{figure}\n\n\n\\begin{table}[b]\n\\caption{Comparison between the compartmental data from numerical simulations and physical experiments.}\\label{Table3}\n\\vspace{-0.35cm}\n\\begin{center}\n\\includegraphics[width=1.0\\textwidth]{Table_4.pdf}\n\\end{center}\n\\vspace{-0.5cm}\n\\end{table}\n\n\n\nTable~\\ref{Table3} gives the Pearson and Kendall's correlations between the numerical and experimental models for the average deposition fractions in NPM1 and NPM2 for the LoS protocol. The confidence intervals are based on 1000 bootstrap samples, instead of asymptotic approximations, because of the relatively small sample size. Based on the output, we can see that the Pearson correlation is consistently very high, while Kendall's correlation is somewhat lower. However, although Kendall's correlation is frequently thought to be more robust to outliers, particularly for small sample sizes like this data-set, in this particular instance the Pearson correlation is likely more illustrative. This is because the Pearson correlation is able to show that, for the most part, the magnitudes of the estimates are similar and comparable between the numerical and experimental models. In general, there is a strong linear relationship between the percentage of deposition predicted from the numerical model and the corresponding physical measurements in the experimental model. The lower Kendall's correlation (overall mean measure 0.78) is largely due to regions where both the numerical and experimental models had very low average deposition but the exact rank of these regions changed considerably between the two data-sets. Note that this does not necessarily indicate a poorly performing numerical model.
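The bootstrap confidence intervals referenced above can be generated along these lines. The per-compartment deposition fractions below are hypothetical placeholders, and the percentile-bootstrap form is an assumption (the text does not specify which bootstrap variant was used):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical per-compartment deposition fractions (placeholders only)
numerical    = np.array([0.02, 0.10, 0.25, 0.18, 0.08, 0.05, 0.01, 0.31])
experimental = np.array([0.03, 0.12, 0.22, 0.20, 0.06, 0.04, 0.02, 0.31])

def bootstrap_ci(x, y, corr, n_boot=1000, alpha=0.05):
    """Percentile-bootstrap confidence interval for a correlation statistic."""
    n = len(x)
    reps = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)           # resample compartments
        reps.append(corr(x[idx], y[idx])[0])  # [0] -> the correlation value
    lo, hi = np.percentile(reps, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# 1000 bootstrap samples, matching the resampling count stated in the text
pearson_lo, pearson_hi = bootstrap_ci(numerical, experimental, stats.pearsonr)
kendall_lo, kendall_hi = bootstrap_ci(numerical, experimental, stats.kendalltau)
```

Resampling compartments with replacement avoids the normality assumptions behind asymptotic intervals, which is the stated motivation for the bootstrap given the small number of compartments.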
However, the relatively high Pearson correlation (overall mean measure 0.91) does indicate that the numerical models perform well while predicting the sprayed droplet transport.\\\\\n\n\n\n\\section{Discussion}\nCFD-guided nasal spray usage defined by the LoS protocol was found to significantly enhance topical drug delivery at targeted sinonasal sites, when compared to currently used spray administration techniques. With an increased sample size, this work can be the catalyst for personalized instructions and specifications for improved use of topical sprays. The findings, thus, have the potential to substantially upgrade the treatment paradigm for sinonasal ailments through the ability to ascertain LoS in individual subjects via endoscopic examinations conducted in the clinic, and to help guide treatment decision-making and patient instructions for spray usage.\n\n\\begin{table}[b]\n\\caption{Comparison of the LoS scores, obtained observationally and through determining the surface area projection of the targeted OMC on the nostril plane.}\\label{Table4}\n\\vspace{-0.35cm}\n\\begin{center}\n\\includegraphics[width=1.0\\textwidth]{Table_5.pdf}\n\\end{center}\n\\vspace{-0.5cm}\n\\end{table}\n\n\n\\subsection{Concept of LoS scoring and the adaptability of our findings in clinical practice}\nAs a means of quantifying the suitability of a person's airway for the LoS spray protocol, we propose an exploratory scoring system that is based on how much of the targeted drug delivery sites (OMC, sinuses) are visible when inspected clinically from outside of the nostril. The scoring system will also serve to quantify nasal anatomic variability among individuals. Accordingly, as part of the current study, the LoS scores (see Table~\\ref{Table4}) were first determined observationally, based on the external visibility of the OMC site in the \\textit{in silico} sinonasal reconstructions.
We fixed a range of scores $\\in [\\,1, 4\\,]$, with 4 being used when the LoS direction was easiest to ascertain. Subjective as that scoring procedure may be, it is similar to what attending physicians will gauge during a clinic visit to determine if a particular patient has a ``line of sight'' in her\/his nasal anatomy. So, to establish the relevance of the findings from this manuscript toward revisions of the therapeutic protocol for sinonasal care, it is important to assess the comparability of the observational LoS scores with more objective score determination techniques. \nThis was achieved by calculating the surface area of the nostril plane and the projected area of the OMC on the plane of the nostril. We computed the ratio of the projected area to the nostril area, as a percentage. Scores of 4 were assigned if the ratio exceeded 6\\%, 3 if the ratio exceeded 4\\%, 2 if the ratio was more than 1.5\\%, and 1 if the ratio was greater than 0\\%. The two scoring techniques yielded very similar results (see Table~\\ref{Table4}), with the highest and lowest scores respectively going to the same anatomic models. The rank correlation for the two sets of scores was 0.85. While a broader study, involving clinical trials, will be necessary to revise the therapeutic protocol for nasal drug delivery, the present results illustrate the easy adaptability of our findings into clinical practice settings.\n\n\n\n\\subsection{On the comparability of the experimental data with the numerical findings}\nThe computational simulations assumed a laminar framework to mimic steady breathing. However, one may argue that even with resting breathing rates, the airflow often contains transitional features like vortices, emerging from the roll-up of shearing fluid layers during flow-structure interactions\\cite{stremler2014fdr, basu2017jfm2} at the anatomic bends.
Some of these nuances are, in fact, difficult to model without proper turbulence simulations\\cite{zhao2014ifar, calmet2019plos}. However, true as that may be, the effect of these flow artifacts on eventual drug delivery in the sinuses has been found to be somewhat nominal while comparing laminar and turbulence simulation results\\cite{basu2017num}. \n\nOn the other hand, the \\textit{in vitro} techniques also often pose challenges. For instance, there can be post-deposition run-off as the deposited solution traces undergo translocation along the inner walls of the solid replica. Such drip-off dynamics can lead to a flawed estimate of regional deposition. \n\nIn the gamma scintigraphy-based method of recording deposits, the radiation signal undergoes some level of scattering and hence in the process of signal extraction from each of the compartments, there is the possibility that signals from one compartment may contaminate the signals at neighboring compartments. To minimize this effect while carrying out the comparisons, the nose (the soft plastic anterior part in the 3D-printed models), which had a bright radiation signal owing to the relatively large amount of anterior deposits, was excluded from both the experimental and numerical data.\n\nFinally, while the inhalation airflow rates were same \\textit{in vitro} and \\textit{in silico}, the airflow partitioning on the two sides of the nasal airways was likely affected by the placement of the NPD, while administering the spray through hand-actuation.\n\n\n\\subsection{Caveats and future implications}\nReaders should note that this was a computational study with validation from spray transport observations in inanimate solid replicas. Also, not every patient will have a clear access to the OMC, and hence may be \\textit{without} an LoS. For instance, in the current study, of the six airway sides in the three study subjects, subject 2's right-side airway did not exhibit an LoS. 
\n\nThis study, its restricted sample size and limitations notwithstanding, is, to the best of our knowledge, the first of its kind to propose an alternative \\textit{easy-to-implement} strategy that can significantly improve the intra-nasal delivery of topical drugs at the diseased sites. The recommendation for using the ``line of sight'' is user-friendly, personalized (the physician can instruct the patient on the spray usage technique based on a fast LoS check in the clinic), and has the potential to be smoothly incorporated into the nasal standard-of-care. For probable revisions to the clinical regimen, we will need a broader study with more subjects, along with a component for clinical trials to track patient response. Comparison of the numerical data with \\textit{in vivo} spray performance will also eliminate errors that contaminate the \\textit{in vitro} TSPD numbers (e.g.~from drip-off of the deposited solution along the inner wall contours of the 3D-printed models). Nevertheless, from a broader perspective, the current study demonstrates how relatively simple engineering analysis and mechanistic tools can usher in transformative changes in the prognosis and treatment protocol for common ailments like nasal congestion. \n\n\\noindent\\hrulefill\n\n\n\n\\vspace{-0.25cm}\n\n\\section*{Acknowledgements}\nThe authors sincerely thank Dr.~John S Rhee, MD, MPH (at the Department of Otolaryngology, Medical College of Wisconsin) for numerous fruitful discussions. Thanks are also due to Dr.~Julie Suman (Next Breath, LLC) for the experimental measurement of nasal spray parameters.
The authors additionally acknowledge: (a) Christopher Jadelis (at UNC Chapel Hill) for his assistance on the experimental setup; (b) several past\/present UNC rhinology residents and fellows (Drs.~Andrew Coniglio, Satyan Sreenath, Kibwei McKinney, Gita Madan, Parth Shah, and Stan McClurg) for their inputs; and (c) Dr.~Ola Harrysson's group at NC State University (at the Edward P Fitts Department of Industrial and Systems Engineering), Matthew White (at NCSU), and Dr.~Tim Horn (Director of Research, Center for Additive Manufacturing and Logistics at NCSU) for help on 3D printing. Finally, thanks are also due to Alison Turner and Carolyn Hamby (both at UNC School of Medicine) for their assistance in patient recruitment scheduling.\n\nPreliminary results pertaining to this work have featured at the American Physical Society (APS) -- Division of Fluid Dynamics Annual Meetings \\cite{basu2018aps, basu2017aps} and at the International Society for Aerosols in Medicine (ISAM) Congress \\cite{basu2019isam, farzal2019isam}.\n\nThe project was supported by:~(a)~the National Heart, Lung, and Blood Institute (NHLBI) of the National Institutes of Health (NIH), under award number R01HL122154 (PI:~JSK); (b)~the National Center for Advancing Translational Sciences at NIH, through award number KL2TR002490 (PI:~AJK); and (c)~SB's faculty start-up funds at the Department of Mechanical Engineering at South Dakota State University. 
Content of this study is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.\\\\\n\n\\footnotesize\n\\noindent \\textbf{Contributions:}~SB, GJMG, DOFI, BAS, AMZ, CSE, WDB, JPF, and JSK conceived this study; JSK led the patient recruitment with JW, ZF, MM, SB lending assistance; JSK, SB, ZF, MM developed the digital reconstructions; SB, JSK, OF, ZF ran the numerical simulations; LTH, JW, AB, WDB carried out the physical experiments; SB, KK, GJMG, DOFI, JSK post-processed the numerical and experimental data; JPF, BL, SB ran the statistical tests; BAS, AMZ, CSE, AJK, BDT, ZF, MM facilitated patient recruitment and provided clinical inputs; SB drafted the manuscript. Note that BAS, AMZ, CSE, AJK, BDT are attending physicians at the Division of Rhinology at UNC School of Medicine.\\\\\n\\normalsize\n\n\\noindent\\textbf{Note: This is the pre-peer review version of the manuscript.}\\\\\n\n\\noindent\\hrulefill\n\n\n\\noindent\\textbf{References}\\\\\n\n\\bibliographystyle{unsrt}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nVehicular network (V2X) applications are characterized by a huge number of users, dynamic nature, and diverse Quality of Service (QoS) requirements \\cite{masmoudi2019survey}. They are also computation-intensive: e.g., in self-driving, semantic segmentation trains and infers from large neural networks \\cite{hofmarcher2019visual}, and motion planning solves non-convex optimization problems in real time \\cite{claussmann2019review,badue2020self}. These applications currently reside in the vehicle's onboard units (OBU) for short latency and low communication overhead. 
Even with companies such as NVidia developing OBUs with high computation power \\cite{oh2019hardware}, post-production OBU upgrades for higher on-board computation power are typically not commercially viable; and irrespective of local OBU power, the ability to offload tasks to edge\/cloud via multi-access edge computing (MEC) devices increases flexibility, protecting vehicles against IT obsolescence. Hence, offloading is a key technique for future V2X scenarios \\cite{europe6g,you2021towards,5gaausecase1,5gaausecase2}. \n\nCurrently, computation offloading decisions are strictly separated between user side and operating side \\cite{mach2017mobile}. Vehicles act as users and decide what to offload to optimize an individual goal, e.g., latency \\cite{baidya2020vehicular} or energy efficiency \\cite{loukas2017computation}. Apart from expressing their preference through a predefined, static and universal QoS matrix \\cite{masdari2021qos}, users cannot influence how their tasks are prioritized. The operating side centrally prioritizes tasks and allocates resources to optimize a system goal that is based on the QoS matrix; but this goal is not always the same as the users' goals, e.g., task number maximization \\cite{choo2018optimal} or load balancing \\cite{vondra2014qos}. \n\nThis separation between system and user goals poses problems for both user and operating side, especially in the V2X context. V2X users have private goals \\cite{shivshankar2014evolutionary}, are highly autonomous \\cite{martinez2010assessing}, reluctant to share information or cooperate, and disobedient to a central planner \\cite{feigenbaum2007distributed}. They want flexible task prioritization and influence resource allocation without sharing private information \\cite{li2019learning}. 
On the operating side, an edge cloud computing architecture introduces signaling overhead and information delay in updating site utilization \\cite{mach2017mobile}; coupled with growing user autonomy and service customization, traditional centralized optimization methods for resource allocation become challenging due to unavailability of real-time information and computational intractability.\n\nWe, hence, need an interaction mechanism between user and operating side based on incentives, not rules, and an algorithm that makes decentralized decisions with partial and delayed information in a dynamic environment. There are several challenges with such a mechanism. Users may game the system, resulting in potentially worse overall and individual outcomes \\cite{oh2008few}---the first challenge \\textbf{C1} is therefore how to incentivize user behavior such that users willingly align their private goals to the system goal while preserving their autonomy. The second challenge \\textbf{C2} is finding an algorithm that efficiently learns from partial information with just enough incentive signals, keeping information sharing at a minimum.\n\nThere are different types of learning algorithms for decentralized decision-making \\cite{bowling2002multiagent,weinberg2004best,chang2007no}. However, they face the challenge \\textbf{C3} to trade off optimality and convergence while keeping computation and communication complexity tractable \\cite{feigenbaum2007distributed}. Moreover, in the cases where decisions have long-term effects that are only apparent after a variable delay and where short-term rewards conflict with long-term goals, we need a learning algorithm that connects current action to rewards in the distant future. The challenge \\textbf{C4} is to learn towards long-term goals with delayed and sparse reward signals.\n\nWe propose a decentralized decision-making mechanism based on \\emph{second-price sealed-bid auction} that successfully addresses these challenges. 
\n\n\\begin{itemize}\n\\item \\textbf{C1}: A bidder has no knowledge of other bidders' bidding prices and it only receives bidding outcome and final price (i.e.\\ payment) as feedback signal---this befits our requirement to limit information sharing. Our mechanism also utilizes the feedback signal to incentivize cooperative behavior and speed up learning. \n\\item \\textbf{C2}: For the dynamic case, we use a multi-agent reinforcement learning (MARL) algorithm, for its ability to learn with partial, noisy and delayed information, and a single reward signal.\n\\item \\textbf{C3}: The RL algorithm learns the best-response strategy updated in a fictitious self play (FSP). FSP addresses strategic users' adaptiveness in a dynamic environment by evaluating state information incrementally and by keeping a weighted historical record \\cite{heinrich2015fictitious}; it is easier to implement than other methods such as \\cite{bowling2002multiagent}, especially with a large state and action space. \n\\item \\textbf{C4}: Furthermore, we use a curiosity learning model to encourage learning with sparse reward signals and a credit assignment model that attributes a delayed reward to historical action sequences.\n\\end{itemize}\n\nAlthough we use the V2X context as an example, we emphasize that our method is not restricted to V2X applications---it can be applied to other applications facing similar challenges. \n\nOur empirical results show that over time, the best-response strategies stabilize and lead to significantly improved individual and overall outcomes. We compare active (learning-capable) and passive (learning-incapable) agents in both synthetic and realistic V2X setups. 
The synthetic setup shows the performance of the generic learning algorithm that is applicable in many distributed resource allocation scenarios: it successfully incentivizes distributed autonomous users to contribute to any existing centralized resource allocation solution by letting the users prioritize their own tasks. In the realistic setup, V2X-specific factors such as varying vehicle arrival rate and speed, distance to the MEC and communication delay, as well as tasks based on self-driving applications are considered. Our algorithm demonstrates the capability to generalize to very different, previously unseen environments without the need for retraining. Each user in the network has its own, constant-size model, and all shared information for modeling is of constant size as well. The distributed nature means it is easily scalable to a huge number of users without increased complexity, making it a potential add-on to any existing centralized solutions at the MEC.\n\nTo summarize, our main contributions are:\n\\begin{itemize}\n\\item We formulate computation offloading as a decision-making problem with decentralized incentive and execution. The strategic players are incentivized to align private and system goals by balancing between competition and cooperation.\n\\item We introduce MALFOY, a distributed algorithm that learns based on delayed and noisy environment information and a single, immediate reward signal. Our solution requires much less information sharing. We show using extensive simulation that agents with MALFOY outperform agents without learning capabilities on overall resource utilization, offloading failure rate, load variation and communication overhead. \n\\item In a realistic setup based on a concrete mobility model and V2X applications (i.e.\\ self-driving), we further demonstrate MALFOY's flexibility to utilize long-term, sparse extrinsic reward signals with varying delay; it optimizes decision strategy over a long time period. 
MALFOY with long-term goals further reduces failure rate and shows better generalization properties.\n\\item We open-source our code \\cite{dracosource2} to encourage reproduction and extension of our work.\n\\end{itemize}\n\nSec.\\ref{sec:related} summarizes related work, Sec.\\ref{sec:modelproblem} introduces the system model and formulates the problem, Sec.\\ref{sec:solution} proposes our solution, Sec.\\ref{sec:eval} presents empirical results, Sec.\\ref{sec:conclusion} concludes the paper.\n\n\n\\section{Related Work}\n\\label{sec:related}\n\n\\subsection{Decentralized Decision-Making}\n\nCentralized approaches such as \\cite{kuo2018deploying, agarwal2018joint} for resource allocation and \\cite{lyu2016multiuser, chen2018task,choo2018optimal} for offloading are suited to core-network and data-center applications where powerful central admission control and assignment (ACA) units can be set up, and data can be relatively easily obtained. They are not the focus of our study. \n\nPrevious studies of decentralized systems address some of the issues in centralized approaches. Authors of \\cite{blocher2020letting, stefan2021tnsm} propose a distributed runtime algorithm to optimize system goals but disregard user preferences. \\cite{kumar2014bayesian, kumar2015coalition,chen2015efficient} only consider cooperative resource-sharing or offloading. Although some game-theoretic algorithms naturally deal with \\textit{decentralized incentives}, they often require complete information of the game to \\textit{centrally execute} the desired outcome. For example, \\cite{cardellini2016game} assumes all user and node profiles are known \\textit{a priori}, and \\cite{guo2018mobile} assumes users share information---these assumptions may not be plausible in practice. In other approaches, the complexity of a decentralized system is reduced. 
\\cite{chen2014decentralized} only considers channel interference in order to model the problem as a potential game and guarantee an equilibrium. \\cite{shams2014energy} only considers discrete actions. \\cite{li2019learning} learns with partial information, but it reduces complexity by assuming a single service type and constant arrival rate. \\cite{khaledi2016optimal} also only considers discrete actions and single service type.\n\nClassic decentralized decision-making mechanisms include dynamic pricing, negotiations, and auctions. Among these mechanisms, auction is most suitable in a dynamic and competitive environment, where the number of users and their preferences vary over time and distribution of private valuations is dispersed \\cite{schindler2011pricing,einav2018auctions}. Auction is common in e.g.\\ networking \\cite{xu2012resource,xu2012interference}, energy \\cite{lucas2013renewable}, and e-commerce \\cite{huang2011design} for its efficient price discovery in a dynamic market with partial information. Among various forms of auction, second-price sealed-bid auction maximizes welfare rather than revenue and has limited information-sharing, hence befitting the requirements in our study. Specifically, our approach is based on Vickrey-Clarke-Groves (VCG) for second-price combinatorial auction \\cite{vickrey1961counterspeculation}. Unlike \\cite{jiang2015data} and \\cite{li2021double}, which use VCG auction mechanism for resource allocation in a stationary environment, our agents react to other agents' behaviors and learn best response strategy in a dynamic environment. As in \\cite{tan2022multi}, we use simultaneous combinatorial auctions as a simplified version of VCG---each bidder bids for all commodities separately, without having to specify its preference for any bundle \\cite{feldman2013simultaneous}. 
Since it assumes no correlation between commodities, the simplification befits our study of independent service requests.\n\n\\subsection{Suitable Algorithms}\n\nAmong algorithms for decentralized decision-making, \\emph{no-regret algorithms} apply to a wide range of problems and converge fast; however, they require knowledge of the best strategies, which are typically assumed to be static \\cite{chang2007no}. \\emph{Best-response algorithms} search for best responses to other users' strategies, not for an equilibrium---they therefore adapt to a dynamic environment but may not converge at all \\cite{weinberg2004best}. To improve the convergence property of best-response algorithms, \\cite{bowling2002multiagent} introduces an algorithm with a varying learning rate depending on the reward; \\cite{weinberg2004best} extends the work to non-stationary environments. But both these algorithms provably converge only with restricted classes of games. Besides these algorithms, independent learner methods~\\cite{tan1993multi}, such as those proposed for resource allocation \\cite{cui2019multi}, are used to reduce modeling and computation complexity, but they fail to guarantee an equilibrium \\cite{yang2018mean} and have overfitting problems \\cite{lanctot2017unified}. Finally, federated learning \\cite{mcmahan2017fl} is not applicable as it provides a logically centralized learning framework.\n\nRL algorithms are well-known for their ability to learn sequential tasks and balance between exploitation and exploration \\cite{teng2013reinforcement,almasri2020dynamic}. In our previous work \\cite{tan2022multi}, we proposed a distributed RL algorithm to learn the best response strategy based on immediate reward signals, in a continuous state-action space. In \\cite{tan2022multi}, RL is combined with supervised learning in a fictitious self-play (FSP) method to improve its convergence properties. Although it performed well, the algorithm ignores long-term effects of decision-making. 
\n\nThis is because RL algorithms are typically ``short-term'' algorithms: \\cite{minsky1961steps} first mentioned the necessity and difficulty of long-term temporal credit assignment in RL---it is essential to associate long-term reward to specific behavior or series of behaviors, such that behaviors that contribute to the long-term reward are prioritized. In RL algorithms with no focus on temporal credit assignment, importance of the immediate reward heavily outweighs estimated reward in the distant future, and the estimation has a bias that is related to the length of delay and exponential to the number of possible states \\cite{arjona2018rudder}. Worse still, if the reward is both delayed and sparse, the reward estimation often has a high variance due to lack of predictable future states, especially with a big state-action space and high variance in the value of next states \\cite{mataric1994reward,shahriari2017generic}. When decisions have long-term effects, such ``short-term'' algorithms would lead to worse performance. It proves to be one of the biggest challenges of applying RL in the real world \\cite{dulac2021challenges}.\n \n\\subsection{Delayed and Sparse Rewards}\n \nOne common approach in long-term RL is to extract features from historical records, thus linking the delayed reward to behaviors in the past \\cite{hester2013texplore}. Learning with such algorithms is inefficient since learning from past experiences can only happen when the delayed outcomes become available. To address the delay, \\cite{mann2018learning} factorizes one state into an intermediate and a final state with independent transition probabilities and predicts each state at different intervals. \\cite{hung2019optimizing} describes a credit-assignment method that focuses on the most relevant memory records via content-based attention; the algorithm is capable of locating past memory to execute new tasks and generalizes very well. 
These approaches focus more on the delay in the reward signal and less on sparsity. In our setup, the long-term reward is delayed, sparse and sporadic, making these approaches inapplicable. \n\nTo address sparsity of rewards, many model-based methods add intrinsic, intermediate rewards between sparse extrinsic reward signals. Such methods often adopt a supervised learning algorithm to predict next states and use the difference between the predicted and target state-action pair values as intrinsic reward. Although they propagate prediction inaccuracy into the future, they learn faster. For example, \\cite{hester2013texplore} separately trains many ``feature models'' to predict each feature of the next state as well as a ``reward model'' to predict reward. Between sparse extrinsic rewards, the algorithm samples estimated next state and reward from the models. The models are only updated when there is new input available. Their approach assumes that state features are independent and can be learned separately, and the accuracy of the reward model is still related to the sparsity of the reward signal. \\cite{pathak18largescale} uses a long-short-term memory (LSTM) to extract features from past memory that are more relevant to the current task, thus improving the model's generalization properties. The algorithm also uses two independent models to predict the next state and action; the prediction error serves as an intermediate, intrinsic reward inserted between sparse extrinsic rewards. In this approach, the intrinsic reward signal is not related to the extrinsic sparse reward, and the final outcome of the game is not credited to specific agent behaviors. 
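The prediction-error style of intrinsic reward described above can be sketched as follows. This is a minimal illustration with a plain linear forward model; the `make_forward_model` helper, its learning rate, and the weight initialization are hypothetical stand-ins, not the architecture of any cited work:

```python
import random

def make_forward_model(dim, lr=0.1):
    """Tiny linear forward model predicting the next state from (state, action).
    A hypothetical stand-in for the neural dynamics models discussed above."""
    w = [[random.uniform(-0.1, 0.1) for _ in range(2 * dim)] for _ in range(dim)]

    def predict(state, action):
        x = state + action  # concatenate state and action features
        return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

    def update(state, action, next_state):
        # one gradient step on the squared prediction error
        x = state + action
        pred = predict(state, action)
        for j, row in enumerate(w):
            err = pred[j] - next_state[j]
            for i in range(len(row)):
                row[i] -= lr * err * x[i]

    return predict, update

def intrinsic_reward(predict, state, action, next_state):
    """Curiosity bonus: squared prediction error of the forward model.
    Surprising transitions earn a larger intermediate reward; as the model
    learns the dynamics, the bonus decays toward zero."""
    pred = predict(state, action)
    return sum((p - t) ** 2 for p, t in zip(pred, next_state))
```

Between sparse extrinsic rewards, an agent would add this bonus to its reward signal; note that, as discussed, such a bonus is unrelated to the eventual extrinsic outcome of the game.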
The lack of temporal credit assignment on a long time horizon affects learning efficiency \\cite{minsky1961steps}, especially with sparse rewards and conflict between the agent's short-term and long-term goals \\cite{khadka2018evolution,ijcai2020-368}, as is the case in our setup.\n\nThe credit assignment in \\cite{khadka2018evolution} does not directly credit behaviors but credits a population of models; it therefore requires each model to play a full episode in each step to generate experience. It is not applicable in our setup: a dynamic multi-agent environment with no clear episodes. \\cite{ijcai2020-368} uses an attentional network to assign weights to past behaviors through reward-shaping. They focus on offline-learning of an independent credit assignment algorithm and decompose the long-term reward to densify reward signals. In contrast, we need an online-learning algorithm in our dynamic environment; our learning agent learns more than just the credit assignment; and in our setup with conflicting short-term and long-term rewards, decomposed long-term rewards cannot be used directly to densify reward signals.\n\n\n\\section{System Model and Problem Formulation}\n\\label{sec:modelproblem}\n\n\\subsection{System model}\n\\label{sec:model}\n\nOur system adopts the classic edge cloud computing architecture: user-side vehicles request services such as semantic segmentation and motion planning; operating-side ACAs (e.g., road-side units or base stations) control admission of service requests and assign them to different computing sites, which own resources and execute services~\\cite{whaiduzzaman2014survey} (Fig.\\ref{topo}). \nWe propose changes only to \\begin{inparaenum}[1)] \\item the algorithm admitting and assigning service requests and \\item the interaction mechanism. 
\\end{inparaenum} In addition, most signaling needs in our proposed approach are covered by the ISO 20078 standard on extended vehicle web services \\cite{iso20078}; additional fields required to pass bidding and final price information are straightforward to implement. Channel security is not the focus of this study.\n\nWe first define a \\emph{service request}; then, we explain in detail the user side and the operating side.\n\n\\begin{figure*}[t]\n\t\\centering\n\t\\begin{minipage}{0.95\\linewidth}\n\t\\subcaptionbox{Example topology \\label{topo}}{\\includegraphics[width=0.24\\linewidth]{modelNew2-left.pdf}}\n\t\\subcaptionbox{Message sequence \\label{flow}}{\\includegraphics[width=0.72\\linewidth]{agentInteraction.pdf}}\n\t\\end{minipage}\n\t\\vspace{-0.2cm}\n\t\\caption{System model}\n\t\\label{model}\n\\end{figure*}\n\n\\subsubsection{Service request as bid}\n\\label{subsubsec:servicerequest}\n\nThe cloud-native paradigm decomposes services into tasks that can be deployed and scaled independently~\\cite{alliance2019service}. A service request comprises \\begin{inparaenum}[1)] \\item a task chain, with varying number, type, order and resource needs of tasks, and \\item a deadline. \\end{inparaenum} We consider a system with custom-tailored services placed at different computing sites in the network; the properties of these services are initially unknown to the computing sites. This enables us to extend the use cases into new areas, e.g., self-driving~\\cite{5gaausecase1,5gaausecase2}. We consider independent services, e.g., in self-driving, segmentation and motion planning can be requested independently. The corresponding class of the service is a \\emph{service type}. \n\nUsing motion planning as an example: every 100 milliseconds, the vehicle receives a service request of type ``motion planning'', which includes a task chain of two steps: localization and optimization. 
For execution, the vehicle uploads to the MEC the required input data (odometry, GPS, road image segments, etc.) that is estimated to be around 0.4Mbits. A high-definition map containing information of static object positions and labels can be stored on the access point and shared by all vehicles. After execution, the vehicle receives predicted optimal position and odometry for the next 3 seconds, estimated downlink size is ca. 6kbits \\cite{broggi2014proud}. The vehicle expects this result to be sent back within 100 milliseconds (service deadline). \n\nWe conceive of a vehicle's service request as a \\emph{bid} in an auction. Besides the service request details, a bid includes the bidding price and the vehicle's estimated resource needs. \n\n\\subsubsection{User side} \n\nOur study focuses on the behavior of the independent vehicles, conceived of as \\emph{agents}. They act independently and do not share information. As a bidder, the vehicle bids for a \\emph{commodity}---a service slot with necessary resources to execute the service request. The vehicle has a private \\emph{valuation} for each commodity (i.e.\\ the benefit it derives from winning the commodity), and its direct \\emph{payoff} from the auction is its valuation minus the price it pays the seller for the commodity. Aside from the direct payoff, it also has other costs, and the total \\emph{utility} is the sum of payoff and all costs. The vehicle's decision objective is to maximize average utility from joining the auction. If the vehicle bids a low price and loses, it suffers costs including transmission delay and communication overhead for bidding and rebidding; if it bids a high price and wins, it has reduced payoff. For a possibly lower cost or better payoff in the future, it can decide to join the auction at a later time (i.e.\\ to back off). However, if backoff is too long, the vehicle has pressure to pay more and prioritize its request. 
Therefore, the vehicle balances between two options: i) back off and try later or ii) submit the bid immediately to the ACA unit for admission check. With the backoff option, \\begin{inparaenum}[1)] \\item vehicles are incentivized to balance between backoff and bidding through a cost factor; \\item backoff time is learned, not randomly chosen; \\item learning is based on vehicle state information (Sec.\\ref{payment}).\\end{inparaenum}\n\nFor example: when a vehicle receives a service request of the type ``motion planning'', it collects past bidding results and current environment parameters e.g., the number of other vehicles in the vicinity. Then it inputs service-specific information (deadline, estimated resource needs, input and output data size, etc.), historical data, and environment parameters to its onboard learning model, to infer the best bidding strategy for the motion planning request at the current time step. Transmission delay is calculated based on input and output data size. \n\nWe study the learning algorithm in each vehicle. We use passive, non-learning vehicles as benchmark, to quantify the effect of learning on performance. Learning essentially sets the priority of a service request. This priority is used by the ACA to order requests; it is simply constant for non-learning agents, resulting in first-in, first-out processing order.\n\n\\subsubsection{Operating side} \n\nThe ACA unit and computing sites are the operating side (Fig.\\ref{flow}). The ACA unit decides to admit or reject ordered service requests. Upon admission, it assigns the request to a computing site according to a load-balancing policy. Due to information delay, execution uncertainty, system noise, etc., the resource utilization information at different sites is not immediately available to the ACA unit. If all computing sites are overloaded, service requests are rejected. For a rejected request, a vehicle can rebid a maximum number of times. 
If the request is admitted but cannot be executed before its deadline, the computing site drops the service and informs the ACA unit. Vehicles receive feedback on bidding and execution outcome, payment, and resource utilization (Sec.\\ref{payment}).\n\nThe operating side does not have \\emph{a priori} knowledge of the type, priority, or resource requirements of service requests. For example, once a site receives a previously unknown service, it uses an estimate of resource needs provided by the vehicle. Over time, a site updates this estimate from repeated executions of the same service. This enables a computing site to execute previously unseen service requests, based on a simple statistical estimation of resource needs. Extension to a more sophisticated form of learning is left to future work. \n\nThe total service time of a request is the sum of processing, queueing, and transmission time. Each computing site may offer all services but with different resource profiles (i.e., amount and duration needed of CPU and memory), depending on the site's configuration. Site capacity is specified in abstract time-resource units: one such unit corresponds to serving one volume of request in one time unit at a server, when given one resource unit (in Sec. 
\\ref{sec:eval}, we explain the detailed assumptions in simulation).\n\n\n\n\\subsection{Problem formulation}\n\\label{sec:problem}\n\n\\begin{table}[t]\n \\centering\n \\captionof{table}{Sec.\\ref{sec:problem} and \\ref{payment} symbol definitions}\n \\label{tab:problem}\n \\begin{tabular}{c l c l c l}\n Sym & Description & Sym & Description & Sym & Description\\\\\n \\toprule\n $k \\in K$ & service type\/commodity & $n_k$ & $k$'s availability & $i \\in I$ & service request\/bid\\\\\n $m \\in M$ & vehicle\/bidder & $h \\in H$ & resource types & $\\omega_{i,h}$ & $i$'s requirement of $h$\\\\\n $B$ & wealth\/budget & $v$ & bid value & $\\beta$ & utilization\\\\\n $Q$ & service deadline & $\\alpha$ & backoff decision & $b$ & bidding price\\\\\n $c$ & cost to join the auction & $q$ & backoff cost & $p$ & payment\\\\\n $z$ & bidding outcome & $u$ & immediate utility & $U$ & cumulated utility\\\\\n \\bottomrule \n \\end{tabular}\n\\end{table} \n\nTable~\\ref{tab:problem} summarizes the notation for this section. Let $M$ be the set of vehicles (bidders) and $K$ the set of commodities (service types), each type with a total of $n_k^t$ available service slots at time $t$ in computing sites. Bidder $m$ has a reserve pool of wealth with an initial wealth of $B_m^0$. It has at most $1$ demand for each service type $k \\in K$ at $t$, denoted by $m_k^t \\in \\{0,1\\}$. It draws its actions for each service---whether to back off $\\mathbf{\\alpha}_m^t =\\{\\alpha^t_{m,1}, \\cdots, \\alpha^t_{m,|K|} \\} \\in \\{0,1\\}^{|K|}$, and which price to bid $\\mathbf{b}_m^t =\\{b^t_{m,1}, \\cdots, b^t_{m,|K|} \\} \\in \\mathbb R_+^{|K|}$---from a strategy. The bidding price is some unknown function $f_m$ of $m$'s private valuation of the service type $v_{m,k} \\in \\mathbb R_+$ and is lower than or equal to the current amount $B^t$ in the reserve pool: $b_{m,k}^t=f_m(v_{m,k})$. 
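A minimal sketch of how a bidder could assemble its per-round actions (backoff decisions and bid prices) under the budget cap described above; the `backoff_policy` and `price_policy` callables are hypothetical stand-ins for the learned strategy:

```python
from dataclasses import dataclass

@dataclass
class Bidder:
    budget: float     # current wealth B^t in the reserve pool
    valuations: dict  # private valuation v_{m,k} per service type k

    def draw_actions(self, demand, backoff_policy, price_policy):
        """Draw backoff decisions (alpha) and bidding prices (b) for one round.
        alpha[k] = 1 means the bid for service type k is submitted now,
        0 means the bidder backs off and tries later."""
        alpha, bids = {}, {}
        for k, wants in demand.items():
            if not wants:  # m_k^t = 0: no demand for this type
                continue
            alpha[k] = backoff_policy(k)
            if alpha[k]:
                # b_{m,k}^t = f_m(v_{m,k}), capped by the remaining budget
                bids[k] = min(price_policy(self.valuations[k]), self.budget)
        return alpha, bids
```

For instance, with `price_policy = lambda v: 0.9 * v` a bidder shades its valuation by 10%, and no bid can ever exceed the reserve pool.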
The competing bidders draw their actions from a joint distribution $\\pi_{-m}^t$ based on $(\\mathbf{p}^1,\\cdots, \\mathbf{p}^{t-1})$, where $\\mathbf{p}^t \\in \\mathbb R_+^{|K|}$ is the payment vector received at the end of time $t$, its element $p_k^t$ is the $(n_k^t+1)^{\\textrm{th}}$ highest bid for service type $k$. If $m$ wins the bid for $k$, bidding outcome $z_m^t=1$, $m$ observes the new $\\mathbf{p}^t$ as feedback, and receives an immediate utility $u_m^t$, which is a function of $m$'s private value $v_{m,k}^t$ of $k$, its bidding price $b_{m,k}^t$, and $p_k^t$; all losing bidders suffer $c_m^t$ as cost to join the auction, and bidders that backed off suffer $q_m^t$ as cost of backoff. We therefore write the immediate utility as $u_m^t=g(v_m^t,\\mathbf{b}_m^t,z_m^t,\\mathbf{p}^t,c_m^t,q_m^t)$. The auction repeats for $T$ periods. The goal is to maximize long-term cumulated utility: $U=\\frac{1}{T} \\sum\\limits_{t=1}^T \\sum\\limits_{m \\in M} u_m^t, T\\to \\infty$.\n\nFor any $k$, when availability $n_k^t<\\sum\\limits_{m \\in M} m_k^t$, there is more demand than available service slots and we call it ``high contention''. When $n_k^t \\geq \\sum\\limits_{m \\in M} m_k^t$, we call it ``low contention''. In a dynamic environment, available service slot $n_k^t$ depends on utilization at $t-1$ and existing demand at $t$. Our setup imitates the noise and transmission delay in a realistic environment, which makes site utilization information outdated when it becomes available to the ACA unit for admission control (Fig.\\ref{flow}). In Sec.\\ref{sec:eval}, we demonstrate the algorithms' ability to learn despite outdated information.\n\nIdeally, an auction is incentive-compatible. Unfortunately, with budget constraint and costs, the second-price auction considered here is no longer incentive-compatible. 
But we still use this type of auction, as we have shown in our previous work \\cite{tan2022multi} that it maximizes social welfare and optimally allocates resources. We also use the payment signal as additional feedback from ACA to aid bidders' learning process (Sec.\\ref{modelDescription}).\n\n\\section{Proposed solution}\n\\label{sec:solution}\n\nTo solve the long-term reward maximization described in Sec.\\ref{sec:problem}, we propose MALFOY: \\textbf{M}ulti-\\textbf{A}gent reinforcement \\textbf{L}earning \\textbf{FO}r sparse and dela\\textbf{Y}ed reward; its ability to learn from rewards with random delay makes it an extension of our previous work on short-term algorithms \\cite{tan2022multi}. With this extension, the algorithm is generalized to target a wider range of problems, and the problem tackled in \\cite{tan2022multi} becomes a special case where the long-term reward signals have an interval of $1$ (i.e.\\ available at the end of every auction round).\n\nIn Sec.\\ref{payment} we define a bidder's utility function and briefly explain the mechanism's theoretical properties in the static case. Then, we introduce MALFOY for the dynamic environment in Sec.\\ref{fsp}-\\ref{creditassign}. \n\n\\subsection{Utility function}\n\\label{payment}\n\nIn this section, we first build up the utility function based on the payoff of the classic second-price auction. Then, we add costs for backoff and for losing the bid, incentivizing a tradeoff between a higher chance of success and lower communication overhead. Finally, we add the system resource utilization goal to the utility. \n\nIn each auction round, if a bid $i$ for service type $k$ is admitted, its economic gain is $(v_{i,k}-p_{i,k})$. For each $k$, the bidder has a given private valuation $v_{i,k}$ that is \\begin{inparaenum}[1)] \\item linear in the bidder's estimated resource needs for the service request and \\item within its initial wealth $B_m^0$. 
\\end{inparaenum} The first condition guarantees Pareto optimality; the second avoids overbidding under rationality \\cite{tan2022multi}. Our study does not consider irrational or malicious bidders, e.g., bidders whose goal is to reduce social welfare even at the expense of their own outcome. \n\nACA records the highest losing bid $b_{j,k}$ for each $k$ and sets the price to $p_{i,k}=b_{j,k}$. For $n_k$ available service slots, this is the $(n_k+1)^{\\textrm{th}}$ highest bidding price. For $n_k=1$, this is the price of the second highest bid. Hence the name ``second-price auction''. If $i$ is admitted, the vehicle receives a payoff of $v_{i,k}-p_{i,k}$. If $i$ is rejected, the bidder incurs a constant cost $c_{i,k}$. The bidder's utility $\\mathcal{U}_{i,k}$ so far is:\n\n\\begin{flalign}\\label{eq:uik}\n\\mathcal{U}_{i,k} =z_{i,k} \\cdot (v_{i,k}-p_{i,k})-(1-z_{i,k}) \\cdot c_{i,k}\n\\end{flalign}\n\n\\noindent where $z_{i,k}=1$ means the bidder wins bid $i$ for a service slot of service type $k$, which implies $b_{i,k}$ is among the highest $n_k$ bids for $k$. Ties are broken randomly.\n\nWe add $\\alpha_{i,k} \\in \\{0,1\\}$ for the backoff decision: the bidder submits the bid if $\\alpha_{i,k}=1$; otherwise, it backs off with a cost $q_{i,k}$:\n\n\\begin{flalign}\\label{eq:rewardbackoff}\nu_{i,k} = \\alpha_{i,k} \\cdot (\\mathcal{U}_{i,k} -\\mathbf{1}|_{p_{i,k}=0} \\cdot v_{i,k}) + (1-\\alpha_{i,k}) \\cdot q_{i,k}\n\\end{flalign}\n\n\\noindent where $\\mathbf{1}|_\\text{conditions}=1$ if the conditions are true, and $0$ otherwise.\n\nEspecially in high contention, more rebidding causes communication overhead, but less rebidding reduces the chance of success. With $c_{i,k}$, the utility incentivizes less rebidding to reduce system-wide communication overhead (\\textbf{C1}). Together with $q_{i,k}$, it incentivizes the bidder to trade off between a long backoff time and risky bidding. 
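A minimal numeric sketch of the per-bid utility in Eq.\\ref{eq:uik} and Eq.\\ref{eq:rewardbackoff}; the function name is illustrative, and we pass the backoff cost $q$ as a (typically negative) payoff term, an assumption since the text describes $q$ as a cost:

```python
# Sketch of the per-bid utility with backoff (Eqs. eq:uik / eq:rewardbackoff).
# z: 1 if the bid won, else 0; v: private valuation; p: payment; c: cost of
# losing; q: backoff payoff (assumed negative, as q is described as a cost);
# alpha: 1 = submit the bid, 0 = back off.

def bid_utility(z, v, p, c, q, alpha):
    base = z * (v - p) - (1 - z) * c          # Eq. eq:uik
    # Indicator term 1|_{p=0} * v: when p = 0 (low contention), the gain v
    # is removed, so winning a free slot yields no windfall.
    win_correction = v if p == 0 else 0.0
    return alpha * (base - win_correction) + (1 - alpha) * q

bid_utility(1, 5.0, 3.0, 1.0, -0.5, 1)   # winner pays 3.0, utility 2.0
bid_utility(0, 5.0, 3.0, 1.0, -0.5, 1)   # loser pays the joining cost, -1.0
bid_utility(1, 5.0, 0.0, 1.0, -0.5, 0)   # backed off, utility q = -0.5
```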
In our implementation (Sec.\\ref{sec:eval}), $\\alpha$ is continuous between $0$ and $1$ and linear in the backoff duration.\n\nTo further align bidder objectives with the overall system objectives (\\textbf{C1}), we include the system resource utilization $\\beta$ in the utility, incentivizing bidders to keep system utilization low. Hence, the complete utility definition is:\n\n\\begin{flalign}\\label{eq:reward1}\nu_i = \\sum\\limits_{k \\in K} u_{i,k} + W \\cdot (1-\\beta)\n\\end{flalign}\n\n$W$ is a constant that weights the utilization objective. In low contention, there are adequate resources to accept all bids, the bidding price is less relevant, and the backoff decision becomes more important.\n\nTo calculate Eq.\\ref{eq:reward1}, the bidder needs only these feedback signals: bidding outcome $z_{i,k}$, payment $p_{i,k}$, and system utilization $\\beta$, addressing \\textbf{C2}.\n\nIn our previous work \\cite{tan2022multi}, we provided a short version of our proof that \\begin{inparaenum}[1)] \\item the outcome of the game is an NE and a maximization of social welfare; and \\item in high contention with a resource capacity limit, the outcome is also an optimal resource allocation (i.e.\\ Pareto optimality). \\end{inparaenum} In this study, we provide the full proof in the Appendix (Sec.\\ref{appendix:SPAwithpenalty} and \\ref{appendix:paretoOptimal}). \n\nIn a dynamic environment, MALFOY learns to achieve reward maximization using the utility function. The algorithm consists of three parts: \\begin{inparaenum}[1)] \\item the fictitious self-play (FSP) (Sec.\\ref{fsp}), including an RL model (Sec.\\ref{modelDescription}) and a supervised learning (SL) model; \\item the curiosity learning model (Sec.\\ref{subsec:longterm}); and \\item the credit assignment (Sec.\\ref{creditassign}). \\end{inparaenum} RL seeks to learn the best-response strategy in a huge state-action space by balancing exploitation and exploration. 
To improve its convergence properties, an FSP is wrapped around the RL to stabilize the learning process. To further improve the model's generalization properties and learning efficiency with sparse extrinsic reward signals, we add a curiosity learning model to the FSP. Finally, to enhance the model's ability to learn from long-term, delayed extrinsic rewards, we add a credit assignment model to the FSP that attributes the long-term reward to short-term actions.\n\nIn our previous work \\cite{tan2022learning}, we simulated two common repeated auctions with a single commodity and let three types of algorithms compete directly against each other: the short-term FSP algorithm, the long-term FSP with curiosity learning, and the long-term FSP with both curiosity learning and credit assignment (same as MALFOY). Our results showed that MALFOY outperformed all others.\n\nIn the following sections, we explain the parts shown in Fig.\\ref{attentionchart} in detail. Table \\ref{tab:fsp} summarizes the notation.\n\n\\subsection{The FSP method}\n\\label{fsp}\n\n\t\\begin{table}[t]\n\t \\centering\n\t \\captionof{table}{Symbol definitions for Sec.\\ref{fsp}-\\ref{creditassign}}\n\t \\label{tab:fsp}\n\t \\begin{tabular}{c l c l c l}\n\t Sym & Description & Sym & Description & Sym & Description\\\\\n\t \\toprule\n\t $\\zeta$ & best response & $\\psi$ & behavioral strategy & $\\mathbf e_m$ & env. 
variables\\\\\n\t $\\rho$ & private bidder info & $\\mathbf{a}$ & action, $\\mathbf{a}=(\\alpha, b)$ & $P_{-m}^t$ & other bidders state\\\\\n\t $\\text{sl}_m^t$ & SL present state & $\\text{rl}_m^t$ & RL present state & $S_m^t$ & RL complete state\\\\\n\t $\\lambda$ & $\\bar{u}$'s weight factor & $\\theta$ & actor parameters & $\\mathbf w$ & critic parameters\\\\\n\t $\\gamma$ & learning rate & $\\delta$ & TD error & $\\eta$ & $\\zeta$'s weight\\\\\n\t $\\nu$ & history length & $\\mu$ & action mean & $\\Sigma$ & action covariance\\\\\n\t $\\phi$ & featurized state & $\\epsilon$ & credit assign. weight & $r_{i}$ & intrinsic reward\\\\\n\t $r_{e}$ & extrinsic reward & $L_{f}$ & forward mdl loss & $L_{i}$ & inverse mdl loss\\\\\n\t $\\xi$ & reward weight\\\\\n\t \\bottomrule \n\t \\end{tabular}\n\t\\end{table}\n\t\\hfill\n\t\\begin{figure}[t]\n\t\t\\centering\n\t\t\\includegraphics[width=0.9\\linewidth]{attentionRNN2.pdf}\n\t\t\\caption{Long-term algorithms running at each vehicle}\n\t\t\\label{attentionchart}\n\t\\end{figure}\n\nThe fictitious self-play (FSP) method addresses the convergence challenge of a best-response algorithm (\\textbf{C3}). FSP balances exploration and exploitation by replaying its own past actions to learn an average behavioral strategy regardless of other bidders' strategies; then, it cautiously plays the behavioral strategy mixed with the best response \\cite{heinrich2015fictitious}. The method consists of two parts: \\begin{inparaenum}[1)] \\item a supervised learning (SL) algorithm predicts the bidder's own behavioral strategy $\\psi$, and \\item an RL algorithm predicts its best response $\\zeta$ to other bidders. \\end{inparaenum} The bidder chooses the best-response action $\\mathbf{a}=\\zeta$ with probability $\\eta$, where $\\lim\\limits_{t \\to \\infty} \\eta =0$; otherwise it chooses $\\mathbf{a}=\\psi$. The action includes the backoff decision $\\alpha$ and the bidding price $b$. 
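The $\\eta$-mix above can be sketched as follows; this is the probabilistic reading of the mix (Alg.\\ref{algorithm} writes it as the convex combination $(1-\\eta)\\psi_m^t+\\eta \\zeta_m^t$), and the names are illustrative:

```python
import random

# Sketch of the FSP action mix: with probability eta (decaying as 1/t) play
# the RL best response zeta, otherwise play the SL behavioral strategy psi.

def fsp_action(t, zeta, psi, rng=random):
    eta = 1.0 / t                     # anticipation weight, eta -> 0 as t grows
    return zeta if rng.random() < eta else psi

fsp_action(1, "zeta", "psi")          # at t = 1, eta = 1: always best response
```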
If $\\alpha$ is above a threshold, the bidder submits the bid; otherwise, it backs off for a duration linear in $\\alpha$. We predefine the threshold to influence bidder behavior: with a higher threshold, the algorithm becomes more conservative and tends to back off more service requests. Learning the threshold (e.g., through meta-learning algorithms) is left to future work.\n\nAlthough FSP only converges in certain classes of games \\cite{LESLIE2006285} (and in our case of a multi-player, general-sum game with infinite strategies, it does not necessarily converge to an NE), it is still an important experiment, as our application belongs to a very general class of games, and empirical results show that applying FSP greatly improves overall performance compared to using only RL. The FSP is described in Alg.\\ref{algorithm}. \n\nInput to SL includes bidder $m$'s service requests---service type, resource amount required, and deadline: $\\rho_m^t=\\{(k_i, \\omega_{i,h},Q_{i})| i \\in I, h \\in H\\}$ ($m$ can create multiple bids, each an independent request for service type $k_i$; $\\rho_m^t$ is the set of all $m$'s bids at $t$), current environment information visible to $m$, denoted $e_m^{t}$ (e.g., number of bidders in the network and system utilization $\\beta^t$), and further bidder conditions, e.g., initial wealth $B^0$ and current wealth $B^t$. SL infers the behavioral strategy $\\psi_m^t$. The input $\\text{sl}_m^t=(\\rho_m^t,e_m^t)$ and the actual action $\\mathbf{a}_m^t$ are stored in SL memory to train the regression model; we use a multilayer perceptron in our implementation. \n\nInput to RL: $\\phi_m^t$ is a featurized state vector from the original state vector $S_m^t$. The feature extraction module is part of the curiosity model (Sec.\\ref{subsec:longterm}); it extracts the features that are most relevant to the agent's actions. In our algorithm, $S_m^t$ is constructed from $m$'s present state $\\text{rl}_m^t$. 
$\\text{rl}_m^t$ includes \\begin{inparaenum}[1)] \\item $\\rho_m^t$; \\item $e_m^t$; \\item the other bidders' previous state $P_{-m}^{t-1}$, represented by the final prices, with $P_{-m}^t=\\mathbf{p}^t=\\{ p_k^t| k \\in K \\}$; and \\item the calculated utility $u_m^{t-1}$ according to Eq.\\ref{eq:reward1}. \\end{inparaenum} To consider historical records, we take the $\\nu$ most recent states to form the complete state vector: $S_m^t=\\{\\text{rl}_m^\\tau|\\tau=t-\\nu+1,\\cdots,t\\}$. Thus, the input data consists mostly of information private to the user $m$, while the environment data and past prices are easily obtainable public information (\\textbf{C2}). RL outputs the best response $\\zeta_m$. We provide a detailed description of the RL algorithm below.\n\n\\begin{figure}[t]\n\\begin{minipage}[t]{0.48\\linewidth}\n\t \\begin{algorithm}[H]\n\t \\small\n\t \\begin{algorithmic}[1]\n\t \\STATE Initialize $\\psi_m,\\zeta_m$ arbitrarily, $\\nu$, $t=1$, $\\eta=1\/t$, $P_{-m}^{t-1}=\\mathbf{0}$, $u_m^{t-\\nu+1},\\cdots,u_m^{t-1}=0$, observe $e_m^t$, create $\\text{rl}_m^t,\\text{sl}_m^t$ and add to memory\n\t \\WHILE{true}\n\t \\STATE Take action $\\mathbf{a}_m^t=(1-\\eta)\\psi_m^t+\\eta \\zeta_m^t$\n\t \\STATE Receive $P_{-m}^t$, calculate $u_m^t$, observe $\\rho_m^{t+1},\\mathbf e_m^{t+1}$\n\t \\STATE Create and add state to RL memory: $\\text{rl}_m^{t+1}$\n\t \\STATE Create and add state to SL memory: $(\\text{sl}_m^{t+1},\\mathbf{a}_m^t)$\n\t \\STATE Construct $S_m^t,S_m^{t+1}$\n\t \\STATE Get $\\phi_m^t,\\phi_m^{t+1},r_{i,m}^t=\\text{Curiosity}(S_m^t,S_m^{t+1},\\mathbf{a}_m^t)$\n\t \\STATE Get $\\zeta_m^{t+1}=\\text{RL}(\\phi_m^t,\\phi_m^{t+1},r_{i,m}^t)$\n\t \\STATE Get $\\psi_m^{t+1}=\\text{SL}(\\text{sl}_m^{t+1})$\n\t \\STATE $t \\gets t+1$, $\\eta \\gets 1\/t,\\zeta_m^{t} \\gets \\zeta_m^{t+1},\\psi_m^{t} \\gets \\psi_m^{t+1}$\n\t \\ENDWHILE\n\t \\end{algorithmic}\n\t \\caption{FSP algorithm for bidder $m$}\n\t \\label{algorithm}\n\t 
\\end{algorithm}\n\\end{minipage}\n\\hfill\n\\begin{minipage}[t]{0.48\\linewidth}\n\t \\begin{algorithm}[H]\n\t \\small\n\t \\begin{algorithmic}[1]\n\t \\STATE Initialize $\\theta, \\mathbf w$ arbitrarily. Initialize $\\lambda$\n\t \\WHILE{true}\n\t \\STATE Input $t$ and $\\phi_m^t,\\phi_m^{t+1}$\n\t \\STATE Run critic and get $\\hat V(\\phi_m^{t}, \\mathbf w),\\hat V(\\phi_m^{t+1},\\mathbf w)$\n\t \\STATE Update $\\bar r_{i,m} \\gets \\lambda \\bar r_{i,m}+(1-\\lambda) r_{i,m}^t$ and calculate $\\delta$\n\t \\STATE Run actor and get $\\mu(\\theta), \\Sigma(\\theta)$\n\t \\STATE Sample $\\zeta_m^{t+1}$ from $F(\\mu,\\Sigma)$, update $\\mathbf w$ and $\\theta$\n\t \\ENDWHILE\n\t \\end{algorithmic}\n\t \\caption{RL algorithm for bidder $m$}\n\t \\label{algorithmRL}\n\t \\end{algorithm}\n\\end{minipage}\n\\end{figure}\n\n\\subsection{The RL Algorithm}\n\\label{modelDescription}\n\nThe authors of \\cite{khaledi2016optimal} use VCG and a learning algorithm that lets bidders adjust their bidding prices based on budget and observations of other bidders. Our approach is similar in that we estimate the other bidders' state $P_{-m}$ from payment information and use the estimate as the basis for a policy. Also, as in their work, payment information comes only from the seller.\n\nOur approach differs from \\cite{khaledi2016optimal} in several major points. We use a continuous space for bidder states (i.e., continuous values for payments). As also mentioned in \\cite{khaledi2016optimal}, a finer-grained state space yields better learning results. Moreover, we consider multiple commodities, which is more realistic and therefore has a wider range of applications. Further, we do not explicitly learn the transition probabilities of bidder states. Instead, we use historical states as input and directly determine the bidder's next action.\n\nWe use the actor-critic algorithm \\cite{sutton2018reinforcement} for RL (Alg.\\ref{algorithmRL}). The \\textbf{critic} learns a state-value function $V(\\phi)$. 
Parameters of the function are learned through a neural network that updates with $\\mathbf w \\gets \\mathbf w + \\gamma^w\\delta \\nabla \\hat V(\\phi, \\mathbf w)$, where $\\gamma^w$ is the learning rate and $\\delta$ is the temporal difference (TD) error. For a continuing task with no terminal state, no discount is directly used to calculate $\\delta$. Instead, the average reward is used \\cite{sutton2018reinforcement}: $\\delta =r-\\bar r+\\hat V(\\phi',\\mathbf w) - \\hat V(\\phi,\\mathbf w)$. In our case, the reward is the intrinsic reward $r_{i,m}$, which is the utility $u_m$ weighted by its importance to the delayed extrinsic reward through the weight vector $\\epsilon$ from the credit assignment model (Sec.\\ref{creditassign}). We use an exponential moving average (with rate $\\lambda$) of past rewards as $\\bar r$.\n\nThe \\textbf{actor} learns the parameters of the policy $\\pi$ in a multidimensional and continuous action space. Correlated backoff and bidding price policies are assumed to be normally distributed: $F(\\mu,\\Sigma) = \\frac{1}{\\sqrt{(2\\pi)^{d}|\\Sigma|}} \\exp(-\\frac{1}{2}(\\mathbf x-\\mu)^T\\Sigma^{-1}(\\mathbf x - \\mu))$, where $d$ is the action dimension. For faster calculation, instead of the covariance $\\Sigma$, we estimate the lower triangular matrix $L$ ($LL^T=\\Sigma$). Specifically, the actor model outputs the mean vector $\\mu$ and the elements of $L$. The actor's final output $\\mathbf{\\zeta}$ is sampled from $F$ via $\\mathbf{\\zeta} = \\mu + L\\mathbf{y}$, where $\\mathbf{y}$ is a vector of independent standard normal random variables. The update function is $\\theta \\gets \\theta + \\gamma^\\theta \\delta \\nabla \\ln \\pi(\\mathbf{a}|S,\\theta)$. 
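The average-reward TD error and the actor's correlated Gaussian sampling can be sketched as follows; this is an illustrative sketch, not the paper's exact implementation:

```python
import numpy as np

# Sketch of the critic's average-reward TD error and the actor's sampling
# of a correlated (alpha, b) action via the Cholesky factor L of Sigma.

def td_error(r, r_bar, v_next, v_now):
    # delta = r - r_bar + V(phi') - V(phi): no discount for a continuing task
    return r - r_bar + v_next - v_now

def sample_action(mu, L, rng):
    # zeta = mu + L y, with y standard normal and L L^T = Sigma
    y = rng.standard_normal(len(mu))
    return mu + L @ y

rng = np.random.default_rng(0)
mu = np.array([0.5, 2.0])                 # mean backoff decision and bid price
L = np.array([[0.1, 0.0], [0.05, 0.2]])   # lower triangular factor of Sigma
zeta = sample_action(mu, L, rng)          # one correlated (alpha, b) draw
```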
We use $\\frac{\\partial \\ln F}{\\partial \\mu} =\\Sigma^{-1} (\\mathbf x-\\mu)$ and $\\frac{\\partial \\ln F}{\\partial \\Sigma} = \\frac{1}{2} (\\Sigma^{-1}(\\mathbf x-\\mu)(\\mathbf x-\\mu)^T\\Sigma^{-1}-\\Sigma^{-1})$ for back-propagation.\n\nThe RL's objective is to find a strategy that, given input $\\phi_m^t$, determines $\\mathbf{a}$ to maximize $\\frac{1}{T-t}\\mathbb{E}[\\sum_{t'=t}^T r_{i,m}^{t'}]$. To implement the actor-critic RL, we use a stacked convolutional neural network (CNN) with a highway structure \\cite{srivastava2015training} similar to the discriminator in \\cite{yu2017seqgan} for both the actor and critic models. The stacked CNN has diverse filter widths to cover different lengths of history and extract features, and it is easily parallelizable compared to other sequential networks. Since state information is temporally correlated, such a network extracts features better than a multilayer perceptron. The highway structure directs information flow by learning the weights of the direct input and performing a non-linear transform of the input.\n\nIn low contention, the authors of \\cite{perkins2014game} prove that an actor-critic RL algorithm \\cite{sutton2018reinforcement} converges to a Nash equilibrium (NE) in a potential game. In high contention, although we prove the existence of an NE in the static case, the convergence property of our algorithm in a stochastic game is not explicitly analyzed. We show convergence through empirical results in Sec.\\ref{sec:eval}.\n\nNext, we describe the curiosity learning and credit assignment models in detail, which are key to the long-term algorithm.\n\n\\subsection{The Curiosity Model}\n\\label{subsec:longterm}\n\n\\begin{figure}[t]\n\\begin{minipage}[t]{0.48\\linewidth}\n\t \\begin{algorithm}[H]\n\t \\small\n\t \\begin{algorithmic}[1]\n\t \\STATE Initialize model parameters, $\\epsilon$ arbitrarily. 
Initialize $\\xi$\n\t \\WHILE{true}\n\t \\STATE Input $a_m^t$ and $S_m^t,S_m^{t+1}$ constructed from RL memory\n\t \\STATE Run feature extraction, get $\\phi_m^t$ and $\\phi_m^{t+1}$\n\t \\STATE Run forward model, get $\\hat \\phi_m^{t+1}$, calculate $L_f$\n\t \\STATE Run inverse model, get $\\hat a_m^t$, calculate $L_i$\n\t \\STATE Update model parameters\n\t \\STATE Infer from credit assignment, extract $\\epsilon_m^t$ from attention layer\n\t \\STATE Calculate and output $r_{i,m}^t$\n\t \\ENDWHILE\n\t \\end{algorithmic}\n\t \\caption{Curiosity learning algorithm}\n\t \\label{algorithmcuriosity}\n\t \\end{algorithm}\n\\end{minipage}\n\\hfill\n\\begin{minipage}[t]{0.48\\linewidth}\t \n\t \\begin{algorithm}[H]\n\t \\small\n\t \\begin{algorithmic}[1]\n\t \\STATE Initialize model parameters arbitrarily, initialize batch size $\\nu$\n \\STATE Input $r_{e,m}^t$ and $S_m^t,\\cdots,S_m^{t-\\nu+1}$, $u_m^t,\\cdots,u_m^{t-\\nu+2}$ from RL memory\n\t \\STATE Run feature extraction and get $\\phi_m^t,\\cdots,\\phi_m^{t-\\nu+1}$\n\t \\FOR{$\\tau \\gets t-\\nu+1$ to $t-1$}\n\t \\STATE Input $\\phi_m^\\tau$ to encoder, get encoder output $\\text{enc}_o$\n\t \\STATE Input $\\text{enc}_o,u_m^{\\tau+1}$ to decoder, get output $\\text{dec}_o^\\tau$\n\t \\ENDFOR\n\t \\STATE Input $\\phi_m^t$ to encoder, get $\\text{enc}_o$\n\t \\STATE Input $\\text{enc}_o,r_{e,m}^t$ to decoder, get $\\text{dec}_o^t$\t\n\t \\STATE Update model params, output $\\epsilon_m^t$ from attention layer\n\t \\end{algorithmic}\n\t \\caption{Credit assignment algorithm}\n\t \\label{algorithmattention}\n\t \\end{algorithm}\n\\end{minipage}\n\\end{figure}\n\nOur curiosity model is based on the vanilla model from \\cite{pathak2017curiosity}. They use feature extraction to identify features that can be influenced by the agent's actions, thus improving the model's generalization properties in new environments. 
In our competitive and dynamic environment, the next state depends not only on the current state but also on a number of historical states. We therefore extract features from current and historical records $S_m^t$. The resulting featurized state vector $\\phi_m^t=\\text{feature}(S_m^t)$ is also the input to the RL (Sec.\\ref{modelDescription}) and the credit assignment (Sec.\\ref{creditassign}). \n\n\\cite{pathak2017curiosity} uses a forward model and an inverse model to predict the next state and the taken action, respectively. These are supervised learning models with the objective to minimize the losses $L_f=\\| \\phi_m^{t+1}-\\hat\\phi_m^{t+1} \\|_2^2$ and $L_i=\\| \\mathbf{a}_m^t-\\hat{\\mathbf{a}}_m^t \\|_2^2$. One of the objectives of the forward and inverse models is to improve prediction accuracy of the consequences of the agent's actions, even without any reward signal. In our game setup, we have short-term intrinsic reward signals (which are, however, not aligned with, and potentially conflicting with, the extrinsic rewards); therefore, we adapt the input to include the previous intrinsic reward values, and the forward model's objective is to improve prediction accuracy of both the state and the intrinsic reward. \n\nIn \\cite{pathak2017curiosity}, the intrinsic reward is the weighted loss of the forward model: $r_{i,m}^t=\\xi L_f$, and the bigger the forward loss, the higher the intrinsic reward. Through this adversarial design, the model is encouraged to explore state-actions where the agent has less experience and prediction accuracy is low. The intrinsic rewards are inserted between sparse extrinsic rewards to improve learning efficiency despite the sparseness---the authors of \\cite{pathak2017curiosity} call this internal motivation ``curiosity-driven exploration''. 
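The curiosity losses and the vanilla intrinsic reward $r_{i,m}^t=\\xi L_f$ can be sketched as follows; the helper names are illustrative:

```python
import numpy as np

# Sketch of the curiosity losses: the forward model predicts the next
# featurized state, the inverse model reconstructs the taken action, and the
# (vanilla) intrinsic reward is the weighted forward loss.

def forward_loss(phi_next, phi_next_pred):
    # L_f: squared error of the forward model's next-feature prediction
    return float(np.sum((phi_next - phi_next_pred) ** 2))

def inverse_loss(a, a_pred):
    # L_i: squared error of the inverse model's action reconstruction
    return float(np.sum((a - a_pred) ** 2))

def intrinsic_reward(xi, l_f):
    # Poorly predicted transitions earn a higher reward, pushing exploration
    # toward unfamiliar state-actions.
    return xi * l_f
```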
In our approach, we apply the same method with a modified intrinsic reward definition: $r_{i,m}^t=\\xi L_f^t + (1-\\xi) \\epsilon u_{m}^t$, where $\\xi$ is a predefined weight factor to balance the two short-term objectives, and $\\epsilon$ is a weight factor from the credit assignment model (see below). The objective is to maximize $\\mathbb{E}_\\pi [\\sum_t r_{i,m}^t]-L_i-L_f$. Pseudocode is in Alg.~\\ref{algorithmcuriosity}. \n\n\\subsection{Credit Assignment Model}\n\\label{creditassign} \n\nThe credit assignment model uses a sequential network (a recurrent neural network as encoder and decoder) with an attention layer. Typically, such a sequential network is used to identify correlations between sequenced input elements $\\text{enc}_i$ and predict a corresponding sequence of output elements $\\hat{\\text{dec}}_o$. The sequential network is enhanced with an attention layer, which establishes relationships between any elements in the sequence, regardless of the distance between them. Our credit assignment model is inspired by \\cite{ijcai2020-368}, but differs in that we do not decompose the extrinsic reward. \n\nIn our credit assignment model, we are not interested in predicting $\\hat{\\text{dec}}_o$. Instead, we want to determine the contribution of each state-action pair towards the final extrinsic reward $r_{e,m}^t$. Therefore, we trigger the training of the credit assignment model only when there is a new signal $r_{e,m}^t$ at time $t$: this signal becomes the last element of the target vector. We train the model on the batch of $\\nu$ featurized state vectors $\\text{enc}_i=\\{\\phi_m^{t-\\nu+1},\\cdots,\\phi_m^{t}\\}$ with both short- and long-term rewards as the target vector, $\\text{dec}_o = \\{u_{m}^{t-\\nu+2},\\cdots,u_{m}^t,r_{e,m}^t\\}$. 
In each time step $\\tau \\in [t-\\nu+1,t]$, the attention layer generates a weight vector over the input vector $\\text{enc}_i$, marking its relevance to the current output prediction $\\hat{\\text{dec}}_o^\\tau$; in the last time step $t$, the attention layer outputs a weight vector $\\epsilon_m^t=\\{\\epsilon_1,\\cdots,\\epsilon_\\nu|\\sum_{i=1}^\\nu \\epsilon_i=1\\}$ over $\\text{enc}_i$ that marks each element's relevance to the last output $r_{e,m}^t$. Model parameters are updated with the mean square error between the generated output $\\hat{\\text{dec}}_o$ and the target vector $\\text{dec}_o$.\n\nThe weight vector $\\epsilon_m^t$ is then multiplied with the original utilities $u_m^t$. Through $\\epsilon_m^t$, short- and long-term rewards are aligned, even if they are conflicting in nature. Between sparse extrinsic rewards, only the forward network of the credit assignment model is run to infer a weight vector.\n\nThe features that make our algorithm truly long-term are: \\begin{inparaenum}[1)] \\item reward prediction, \\item more exploration in the early stages of learning, and \\item short- and long-term reward alignment through credit assignment. \\end{inparaenum} Points 1) and 2) are achieved through an adapted curiosity model (Sec.\\ref{subsec:longterm}). Point 3) is achieved through a hierarchical structure that uses an attentional network to learn and assign weights to short-term rewards based on their relevance to the long-term, sparse extrinsic reward; the learning process is only triggered when a new extrinsic reward becomes available (Sec.\\ref{creditassign}). Between the extrinsic reward signals, the FSP+curiosity model learns to better predict next states, actions, and intrinsic rewards (\\textbf{C4}). \n\nIn our setup, only the extrinsic reward is delayed; for the intrinsic reward, we measure offloading failure at the time of task admission. 
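The role of the attention weights $\\epsilon_m^t$ (summing to 1, rescaling past utilities by their relevance to the delayed extrinsic reward) can be sketched as follows; the softmax over raw scores stands in for the trained attention layer, which is an assumption for illustration:

```python
import numpy as np

# Sketch of the credit assignment weighting: a normalized weight vector
# epsilon (one entry per recent step) redistributes relevance over the nu
# most recent utilities. Softmax scores stand in for the attention layer.

def credit_weights(scores):
    # epsilon_m^t: non-negative relevance weights that sum to 1
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

def weighted_utilities(utilities, epsilon):
    # Short-term utilities rescaled by their estimated contribution to the
    # long-term reward (the epsilon * u term of the intrinsic reward).
    return epsilon * np.asarray(utilities)

eps = credit_weights(np.zeros(4))     # equal scores: uniform weights
```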
However, our algorithm can also learn with delayed intrinsic rewards, e.g., if offloading failure is measured after task execution. For the sake of simplicity, we assume that the failure rate measured before and after actual task execution is the same. We verify this assumption in the next section, where we show that, applying our solution, we reach a system responsiveness \\cite{avizienis2004basic} of 99\\% (i.e.\\ 99\\% of the admitted jobs at MEC are successfully processed before their deadlines).\n\n\n\\section{Evaluation}\n\\label{sec:eval}\n\nWe develop a Python discrete-event simulator, with a varying number of vehicles of infinite lifespan, one MEC with an ACA unit and an edge computing site, and one remote computing site (the extension to multiple ACA units and computing sites is left to future work). The edge and remote sites have different resource profiles. To imitate a realistic, noisy environment, the remote site is at some distance from the ACA unit, such that data transmission causes a non-negligible delay in state information updates. We also add noise to the delay and to the actual resource need, independently drawn from normal distributions (we define the parameters of the normal distributions before the simulation; the analysis of the impact of different simulation parameters is left to future work). Each vehicle is randomly and independently initialized with a budget of ``high'' or ``low'' with 50\\% probability. For the operating-side load-balancing policy, we apply state-of-the-art resource-intensity-aware load-balancing (RIAL) \\cite{8006307} with slight modifications. The method achieves dynamic load-balancing among computing sites through resource pricing that is correlated with each site's load, and loads are shifted to ``cheaper'' sites. 
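The outdated-information effect described above (utilization reports reaching the ACA unit only after a transmission delay, so admission control at time $t$ sees the state of an earlier step) can be sketched as a simple delay line; `DelayedSignal` is an illustrative name, not part of the simulator:

```python
from collections import deque

# Sketch of delayed state information: the freshest utilization value enters
# a FIFO buffer, and the ACA only observes the value from `delay` steps ago.

class DelayedSignal:
    def __init__(self, delay):
        self.buf = deque([None] * delay)   # nothing observed yet

    def push(self, fresh_value):
        self.buf.append(fresh_value)
        return self.buf.popleft()          # what the ACA observes now

link = DelayedSignal(delay=2)
observed = [link.push(u) for u in [0.1, 0.5, 0.9]]
# observed == [None, None, 0.1]: the ACA lags two steps behind reality
```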
The queueing time and processing time of each service request are initialized with constants, and as a computing site processes more of the same service type, the estimated queueing and processing times are drawn from the empirical distribution of past observations. Finally, we compare the performance of active agents (MALFOY on the user side, RIAL on the operating side, M+R) to passive agents (only RIAL on the operating side), as shown in Fig.\\ref{flow}. Evaluation data is collected from additional evaluation runs after the models are trained, with random incoming service requests newly generated by a two-state Markov-modulated Poisson process (MMPP) \\cite{wang2013characterizing}. \n\nWe evaluate the following metrics:\n\\begin{itemize}\n\\item \\textbf{Offloading failure rate (OFR):} Ratio of offloading requests rejected by ACA during admission control. As mentioned in Sec. \\ref{creditassign}, we observe M+R's responsiveness of $99\\%$, which is consistently higher than RIAL's for all results in the paper. Therefore, this is a close approximation of the ratio of failed offloading requests that are either rejected or not executed within the deadline.\n\\item \\textbf{Resource utilization:} Ratio of resources effectively utilized at the computing sites = (sum of utilized resource units over all resource types and all computing sites in the current time step) \/ (sum of total resource units over all resource types and all computing sites at any time). \n\\item \\textbf{Rebidding overhead:} If a bid is rejected before its deadline, the vehicle can bid again. More rebidding causes communication overhead, but less rebidding reduces the chance of success. We study this tradeoff, comparing the average number of actual rebiddings per vehicle within the maximum permitted rebidding (MP).\n\\end{itemize}\n\n\\begin{figure*}[t]\n\t\\centering\n\t\\subcaptionbox{Failure rate comparison with MP of 1 and 5 times, respectively. 
The x-axis is system resource capacity: to the right, system capacity increases, creating a low-contention scenario. M+R\\_1 reduces OFR by $40\\%$ and achieves $1\\%$ OFR in low contention; RIAL only reaches $2\\%$.\\label{successbycapa}}{\\includegraphics[width=0.45\\linewidth]{Failureratewithdifferentresourcecapacity.pdf}}\\hspace{1em}\n\t\\subcaptionbox{Comparison of resource capacity needs, given OFR service level requirements. The x-axis is the required OFR level: to the right, a stricter low-failure-rate requirement applies, and resource capacity is increased to meet it. For the same OFR, M+R\\_1 needs much less resource; e.g., for $2\\%$ OFR and MP=1, M+R\\_1 needs $38\\%$ less resource.\\label{capause}}{\\includegraphics[width=0.45\\linewidth]{capacityUse_failurerate.pdf}}\\hfill\n\t\\subcaptionbox{Rebidding overhead comparison with MP=1. The x-axis is resource capacity; to the right is low contention with high capacity. M+R\\_1 reduces rebidding overhead by $32\\%$ on average. \\label{rebiddingbycapa}}{\\includegraphics[width=0.45\\linewidth]{rebid=5_boxplot.pdf}}\\hspace{1em}\n\t\\subcaptionbox{Remote site resource utilization comparison with MP=1. M+R\\_1 utilizes resources $18\\%$ more than RIAL in high contention. In low contention, capacity is less critical and utilization is similar. M+R\\_1 reduces the standard deviation of utilization by up to $21\\%$.\\label{utilizationbycapa}}{\\includegraphics[width=0.45\\linewidth]{utilization_boxplot_site0_rebid=1.pdf}}\n\t\\vspace*{-0.2cm}\n\t\\caption{Performance comparison between: MALFOY+RIAL with immediate reward within 1 time step (M+R\\_1), and only RIAL, with maximum permitted rebidding (MP) of 1 and 5 times. (a): offloading failure rate (OFR) vs.\\ system resource capacity, (b): required capacity to reach a given OFR requirement, (c): rebidding overhead, (d): resource utilization at varying levels of capacity}\n\t\\label{performancebycapa}\n\\end{figure*}\n\nWe test our approach in two steps. 
First, we comprehensively study the performance of active agents with the MALFOY algorithm in a synthetic setup with a reward signal that becomes available to the agents at the end of every time step---this setup is the same as in \\cite{tan2022multi} and is a special case of long-term reward maximization with a reward interval of $1$ time step (denoted M+R\\_1). In this setup, we simplify the modeling of the communication channel and vehicle mobility and focus on analyzing the effect of environmental parameters on the learning process---system resource capacity, maximum permitted number of rebiddings, and a large number of different service types.\n\nNext, we use a realistic setup with a delayed extrinsic reward signal every $2000$ time steps (denoted M+R\\_2000) to train and evaluate our model, showing the long-term effects of learning in the training environment. A generalizable model should be able to run in a different test environment without retraining and still achieve good performance. Therefore, to demonstrate our model's generalization properties, we initialize the agents with the trained models from the training environment, and we run them again in the test environment without retraining. The two environments differ in the number of vehicles, speed, arrival rate, traffic light phases, and system resource capacity; details are in Sec. \\ref{subsec:eval_real} and Fig. \\ref{generalization}.\n\nThe intrinsic reward signal includes the immediate bidding outcome, payment, and system resource utilization; the extrinsic reward is the vehicle's cumulated gain from repeated auctions since the previous reward signal. 
In this setup, we keep the environmental parameters constant and model a 4-way traffic intersection, data transmission delay and vehicle mobility (i.e., speed, number of vehicles in range, traffic light phases, etc.).\n\n\\subsection{Synthetic setup}\n\\label{subsec:eval_hypo}\n\nIn this setup, we cover a wide range of hypothetical scenarios by varying parameters such as system capacity, service\/task types and number of rebiddings. \\begin{inparaenum}[1)] \n\\item Task types by resource needs in time-resource units: F1: 3 units, and F2: 30 units. We assume that tasks can be executed on multiple CPUs, such that the processing time is the reciprocal of the resource amount allocated, and the product of processing time and resource amount is the constant value of resource needs. A simplification is the assumption of independence between the two types of resources. If the durations calculated from the allocations of the two resource types differ, we take the longer duration as the processing time.\n\\item Service types by deadline and probability: F1, $300$ms: $18.75\\%$; F1, $50$ms: $18.75\\%$; F2, $300$ms: $6.25\\%$; F2, $50$ms: $6.25\\%$; F1-F2, $300$ms: $18.75\\%$; F1-F2, $50$ms: $18.75\\%$; F2-F1, $300$ms: $6.25\\%$; F2-F1, $50$ms: $6.25\\%$. We predefine the distribution from which the service types are drawn. More detailed analysis of these hyperparameters is left to future work.\n\\item Service arrival rate per vehicle: randomized according to the MMPP, with our predefined parameters $\\lambda_\\text{high} \\in (0.48,0.6), \\lambda_\\text{low} \\in (0,0.12)$ and transition probabilities $p_\\text{high}=p_\\text{low}=0.6$.\n\\item Capacity: $50$-$230$ resource units. \n\\item Maximum permitted rebidding: $1$ or $5$ times. \n\\item Vehicle count: constant at $30$. \n\\item Vehicle arrival rate: $0$, always in the system; speed: $0$. 
\n\\item Data size: uniformly random between $2.4$-$9.6$kbit. \n\\item Uplink and downlink latency: $0$.\n\\item Extrinsic reward signal interval: $1$ time step.\n\\end{inparaenum}\n\n\n\t\\begin{figure}[t]\n\t\t\\centering\n\t\t\\begin{minipage}{\\linewidth}\n\t\t\\centering\n\t\t\\subcaptionbox{CDF of individual OFRs (capacity=70, MP=1): M+R\\_1 does not sacrifice individual OFR to improve system performance: the CDF curves of each vehicle's OFR move to the left in both budget categories. M+R\\_1 is also fairer: vehicles with a low budget reduce their failure rate more than those with a high budget. \\label{cdf}}{\\includegraphics[width=0.9\\linewidth]{allocation_highResCapa_budgets_vehicleFailureRateCdf.pdf}}\\hfill\n\t\t\\subcaptionbox{Tradeoff between backoff and bidding price, for tasks with long and short deadlines: an example with capacity=50, MP=5. Vehicles that bid low (high) use long (short) backoff. Vehicles learn to utilize backoff to overcome budget disadvantage. \\label{backoffprice}}{\\includegraphics[width=0.45\\linewidth]{backoff_capa=50_rebid=5_priceRange.pdf}}\\hspace{1em}\n\t\t\\subcaptionbox{The same backoff-price tradeoff at different capacity levels; an example with MP=5 and tasks with a long deadline: backoff time decreases as capacity increases, but the tradeoff effect remains.\\label{backoffpricebycapa}}{\\includegraphics[width=0.45\\linewidth]{backoffBudget_highContention_longdeadline_rebid=5.pdf}}\n\t\t\\end{minipage}\n\t\t\\vspace*{-0.2cm}\n\t\t\\caption{Cumulative Distribution Function (CDF) of individual vehicles' offloading failure rates (OFR), backoff and price tradeoff}\n\t\t\\vspace*{-0.2cm}\n\t\t\\label{backoff}\n\t\\end{figure}\n\nAs demonstrated in Figures \\ref{successbycapa} and \\ref{capause}, our active agents adapted to an environment with delayed information and learned to better utilize computing site resources. 
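The two-state MMPP service-arrival model from item 3) of the setup above can be sketched as follows (a minimal sketch; the concrete rate values and the reading of $p_\text{high}=p_\text{low}=0.6$ as a per-step switching probability are our assumptions):

```python
import math
import random

def mmpp_arrivals(steps, lam_high=0.54, lam_low=0.06, p_switch=0.6, seed=0):
    """Per-time-step service arrival counts from a two-state MMPP.
    lam_high/lam_low are illustrative values from the stated ranges
    (0.48, 0.6) and (0, 0.12); p_switch is our reading of the
    transition probabilities as the per-step chance of switching state."""
    rng = random.Random(seed)
    counts, state_high = [], True
    for _ in range(steps):
        lam = lam_high if state_high else lam_low
        # Poisson sampling via inversion (Knuth's multiplication method)
        n, p, threshold = 0, 1.0, math.exp(-lam)
        while True:
            p *= rng.random()
            if p <= threshold:
                break
            n += 1
        counts.append(n)
        if rng.random() < p_switch:
            state_high = not state_high
    return counts
```

Averaged over many time steps, the realized arrival rate then lies between $\lambda_\text{low}$ and $\lambda_\text{high}$.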
Fig.\\ref{utilizationbycapa} shows how M+R\\_1 increases computing site utilization in high contention and reduces load variation. When more rebidding is permitted, low OFR can be achieved by trial-and-error, and the advantage of MALFOY's backoff strategy is limited. That is why a higher MP reduces M+R's advantage over RIAL. However, trial-and-error comes with a cost: Fig.\\ref{rebiddingbycapa} compares the rebidding overhead used by both algorithms when MP=$5$. In high contention, both active and passive agents rely on rebidding, and the difference in rebidding overhead is small. M+R's advantage becomes more significant as capacity increases. \n\nFig.\\ref{cdf} shows the cumulative probability of vehicles' individual OFRs. With MALFOY, as the overall system OFR decreases, the individual OFRs decrease accordingly: the auction does not cause disadvantage to individual bidders. Moreover, vehicles with a lower budget improve by a greater margin: they learn to utilize the backoff mechanism to overcome their disadvantage in initial parameterization. Fig.\\ref{backoffprice} shows how vehicles learn to trade off between bidding price and backoff time. They are separated into two groups: a vehicle is in the ``low price'' group if it bids on average lower than the average bidding price of all vehicles; otherwise, it is in the ``high price'' group (here we analyze actual bidding prices instead of the predefined budgets). When service requests have a longer deadline, vehicles in both price groups learn to utilize longer backoff. ``Low price'' vehicles always use longer backoff. Fig.\\ref{backoffpricebycapa} shows that the tradeoff is present at all capacity levels. 
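The grouping used in Fig.\ref{backoffprice} can be stated compactly (a minimal sketch; the function name is ours):

```python
def price_groups(avg_bids):
    """Split vehicles into the two groups of the backoff-price analysis:
    a vehicle is 'low price' if its average bid is below the average
    bidding price over all vehicles, and 'high price' otherwise."""
    overall = sum(avg_bids) / len(avg_bids)
    low = [i for i, b in enumerate(avg_bids) if b < overall]
    high = [i for i, b in enumerate(avg_bids) if b >= overall]
    return low, high
```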
\n\nTo summarize: Fig.\\ref{performancebycapa} demonstrates MALFOY's excellent overall system performance; Fig.\\ref{cdf} shows that the system objective is aligned with individual objectives through incentivization (\\textbf{C1}), and especially in Fig.\\ref{backoff}, differently initialized agents learn to select the most advantageous strategy based on a limited feedback signal (\\textbf{C2}). The capability to learn and behave accordingly makes our agents highly flexible in a dynamic environment. \n\n\\subsection{Realistic setup}\n\\label{subsec:eval_real}\n\n\t\\begin{figure*}[t]\n\t\t\\centering\n\t\t\\begin{minipage}{\\linewidth}\n\t\t\\centering\n\t\t\\subcaptionbox{Training environment: low contention with abundant resource, traffic phase=$10$-$40$s, low vehicle speed ($10$km\/h), low arrival rate ($1\/2.2\\text{s}$), low variation in vehicle count ($22$-$30$): OFR in training (left) and evaluation (right). M+R\\_2000 with a long-term objective learns faster in training and outperforms both M+R\\_1, which has only a short-term objective, and RIAL. \\label{general-train}}\n\t\t\t \t\t\t{ \\includegraphics[width=0.45\\linewidth]{performanceDraco0Capa30TrainNew_failureratebytime.pdf}\\hfill\n\t\t\t\t\t\t \\includegraphics[width=0.45\\linewidth]{performanceDraco0Capa30Eval_failureratebytime.pdf}\n\t\t\t\t\t\t}\\hfill\n\t\t\\subcaptionbox{Test environment: high contention with limited resource, traffic phase=$20$s, high vehicle speed ($30$km\/h), high arrival rate ($1\/1\\text{s}$), high variation in vehicle count ($14$-$30$): vehicle count over time (left) and OFR (right). Vehicle count shows the volatility of the test environment. OFR performance of M+R\\_2000 with a long-term objective is even more distinguishable from M+R\\_1 and RIAL. 
\\label{general-test2}}\n\t\t\t\t\t\t{ \\includegraphics[width=0.45\\linewidth]{vehicleCountByTime_test.pdf}\\hfill\n\t\t\t\t\t\t \\includegraphics[width=0.45\\linewidth]{performanceDraco0Capa20Test_failureratebytime.pdf}\n\t\t\t\t\t\t}\\hfill\n\t\t\\end{minipage}\n\t\t\\vspace*{-0.2cm}\n\t\t\\caption{Comparison of offloading failure rate (OFR) in training and test environments, between: MALFOY+RIAL with external reward delay of 2000 time steps (M+R\\_2000), MALFOY+RIAL with immediate reward (M+R\\_1), and RIAL only.}\n\t\t\\label{generalization}\n\t\\end{figure*}\n\nIn this setup, we adopt the data patterns of segmentation and motion planning applications extracted from various self-driving data projects \\cite{cordts2016cityscapes} and referenced from relevant studies \\cite{chen2017importance,broggi2014proud}. We also use Simulation of Urban Mobility (SUMO) \\cite{behrisch2011sumo} to create a more realistic mobility model of a single junction with a centered traffic light. Map information for the junction is downloaded from OpenStreetMap. Assuming the 802.11ac protocol, we place the ACA unit in the middle of the graph and limit the edges to within 65m of the ACA unit. The road network has two lanes per street per direction; SUMO creates vehicles uniformly at random at any one of the four edges. Also, in the realistic setup, we consider a sparse and delayed reward signal with an interval of $2000$ time steps.\n\nParameters of the setup are as follows \\cite{cordts2016cityscapes,chen2017importance,broggi2014proud}: \\begin{inparaenum}[1)]\n\\item Task types: F1: $80$ units, and F2: $80$ units. \n\\item Service types and deadline: F1: $100$ms and F2: $500$ms. \n\\item Service arrival rate per vehicle: fixed at F1: every $100$ms, and F2: every $500$ms. \n\\item Capacity: $20$ in high contention, $30$ in low contention. \n\\item Maximum permitted rebidding: $1$. \n\\item Vehicle count: $14$-$30$ from simulated trace data. 
\n\\item Vehicle arrival rate: constantly at $1$ every $1$ or $2.2$ seconds; speed: $10$ or $30$ km\/h when driving. \n\\item Data size: uplink: F1: $0.4$Mbit, F2: $4$Mbit. Downlink: F1: $0$ (negligible), F2: $0.4$Mbit. \n\\item Latency: we take the 802.11ac protocol, which covers a radius of 65 meters, and assume a maximum throughput of ca. $1.69$ Gbps. We model the throughput as a function of distance to the ACA unit: throughput=$-26 \\times \\text{distance} + 1690$ Mbps \\cite{shah2015throughput}. If there are $N$ vehicles transmitting data to the ACA unit, we assume that each gets $1\/N$ of the maximum throughput at that distance.\n\\item Extrinsic reward signal interval: $1$ or $2000$ time steps.\n\\end{inparaenum}\n\nAs mentioned in Sec.\\ref{subsubsec:servicerequest}, the uplink and downlink time, service request arrival rate and service deadlines are based on the requirements of semantic segmentation and motion planning applications. If the vehicle expects its position before the service deadline to be out of range of the MEC, the service request is dropped without any performance measurement.\n\nA higher vehicle arrival rate and slower driving speed typically lead to high contention. By changing the arrival rate and speed in the simulation, we create high- and low-contention scenarios alternately. \n\nFor training, we set the traffic light phases to $10$-$40$s of green for each direction, alternately. We train and test our active agents with MALFOY in low contention, with the reward signal interval at $1$ and $2000$ time steps, denoted M+R\\_1 and M+R\\_2000. Fig.\\ref{general-train}-left shows that M+R\\_1 converges to an OFR of $1.4$\\%, and M+R\\_2000 converges much faster to an even lower failure rate. 
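The latency model of item 9) above translates directly into code (a minimal sketch; the function name and the example values are ours):

```python
def per_vehicle_throughput_mbps(distance_m, n_vehicles):
    """802.11ac throughput model from the setup: total throughput is
    -26 * distance + 1690 Mbps within the 65 m radius, and each of the
    N transmitting vehicles gets a 1/N share at its distance."""
    if not 0 <= distance_m <= 65:
        raise ValueError("vehicle is outside the 65 m coverage radius")
    total = -26.0 * distance_m + 1690.0
    return total / max(n_vehicles, 1)

# Example: uplink time for an F2 request (4 Mbit) at 30 m with 5
# vehicles sharing the channel.
uplink_s = 4.0 / per_vehicle_throughput_mbps(30.0, 5)
```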
Then we evaluate the trained models in the same environment with newly simulated trace data from SUMO (Fig.\\ref{general-train}-right): M+R\\_1 still reaches an OFR of $4$\\%, a reduction of $18\\%$ compared to RIAL; M+R\\_2000 further reduces the failure rate by $34\\%$ compared to M+R\\_1.\n\n\t\\begin{figure}[t]\n\t\t\\centering\n\t\t\\begin{minipage}{0.95\\textwidth}\n \\centering\n\t\t\\includegraphics[width=0.6\\textwidth]{sensitivity3d.pdf}\\hfill\n\t\t\\end{minipage}\n\t\t\\vspace{-0.2cm}\n\t\t\\caption{Vehicles' individual OFR is not sensitive to private bid values $v$ and back-off cost $q$ (normalized).}\n\t\t\\label{sensitivity}\n\t\\end{figure}\n\nThen, we test (i.e.\\ without retraining) the trained MALFOY models in a significantly different environment, changing traffic light phases, vehicle arrival rate and speed to make the environment more volatile and dynamic, and reducing capacity to create a high-contention situation. The resulting vehicle count over time (Fig.\\ref{general-test2}-left) shows much heavier and more frequent fluctuations than in the original training environment. Note that vehicle count and OFR do not vary synchronously---OFR is determined by vehicle count and numerous other complicating factors such as transmission, queueing and processing time, past utilization, etc. Despite the significant changes to the environment, and without requiring any further training, M+R\\_1 reduces the failure rate by $20\\%$ compared to RIAL, and M+R\\_2000 further reduces it by $23\\%$ (Fig.\\ref{general-test2}-right). \n\nFig.\\ref{general-train} shows good convergence speed despite the computational and communication complexity of the problem (\\textbf{C3}). Fig.\\ref{general-test2} shows that MALFOY has very good generalization properties---in fact, in the more volatile and dynamic environment, the superiority of active agents becomes more obvious. 
With the capability to predict long-term impacts of each action, MALFOY shows even better performance and generalization properties (\\textbf{C4}). With little need for retraining in a new environment, the computation delay is only the time for model inference. Measuring inference time on a vehicle OBU is partially dependent on the hardware, and it is therefore outside the scope of this study.\n\nAdditionally, we randomize each vehicle's private bid values $v$ and the back-off cost $q$, to analyze how sensitive the individual offloading failure rate (OFR) is to changes in $v$ and $q$. Results show that the changes in $v$ and $q$ have almost no impact on the individual OFR; the Pearson coefficient values are $0.008$ (p-value=$0.5$) and $0.007$ (p-value=$0.5$), respectively. Fig. \\ref{sensitivity} visualizes this result. This and the results in Fig. \\ref{backoff} demonstrate the robustness of our auction mechanism: vehicles learn to compensate for differences in initial parameterization through the trade-off between bidding price and backoff time, without impact on individual OFR.\n\nTo summarize: results in the synthetic setup show that, compared to only having a centralized load-balancing solution at the MEC, MALFOY succeeds in incentivizing each autonomous vehicle to add to the load-balancing effect in a distributed manner, which significantly increases resource utilization in high contention, and reduces the capacity needed to reach the same service level. It achieves this by letting each vehicle independently decide how to trade off between backoff time and bidding price. Results in the realistic setup show MALFOY's excellent generalization properties in different realistic environments, making it a potential add-on to any existing centralized solutions at the MEC. A sensitivity analysis shows the robustness of our solution.\n\n\n\\section{Conclusion}\n\\label{sec:conclusion}\n\nOur agents learn how to best utilize the backoff option based on their initialization parameters. 
As a result, the agents achieve significant performance gains in very different environments. MALFOY can utilize long-term, sparse reward signals and has enhanced predictive power, as well as better alignment between short-term and long-term goals. When acting with a long-term objective, it shows further performance improvements. Our interaction mechanism aligns private and system goals without sacrificing either user autonomy or system-wide resource efficiency, despite the distributed design with limited information-sharing.\n\nThe algorithm is therefore applicable to a wide range of distributed resource allocation problems in a dynamic and adversarial environment with no or very limited \\textit{a priori} information, a large number of autonomous users with private goals, and a large number of custom service requests. We find such applications in, e.g., telecommunications, energy, Internet of Things, vehicular networks, cloud computing, etc.\n\nIn this paper, we fix the hyperparameters of the algorithm for the simulation, such as penalty costs related to backoff decisions and lost bids, each agent's preferences of long and short-term objectives, etc. A meta-learning algorithm that learns the best hyperparameters is left to future work. Besides, we assume there is no ``malicious'' agent with the goal of reducing social welfare or attacking the system.\n\n\n\\section{Appendix}\n\\label{sec:appendix}\n\n\\subsection{Summary: theoretical results}\n\\subsubsection{Low contention}\n\\label{lowContention}\n\nWe show that in low contention, the interaction mechanism is a potential game with an NE. We use the concept of potential functions to do so \\cite{monderer1996potential}:\n\n\\begin{defi} $G(I,A,u)$ is an exact potential game if and only if there exists a potential function $\\phi: A \\to \\mathbb{R}$ s.t. 
$\\forall i \\in I$ and $\\forall b_i, b'_i$ with $b \\in A$, $u_i(b_i,b_{-i})-u_i(b'_i,b_{-i})=\\phi(b_i,b_{-i})-\\phi(b'_i,b_{-i})$.\n\\end{defi} \n\n\\begin{rem}\\label{potentialNE} Players in a finite potential game that jointly maximize a potential function end up in an NE. \\end{rem}\n\n\\begin{proof} See \\cite{monderer1996potential}. \\end{proof}\n\n\\begin{thm} When bidders with utility as in Eq.\\ref{eq:reward1} participate in a game as described in Sec.\\ref{sec:problem} in low contention, the game is a potential game, and the outcome is an NE.\\end{thm}\n\n\\begin{proof}\nIn low contention, $p_{i,k}=0$, as all bids are accepted. $u_i$ is reduced to: $u_i(\\alpha_i,\\alpha_{-i})=\\sum\\limits_{k}q_{i,k}-\\sum\\limits_{k}\\alpha_{i,k} q_{i,k} + W \\Big(1-\\sum_j \\alpha_j \\cdot \\frac{\\omega_j}{C}\\Big)$, where $-i$ denotes bidders other than $i$. $\\omega_j \\in \\mathbb{R}^{|K|}$ is each bid's resource requirement, $C$ is the system capacity. Thus, the auction is reduced to a potential game with discrete action space $\\alpha_i \\in \\{0,1\\}^{|K|}$, and potential function $\\phi(\\alpha_i,\\alpha_{-i})=\\sum\\limits_{j, k}q_{j,k}-\\sum\\limits_{j,k} \\alpha_{j,k}q_{j,k} + W\\Big(1-\\sum_j \\alpha_j \\cdot \\frac{\\omega_j}{C} \\Big), \\forall i,j \\in I, \\forall k \\in K$. \n\nWe prove in Appendix \\ref{appendix:potentialGame} that $u_i(\\alpha_i,\\alpha_{-i})-u_i(\\alpha'_i,\\alpha_{-i})=\\phi(\\alpha_i,\\alpha_{-i})-\\phi(\\alpha'_i,\\alpha_{-i})$, and hence it is a potential game, and bidders maximizing their utilities $u_i$ also maximize the potential function $\\phi$. Since $\\alpha_i \\in \\{0,1\\}^{|K|}$, it is a finite potential game. According to Remark \\ref{potentialNE}, the outcome is an NE.\n\\end{proof}\n\nIn low contention, our computation offloading problem becomes a potential game. This enables us to use online learning algorithms such as in \\cite{perkins2014game} that converge regardless of other bidders' behaviors. 
The NE is a local maximum of the potential function: each bidder finds a balance between its backoff cost and the incentive to reduce overall utilization. Empirical results in Sec.\\ref{sec:eval} confirm that over time this results in a more balanced load.\n\n\\subsubsection{High contention}\n\\label{highContention}\n\nIn high contention, $\\alpha$ is used in a repeated auction to avoid congestion and ensure better reward over time. To simplify the proofs, we consider only the time steps where $\\alpha=1$ (bidder joins auction). We also take a small enough $W$, such that the last term in Eq.\\ref{eq:reward1} can be omitted in high contention, to further simplify the utility function in the proof.\n\n\\begin{thm}\\label{thm:spa} In a second-price auction, where bidders with utility as in Eq.\\ref{eq:reward1} compete for service slots as commodities in high contention, \\begin{inparaenum}[1)] \\item bidders' best-response is of linear form, \\item the outcome is an NE and \\item welfare is maximized.\\end{inparaenum}\n\\end{thm}\n\n\\begin{proof} See Appendix \\ref{appendix:SPAwithpenalty}.\\end{proof} \n\nWhen bidders bid for service slots, the required resources are allocated. Theorem \\ref{thm:spa} guarantees the maximization of welfare (the total utility of bidders), but it does not guarantee the optimality of the resource allocation, unless the following conditions are met: the bidders' valuation of the commodity is linear in its resource requirement, and all bidders have some access to resources (fairness). 
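The exact-potential identity behind the low-contention result above can be sanity-checked numerically; the following minimal sketch (toy values and binary $\alpha_{j,k}$ are our own choices) verifies that a unilateral deviation changes $u_i$ and $\phi$ by the same amount:

```python
import random

def u_i(i, alpha, q, w, W, C):
    """Low-contention utility: u_i = sum_k q[i][k] - sum_k alpha[i][k]*q[i][k]
    + W * (1 - total weighted load / C)."""
    backoff = sum(q[i]) - sum(a * c for a, c in zip(alpha[i], q[i]))
    load = sum(a * r for ra, rr in zip(alpha, w) for a, r in zip(ra, rr))
    return backoff + W * (1 - load / C)

def phi(alpha, q, w, W, C):
    """Potential function: same form, summed over all bidders j."""
    total = sum(sum(row) for row in q) - sum(
        a * c for ra, rc in zip(alpha, q) for a, c in zip(ra, rc))
    load = sum(a * r for ra, rr in zip(alpha, w) for a, r in zip(ra, rr))
    return total + W * (1 - load / C)

# Random toy instance: deviating bidder 0 alone changes u_0 and phi
# by exactly the same amount (the exact-potential property).
rng = random.Random(1)
I, K, W, C = 4, 3, 2.0, 50.0
q = [[rng.random() for _ in range(K)] for _ in range(I)]
w = [[rng.uniform(1, 5) for _ in range(K)] for _ in range(I)]
a1 = [[rng.choice([0, 1]) for _ in range(K)] for _ in range(I)]
a2 = [row[:] for row in a1]
a2[0] = [1 - x for x in a1[0]]  # unilateral deviation by bidder 0
du = u_i(0, a1, q, w, W, C) - u_i(0, a2, q, w, W, C)
dphi = phi(a1, q, w, W, C) - phi(a2, q, w, W, C)
```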
\n\n\\begin{cor}\\label{pareto} In a second-price auction, where $M$ bidders with utility as in Eq.\\ref{eq:reward1} compete in high contention, the outcome is an optimal resource allocation, if the bidders' valuation of commodities is linear in the resource requirement and all bidders have a positive probability of winning.\n\\end{cor}\n\n\\begin{proof} See Appendix \\ref{appendix:paretoOptimal}.\\end{proof} \n\nOur setup meets both conditions.\n\n\\subsection{Proof of potential game}\n\\label{appendix:potentialGame}\n\n\\begin{proof} \n\nWe define player $i$'s utility as $u_i(\\alpha_i,\\alpha_{-i})=\\sum\\limits_{k \\in K}q_{i,k}-\\sum\\limits_{k \\in K}\\alpha_{i,k} q_{i,k} + W \\Big(1-\\frac{\\sum_j \\alpha_j \\cdot \\omega_j}{C}\\Big)$, where $\\omega_j \\in \\mathbb{R}^{|K|}$ is the resource requirement of each commodity, and $C$ is the system capacity. \n\nWe define the potential function: $\\phi(\\alpha_i,\\alpha_{-i})=\\sum\\limits_{j \\in I, k \\in K}q_{j,k}-\\sum\\limits_{j \\in I, k \\in K} \\alpha_{j,k}q_{j,k}+W\\Big(1-\\frac{\\sum_j \\alpha_j \\cdot \\omega_j}{C} \\Big)$. 
\n\nTo simplify, we substitute $Q_i=\\sum\\limits_{k\\in K}q_{i,k}$, $A_i=\\sum\\limits_{k \\in K}\\alpha_{i,k} q_{i,k}$, $A_{-i} = \\sum\\limits_{j \\in I, j \\neq i, k \\in K}\\alpha_{j,k} q_{j,k}$, $B_i=\\sum\\limits_k \\alpha_{i,k} \\omega_{i,k}$, $B_{-i}=\\sum\\limits_{j \\in I, j \\neq i, k \\in K}\\alpha_{j,k} \\omega_{j,k}$, and rewrite: $u_i(\\alpha_i,\\alpha_{-i})=Q_i-A_i+W-\\frac{W}{C}(B_i+B_{-i})$, $u_i(\\alpha'_i,\\alpha_{-i})=Q_i-A'_i+W-\\frac{W}{C}(B'_i+B_{-i})$, $\\phi(\\alpha_i,\\alpha_{-i}) = \\sum\\limits_j Q_j-(A_i+A_{-i})+W-\\frac{W(B_i+B_{-i})}{C}$, $\\phi(\\alpha'_i,\\alpha_{-i}) = \\sum\\limits_j Q_j-(A'_i+A_{-i})+W-\\frac{W(B'_i+B_{-i}) }{C} \\implies u_i(\\alpha_i,\\alpha_{-i})-u_i(\\alpha'_i,\\alpha_{-i}) =-(A_i-A'_i)-\\frac{W}{C}(B_i-B'_i) =\\phi(\\alpha_i,\\alpha_{-i})-\\phi(\\alpha'_i,\\alpha_{-i})$.\n\\end{proof}\n\nSince $\\alpha_i \\in \\{0,1\\}^{|K|}$, the game under low contention is a finite potential game.\n\n\\subsection{Second-price auction}\n\\label{appendix:SPAwithpenalty}\n\nUnder high contention, as defined in Sec.\\ref{payment}, $u_i$ is reduced to: \\begin{flalign}\\label{eq:ui}\nu_i= \\sum\\limits_{k \\in K} \\Big(z_{i,k} \\cdot (v_{i,k}-p_{i,k})-(1-z_{i,k}) \\cdot c_{i,k} \\Big)\n\\end{flalign}\n\nWe prove the theorem for $|M|=2$ and $|K|=1$; it extends the approach of \\cite{sun2006wireless}. Unlike \\cite{sun2006wireless}, we include in the utility definition the second-price payment and the cost for losing a bid. Based on \\cite{sun2006wireless}, it can also be easily extended to multiple bidders. \n\n\\subsubsection{Basic model}\n\nTwo bidders receive continuously distributed valuations $v_i \\in [l_i,m_i], i \\in\\{1,2\\}$ for one commodity, and choose their strategies $f_1(v_1),f_2(v_2)$ from the strategy sets $F_1$ and $F_2$. The resulting NE strategy pair is $(f_1^*, f_2^*)$. Any strategy function $f(v)$ is increasing in $v$, with $f_1(l_1)=a_1$ and $f_1(m_1)=b_1$. 
We also assume the users have budgets $(B_1,B_2)$, and that they cannot bid more than the budget. We define the cost for losing the bid as $c_i$.\nFurthermore, we define the inverse function of $f_1(v_1)$ to be: $h_1(y_1)= l_1 \\text{, if } y_1\\leq a_1 \\text{; } h_1(y_1)=f_1^{-1}(y_1) \\text{, if } a_1 < y_1 \\leq b_1 \\text{; } h_1(y_1)= m_1 \\text{, if } y_1 > b_1$.\n\n\\begin{defi}\\label{hemicontinuity} Correspondence $S: \\Psi \\to \\Xi$ is upper hemicontinuous at $\\psi_1 \\in \\Psi$, if $\\forall \\epsilon>0$, $\\exists \\delta>0$ s.t.: if $\\psi_2 \\in \\Psi$ and $||\\psi_2-\\psi_1||<\\delta$, then $S(\\psi_2) \\subset B_\\epsilon(S(\\psi_1))$, where $B_\\epsilon(x)$ denotes the $\\epsilon$-ball around $x$. Correspondence $S$ is lower hemicontinuous, if for any open set $U \\subset \\Xi$ with $S(\\psi_1) \\cap U \\neq \\emptyset$, $\\exists \\epsilon>0$, s.t. $\\forall \\psi_2 \\in B_\\epsilon(\\psi_1)$, $S(\\psi_2)\\cap U \\neq \\emptyset$.\\end{defi}\n\n\\begin{lem}\\label{lem:s1_upperhemi} Let bidder 2's feasible strategies $j_2$ be in a set $\\Psi$, and let bidder 1's strategies $A=S_1(j_2),j_2 \\in \\Psi$ be in a set $\\Xi$. The correspondence $S_1: \\Psi \\to \\Xi$ is continuous at all $j_2$.\n\\end{lem}\n\n\\begin{proof}\n$\\forall j_2 \\in \\Psi$ and an $\\epsilon$-ball around $S_1(j_2)$, we can find a range $\\delta$ around $j_2$, s.t. any $j'_2 \\in \\Psi, ||j'_2-j_2||< \\delta$, has $S_1(j'_2)$ within the $\\epsilon$-ball around $S_1(j_2)$. This is apparent, since for any given best response parameter $j'_2$ in the neighborhood of $j_2$, the corresponding strategy set in $S_1(j_2)$ would be a set of $j'_1$ that is in the neighborhood of $j_1$ (upper hemicontinuity). It is proven in \\cite{dutta1989maximum} that if the graph $G(S_1)$ is convex when $S_1(j_2)$ is monotone increasing, then $S_1$ is lower hemicontinuous. In our case, due to the linear form, and according to Lemma \\ref{lem1.1}, $S_1$ is lower hemicontinuous. Therefore, $S_1$ is continuous \\cite{dutta1989maximum}. 
\n\\end{proof}\n\n\\begin{thm}[Berge's maximum theorem \\cite{ok2007real}]\\label{berge} Let $\\Xi,\\Psi$ be topological spaces, $u_1:\\Xi \\times \\Psi \\to \\mathbb R$ be a continuous function on the product space, and $S_1: \\Psi \\to \\Xi$ be a compact-valued correspondence s.t. $S_1(j_2) \\neq \\emptyset$, $\\forall j_2 \\in \\Psi$. Define $u_1^*(j_2)=\\sup\\{u_1(j_1,j_2):j_1 \\in S_1(j_2)\\}$, where $\\sup$ denotes the supremum of $u_1$, and the set of maximizers $S_1^*:\\Psi \\to \\Xi$ by:\n$S_1^*(j_2)=\\arg \\sup\\{ u_1(j_1,j_2):j_1 \\in S_1(j_2)\\} = \\{j_1 \\in S_1(j_2): u_1(j_1,j_2)=u_1^*(j_2)\\}$. If $S_1$ is continuous (i.e., both upper and lower hemicontinuous) at $j_2$, then $u_1^*$ is continuous and $S_1^*$ is upper hemicontinuous with nonempty and compact values. \\end{thm}\n\n\\begin{lem}\\label{lem2.2} The correspondence $\\varphi: S_1 \\to 2^{S_1}$, where $\\varphi(S_1)=S_1^*=\\mathbf b_1$, is upper hemicontinuous with non-empty and compact values, and has a closed graph. \\end{lem}\n\n\\begin{proof}\nAccording to Theorem \\ref{berge}, since $S_1$ is continuous (Lemma \\ref{lem:s1_upperhemi}), non-empty and compact (Lemma \\ref{lem1.1}), the correspondence $\\varphi$ is upper hemicontinuous with non-empty and compact values. It is apparent that the best-response set is a closed subset of the strategy set $S$ for all $s \\in S$. Therefore $b_i$ is closed-valued. A closed-valued upper hemicontinuous correspondence has a closed graph.\n\\end{proof}\n\nLemmas \\ref{lem1.1}, \\ref{lem2.3} and \\ref{lem2.2} apply to the strategy sets of all players. According to the lemmas, we can prove that our setup meets the conditions of Theorem \\ref{kakutani}, and therefore the game has an NE.\n\n\\subsection{Pareto optimality}\n\\label{appendix:paretoOptimal}\n\nValuation of the service request is a linear function of the resource needed: $v_1=g_1 \\omega_1 + k_1$, $v_2=g_2 \\omega_2+k_2$, where $g,k$ are constants and $\\omega$ is the amount of resource required. 
The allocation rule under NE is: $A^*_{v_1,v_2}=1 \\text{, if } j_1 v_1 + d_1 \\geq j_2 v_2 + d_2 \\text{, otherwise } 2$. The form of the condition comes from the best-response form in Appendix Sec.~\\ref{appendix:ne}. We also assume that both bidders have at least some access to the resources, as a form of fairness. We define the fairness constraint to be: $\\mathbb{E}[\\omega_1|_{A_{v_1,v_2}=1}] \/ \\mathbb{E}[\\omega_2|_{A_{v_1,v_2}=2}]=\\gamma \\in \\mathbb R_{>0}$.\n\n\\begin{thm}\nThe allocation $A^*_{v_1,v_2}$ maximizes the overall resource allocation $\\omega_1+\\omega_2$, subject to the fairness constraint, when the valuations are linear functions of resources. In other words, the NE of the game achieves an optimal resource allocation.\n\\end{thm}\n\n\\begin{proof}\nFind the Lagrange multiplier $\\lambda^*$ that satisfies the fairness constraint with the NE allocation $A^*_{v_1,v_2}$. Define $g,k$ as: $g_1 = (1+\\lambda^*)\/j_1 \\text{ , } k_1 =-d_1\/j_1$, and $g_2 = (1-\\gamma \\lambda^*)\/j_2 \\text{ , } k_2 =-d_2\/j_2$. Then we can rewrite the allocation: $A^*_{\\omega_1,\\omega_2} = 1 \\text{, if } \\omega_1 (1+\\lambda^*) \\geq \\omega_2(1-\\gamma \\lambda^*) \\text{, otherwise } 2$. The rest of the proof is the same as in \\cite{sun2006wireless}.\n\\end{proof}\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nMany of the concrete applications of mathematics in science and\nengineering eventually result in a problem involving linear operator equations.\nThis problem can usually be represented as a linear system of \nequations (for instance by discretizing an integral equation or because \nthe operator equation is already given on some sequence space) of\nthe form\n\\begin{equation}\n\\label{Axb}\nAx = b,\n\\end{equation}\nwhere $A$ is an infinite matrix $A = (a_{kl})_{k,l\\in \\bZ }$ and \n$b$ belongs to some Banach space of sequences. 
\nSolving linear equations with infinitely many variables is a problem\nof functional analysis, while solving equations with finitely many variables\nis one of the main themes of linear algebra. Numerical analysis bridges\nthe gap between these areas.\nA fundamental problem of numerical analysis is thus to find a\nfinite-dimensional model for~\\eqref{Axb} whose solution approximates the solution of the \noriginal infinite-dimensional problem with any desired accuracy. This problem often leads\nto delicate questions of stability and convergence. \n\nA simple and useful approach is the {\\em finite-section \nmethod}~\\cite{GF74,HRS01}. Let \n$$P_n b = ( \\dots , 0, b_{-n}, b_{-n+1}, \\dots ,\nb_{n-1}, b_n, 0, \\dots )$$ \nbe the orthogonal projection onto a $2n+1$-dimensional subspac\n. We set\n\\begin{equation}\n\\label{Axb1}\nA_n = P_n A P_n \\quad \\quad \\text{ and } \\quad \\quad b_n = P_n b \\, ,\n\\end{equation}\nand try to solve the finite system \n\\begin{equation}\n\\label{Axb2}\nA_n x_n = b_n \n\\end{equation}\nfor properly chosen $n$. The crucial question is then:\nWhat is the relation between the numerical solution $x_n$ and\nthe actual solution $x$?\n\nThis problem has been analyzed in depth for the case\nof convolution operators and Toeplitz matrices in the pioneering\nwork of Gohberg, e.g. see~\\cite{GF74}. Important generalizations and \nextensions in the Toeplitz setting can be found in~\\cite{BS83,BS90}.\nRabinovich et al.\\ derive necessary and sufficient conditions for the \nconvergence of the finite section method in terms of the so-called \nlimit operator~\\cite{RRS04}, which does not necessarily require any\nToeplitz structure. 
These conditions, while intriguing, are not\nalways easy to verify in practice.\n\nA general theory for the approximation by finite sections is based on the\npowerful methods of $C^{\\ast}$-algebras and has been developed by B\\\"ottcher,\nSilbermann, and coworkers, see for instance~\\cite{BS83,HRS01}.\nTheir framework leads to many attractive and deep results about \nthe applicability of the finite section method as well as other approximation \nmethods. William Arveson goes a step further and concludes\n that {\\em ``numerical problems involving infinite dimensional operators require \na reformulation in terms of $C^{\\ast}$-algebras''}~\\cite{Arv94}. However, $C^{\\ast}$-algebras \nhave some limitations. It was already pointed out in~\\cite{HRS01}\nthat $C^{\\ast}$-algebra techniques do not yield any information about the speed of\nconvergence of the finite section method. \nAn answer to this question \nis obviously not only of theoretical interest, but it is important\nfor real applications. For instance, we want to choose $n$ in~\\eqref{Axb2} large enough\nto get a sufficiently accurate solution, but on the other hand, $n$\nshould be small enough to bound the computational complexity, which in\ngeneral is of order $\\mathcal{O} (n^3)$.\nTheorems about the speed of convergence give a quantitative indication\nof how increasing $n$ will impact the accuracy of the solution.\nSome results about the speed of convergence for the special case of Toeplitz \nmatrices can be found in~\\cite{Str98a,Str00,RRS01,GGK03}. \nIn~\\cite{GGK03} the \nconvergence in the $\\ell^p$-norm ($1\\le p<\\infty$) is analyzed.\n\n\nIn this paper we present a thorough analysis of the convergence of \nthe finite section method for positive definite matrices as well as \nfor non-Hermitian ones. Specifically, we solve the following\nproblems. \n\n(a) We study the finite section method on weighted $\\ell\n^p$-spaces. 
If the input vector $b$ belongs to a weighted space $\ell ^p_m$, then, under suitable assumptions on the matrix\n$A$, the finite section method converges in the norm of $\ell ^p_m $.\n\n(b) We obtain quantitative estimates for the rate of convergence of\n$x_n$ to $x$ in various weighted $\ell^p$-norms.\n\n(c) We define a modified version of finite sections, the\n\emph{non-symmetric finite section method}, and show that\nthis method also converges for non-symmetric matrices. The finite\nsection method for non-symmetric matrices raises a number of\nrather difficult questions and has motivated a large part of\n\cite{HRS01}. Even for the classical case of Laurent operators\n(Toeplitz matrices) our approach considerably enlarges the class of\nmatrices to which the finite section method can be applied.\n\nAs we work with Banach spaces of sequences, the methods will be taken\nfrom the theory of $B^*$-algebras (involutive Banach algebras) instead\nof $C^*$-algebras, which are suited only to Hilbert spaces.\nThe key property of the matrices $A$ is their off-diagonal decay; we\nwill rely heavily on recent results from\nthe theory of Banach algebras of matrices. In fact, an important\ntechnical part of our analysis is to establish a finite section\nproperty of infinite-dimensional matrix algebras.\n\n\nThe paper is organized as follows. In Section~2 we recall the well-known\nproof for the convergence of the finite section method for\npositive invertible matrices and take it as a model for more general\nstatements. In Section~3 we introduce several Banach algebras of\ninfinite matrices and collect their fundamental properties. Section~4\nis devoted to the notion of inverse-closedness and spectral invariance\nin Banach algebras and their relation to the finite section method. In\nSection~5 we establish the convergence of the finite section method on\nweighted $\ell ^p$-spaces, and in Section~6 we derive quantitative\nestimates. 
In Section~7 we investigate a version of the finite section method for\nnon-symmetric matrices, and in the final Section~8 we briefly discuss\nan application to wireless communications.\n\n\n\n\n\section{Convergence of the finite section method}\n\nIt is well known that for positive definite matrices the\nfinite section method works in principle, see, e.g.,~\cite{HRS01}. The proof is\ninstructive and exhibits what is necessary for an understanding of the\nfinite section method.\n\nRecall that if $\mathcal{A} $ is an algebra, then the spectrum of an element\n$A \in \mathcal{A} $ is defined to be the set $\sigma _{\mathcal{A} } (A) = \{ \lambda\n\in \bC : A-\lambda I \text{ is not invertible} \}$. If the\nalgebra is $\mathcal{B} (\mathcal{H} )$, the bounded operators on some Hilbert space,\nwe usually omit the reference to the algebra and write simply $\sigma\n(A)$ for the spectrum. For self-adjoint operators on $\mathcal{H} $ we denote\nthe extremal values of $\sigma (A)$ by $\lambda_-=\min\sigma(A)$ and\n$\lambda_+=\max\sigma(A)$, so that $\sigma (A) \subseteq [\lambda _-,\n\lambda _+]$.\n\nWe will analyze the finite section method for multidimensional index sets\nof the form $\bZ^d$.\nTo that end we define the projection $P_n$ in dimension $d>1$.\nWe set $C_n = [-n,n]^d \cap \bZ^d $, the set of integer vectors in the cube of\nside length $2n$ centered at the origin. Then the projection $P_n$ is defined by\n$(P_ny)(k) = \chi _{[-n,n]^d}(k) y(k) = \chi _{C_n}(k) y(k)$ for $k\in\n\bZ^d $.\nThe range of $P_n$ is a subspace of $\ell ^2 (\bZ^d ) $ of dimension $(2n+1)^d$\nand will be identified with $\bC ^{(2n+1)^d}$. The finite section is\nthen defined to be $A_n=P_nAP_n$.\nBy definition, $A_n$ is a (finite rank) operator acting on\n$\ell ^2 (\bZ^d ) $, but we often interpret $A_n$ as a finite $(2n+1)^d \times (2n+1)^d$-matrix\nacting on $\bC ^{(2n+1)^d}$. 
In particular, by $A_n ^{-1} $ we\nunderstand the inverse of this finite matrix, but clearly \n$A_n $ cannot be invertible on $\\ell ^2 (\\bZ^d ) $. \n\n\nWe mention that our results could also be formulated\nwith respect to other index sets. \n\\begin{tm} \\label{cstar}\n If $A$ is a positive and (boundedly) invertible operator on $\\ell ^2 (\\bZ ^d ) $,\n then $x_n $ converges to $x$ in $\\ell ^2 (\\bZ ^d ) $. \n\\end{tm}\n\n\\begin{proof}\n\\textbf{Step 1.} Since by hypothesis, $\\sigma (A) \\subseteq [\\lambda_-,\n\\lambda _+]\\subseteq (0,\\infty )$, we have \n$$\n\\lambda _- \\|P_n b\\|_2^2 \\leq \\langle AP_n b , P_n b \\rangle = \\langle\nA_n b, b \\rangle \\leq \\lambda _+ \\|P_n b \\|_2^2 \\, .\n$$\nConsequently on the invariant subspace $P_n \\ell ^2 (\\bZ ^d ) \\simeq \\bC ^{(2n+1)^d}$ \n$$\n\\sigma (A_n ) \\subseteq [\\lambda\n _-, \\lambda _+]\n$$\nindependent of $n$. In particular, each $A_n $ is invertible on $\\bC\n^{(2n+1)^d}$ and\n\\begin{equation}\n \\label{eq:fe1}\n \\sup _{n\\in \\bN } \\|A_n ^{-1} \\|_{op} \\leq \\lambda _-^{-1} = \\|A^{-1}\n \\|_{op} \\, .\n\\end{equation}\n\n\\textbf{Step 2.} Define an extension of $A_n $ by \n\\begin{equation}\n \\label{eq:1}\n \\widetilde{A_n} = A_n + \\lambda _+ (I-P_n ) \\, .\n\\end{equation}\n Then \n$\\sigma (\\widetilde{A_n} ) \\subseteq [\\lambda _-, \\lambda _+ ]$, and \n all matrices $\\widetilde{A_n} $ are\ninvertible on $\\ell ^2 (\\bZ ^d ) $. Furthermore, $\\widetilde{A_n} ^{-1} = A_n ^{-1} + \\lambda _+ ^{-1}\n(I-P_n )$ and $\\widetilde{A_n} $ converges to $A$ in the strong operator\ntopology. \n\n\n\\textbf{Step 3.} (Lemma of Kantorovich). 
Since\n\\begin{eqnarray}\n\\|\\widetilde{A_n} ^{-1} b - A^{-1} b \\|_2 &=&\\| \\widetilde{A_n} ^{-1} (A-\\widetilde{A_n} ) A^{-1} b \\|_2\n\\notag \\\\\n&\\leq & \\sup _n \\|\\widetilde{A_n} ^{-1} \\|_{op} \\, \\| (A-\\widetilde{A_n} ) A^{-1} b \\|_2 \\, ,\n\\end{eqnarray}\nthe strong convergence $\\widetilde{A_n} \\rightharpoonup A $ implies that $\\widetilde{A_n}\n^{-1} $ converges strongly to $A^{-1} $. \n\n\\textbf{Step 4.} Recall $A_n x_n = b_n $ and $Ax=b$. Then\n\\begin{eqnarray}\n \\|x-x_n \\|_2 &=& \\|A^{-1} b - A_n ^{-1} b_n \\|_2 = \\| A^{-1} b - A_n\n ^{-1} P_n b \\|_2 \\notag \\\\\n&\\leq &\\| (A^{-1} - \\widetilde{A_n} ^{-1} ) b\\|_2 + \\| \\widetilde{A_n} ^{-1} (b - P_n b) \\|_2 = \n\\operatorname{I} + \\operatorname{II} \\, .\n\\end{eqnarray}\nThe first term goes to zero by Step 3, and the second term\nis estimated by \n$$ \\operatorname{II} \\leq \\sup _n \\|\\widetilde{A_n} ^{-1} \\|_{op} \\, \\|b-P_n b\\|_2\n\\leq \\lambda _- ^{-1} \\|b-P_n b\\|_2$$\nand also goes to zero. \n\\end{proof}\n\nThe above theorem uses the $\\ell ^2 (\\bZ ^d ) $-norm, so this is the realm of \n$C^*$-algebra techniques, cf.\\ the work of B\\\"ottcher, Silbermann, \net al.~\\cite{BS83,HRS01}.\n\nSeveral questions arise naturally in the context of the finite section\nmethod:\\\\\n1. Does the finite section method also converge in other norms, e.g.,\nin weighted $\\ell ^p$-norms? \\\\\n2. Can we derive quantitative estimates? If the finite section method works, how fast\ndoes $x_n $ converge to $x$? What conditions on the matrix $A$ and\nthe input vector $b$ are\nrequired to quantify the rate of convergence $x_n \\to x$?\\\\\n3. What conditions and modifications are required (if any) to make the\nfinite section method work for matrices that are not hermitian? 
\n\nFor an answer of the first question, we make the following\nobservation: The simple argument above extends almost word by word,\nprovided we can show the \nfollowing properties: \n\\begin{itemize}\n\\item[(1)]\n Both $A$ and $A^{-1} $ are bounded on $\\ell ^p_m $, \n\\item[(2)] \n $\\sup _n \\|\\widetilde{A_n} ^{-1} \\|_{\\ell ^p_m \\to \\ell ^p_m}$ is\nfinite, and\n\\item[(3)]\n the finite sequences are dense in $\\ell ^p_m $. \n\\end{itemize}\nThe answers to the other two questions also revolve around the above\nobservation as well as on properties of certain involutive Banach algebras,\nwhich will be introduced in the next section.\n\n\\section{A Class of Banach Algebras of Matrices}\n\nTo understand the asymptotic behavior of the finite section method on \nBanach spaces,\nwe need to resort to Banach algebra methods. We first consider some\ntypical matrix norms that express various forms of off-diagonal decay. \nOur approach is partly motivated by some forms of off-diagonal decay\nthat is observed in various applications, such as signal and image \nprocessing, digital communication, and quantum physics. A different way\nof describing off-diagonal decay of matrices (and operators) is given\nby the notion of band-dominated operators~\\cite{RRS01}.\n\n\n\\textbf{Weights.} Off-diagonal decay is quantified by means of weight\nfunctions. \n A non-negative function $v$ on $\\bZ^d $ is called an {\\em admissible weight}\nif it satisfies the following properties:\n\\begin{itemize}\n\\item[(i)] $v$ is even and normalized such\nthat $v(0) = 1$.\n\\item[(ii)] $v$ is submultiplicative, i.e., $v(k+l)\\le v(k) v(l)$ for\n all $k,l\\in \\bZ^d$.\n\\end{itemize}\nThe assumption that $v$ is even assures that the corresponding Banach\nalgebra is closed under taking the adjoint $A^*$. 
The weight $v$ is said to\nsatisfy the \emph{Gelfand-Raikov-Shilov (GRS) condition}~\cite{GRS64} if\n\begin{equation}\n\lim _{n \to \infty} v(n k)^{1\/n} = 1 \qquad\n\text{for all $k\in \bZ^d $}.\n\label{GRS}\n\end{equation}\nThis property is crucial for the inverse-closedness of Banach\nalgebras, see Theorem~\ref{inverseclosed} below.\nThe standard weight functions on $\bZ^d $ are of the form\n$$v(x) = e^{a d (x)^b} (1+d(x))^s \, ,\n$$\nwhere $d$ is a norm on $\bR^d $. Such a weight is submultiplicative when\n$a,s \geq 0$ and $0\leq b \leq 1$; $v$ satisfies the GRS-condition\nif and only if $0\leq b < 1$.\n\n\n\n\n\nConsider the following conditions on matrices.\n\n\n1. \emph{The Jaffard class} is defined by polynomial decay off the\ndiagonal. Let $\mathcal{A} _s$ be the class of matrices $A= (a_{kl}), k,l \in\n\bZ^d$, such that\n\begin{equation}\n \label{eq:m5}\n |a_{kl}| \leq C (1+|k-l|)^{-s} \quad \quad \forall k,l \in \bZ^d\n\end{equation}\nwith norm $\|A\|_{\mathcal{A} _s} = \sup _{k,l \in \bZ^d } |a_{kl} |\n(1+|k-l|)^{s}$.\n\n\n2. \emph{More general off-diagonal decay.}\nLet $v$ be an admissible weight on $\bZ^d $ that satisfies the\nfollowing additional conditions: $v^{-1} \in \ell ^1(\bZ^d )$ and $v^{-1}\n\ast v^{-1} \leq C v^{-1} $ ($v$ is called \emph{subconvolutive}). We define the\nBanach space $\mathcal{A} _v $ by the norm\n \begin{equation}\n \label{eq:s12}\n \|A\|_{\mathcal{A} _v } = \sup _{k,l \in \bZ^d} |a_{kl}| v(k-l) \, .\n \end{equation}\n\n\n\n3. \emph{Schur-type conditions.}\nLet $v$ be an admissible weight. 
The class $\\mathcal{A} ^1_v $ consists of all\nmatrices $A = (a_{kl})_{k,l \\in \n \\bZ^d}$ such that\n \\begin{equation}\n \\label{eq5}\n \\sup_{k\\in \\bZ^d} \\sum _{l\\in \\bZ^d} |a_{kl}| \\, v(k-l) < \\infty\n\\quad \\text{ and } \\quad \\sup_{l\\in \\bZ^d} \\sum _{k\\in \\bZ^d} |a_{kl}| \\,\nv(k-l) < \\infty\n \\end{equation}\nwith norm\n\\begin{equation}\n \\label{eq7}\n \\| A \\|_{\\mathcal{A} ^1_v } = \\max \\big\\{ \\sup _{k\\in \\bZ^d } \\sum _{l \\in \\bZ^d} |a_{kl}|\n v(k-l) \\, , \\, \\sup _{l\\in \\bZ^d } \\sum _{k \\in \\bZ^d} |a_{kl}|\n v(k-l)\\big\\} \\, .\n\\end{equation}\n\n\n4. \\emph{The Gohberg-Baskakov-Sj\\\"ostrand class.} For any admissible\nweight $v$ we define the class $\\mathcal{C} _v $\nas the space of\n all matrices $A = (a_{kl})_{k,l \\in \\bZ^d }$ such that the norm\n \\begin{equation}\n \\label{eq:17}\n \\|A\\|_{\\mathcal{C} _v} := \\sum _{l\\in \\bZ^d } \\sup _{k\\in \\bZ^d } |a_{k,k-l}|\\, v(l)\n \\end{equation}\nis finite. An alternative way to define the norm on $\\mathcal{C} _v$ is\n\\begin{equation}\n\\|A\\|_{\\mathcal{C} _v} = \\inf \\{ \\|\\alpha \\|_{ \\ell ^1 _v } : |a_{kl} | \\leq\n\\alpha (k-l) \\} \\, .\n\\label{eq:17a}\n\\end{equation}\n\n5. A further generalization is due to Sun~\\cite{Sun05}. Roughly speaking, Sun's\nclass amounts to an interpolation between $\\mathcal{C} _v $ and $\\mathcal{A} _v$ or\nbetween $\\mathcal{A} ^1_v $ and $\\mathcal{A} _v $. Our results also hold for Sun's class, but\nto avoid a jungle of indices, we stick to the simple classes\ndefined above and leave the reformulation of our results in Sun's case\nto the reader. \n\n\nThese Banach spaces of matrices have the following elementary properties.\n\n\\begin{lemma} \\label{bound}\n Let $v$ be an admissible weight and $\\mathcal{A} $ be one of the algebras\n $\\mathcal{A} _s$ for $s>d$, $\\mathcal{A} _v , \\mathcal{A} ^1_v , \\mathcal{C} _v \n $. 
Then $\\mathcal{A} $ has the following properties: \n\n(a) Both $\\mathcal{A} _v^1$ and $\\mathcal{C} _v$ are involutive Banach algebras (i.e.,\n$B^{\\ast}$-algebras) with the norms defined in~\\eqref{eq5} and \\eqref{eq7}. \n$\\mathcal{A} _v$ and $\\mathcal{A} _s, s>d$ can be equipped with an equivalent norm so \nthat they become involutive Banach algebras. \n\n(b) If $A \\in \\mathcal{A} $, then $A $ is bounded on $\\ell ^2 (\\bZ ^d ) $. \n\n(c) If $A\\in \\mathcal{A} $ and $|b_{kl}|\\leq |a_{kl}|$ for all $k,l \\in \\bZ^d\n$, then $B\\in \\mathcal{A} $ and $\\|B\\|_{\\mathcal{A} } \\leq \\|\\mathcal{A} \\|_{\\mathcal{A}}$. ($\\mathcal{A} $ is\na \\emph{solid} algebra). \n\\end{lemma}\n\n\\begin{proof}\nProperties (a) and (c) are easy and follow directly from the\ndefinition of the matrix norms. The statements about $\\mathcal{A}_s$ \nand $\\mathcal{A}_v$ are proven in~\\cite{GL04}.\n(b) is a consequence of Schur's test.\n\\end{proof}\n\nNext we study the spectrum of matrices belonging to one of these\nBanach algebras. \n\\begin{definition}\\label{definverse}\nWe say that $\\Cal A$ is inverse-closed in $\\mathcal{B} (\\ell ^2(\\bZ^d ))$, \nif for every $A\\in \\Cal A$ that is invertible on ${\\ell}^2(\\bz)$ we have that $A^{-1}\\in \\Cal A$.\n\\end{definition} \n\nOur next theorem states that the matrix algebras introduced above are\ninverse-closed as long as $v$ satisfies the GRS-condition. The precise formulation is slightly more complicated,\nbecause we need to be a bit pedantic about the weights. \n\n\\begin{tm}[Inverse-closedness]\n\\label{inverseclosed}\n Let $v$ be an admissible weight that satisfies the\n GRS-condition, i.e., $\\lim _{n\\to \\infty } v(nk)^{1\/n} = 1$ for all $k\\in \\bZ^d $. \n\n\n\n(a) Assume that $v^{-1} \\in \\ell ^1(\\bZ^d )$ and $v^{-1}\n\\ast v^{-1} \\leq C v^{-1}$,\nthen $\\mathcal{A} _v $ is inverse-closed in $\\mathcal{B} (\\ell ^2 (\\bZ ^d ) )$. \nIn particular $\\mathcal{A} _s$ for $s>d$ possesses this property. 
\n\n(b) If $v(k) \\geq C (1+|k|)^\\delta $ for some $\\delta >0$, then $\\mathcal{A} ^1_v\n$ is inverse-closed in $\\mathcal{B} (\\ell ^2 (\\bZ ^d ) )$. \n \n(c) $\\mathcal{C} _v$ is inverse-closed in $\\mathcal{B} (\\ell ^2 (\\bZ ^d ) )$ for arbitrary\nadmissible weights with the GRS-property. \n\\end{tm}\n\n\n\\begin{remark} \nThe inverse-closedness is the key property and lies rather\ndeep. While for $C^{\\ast}$-(sub)algebras this property is inherent, for Banach\nalgebras it is always hard to prove. Inverse-closedness for $\\mathcal{A} _s$ is\ndue to Jaffard~\\cite{Jaf90} and Baskakov~\\cite{Bas90,Bas97}, a simple proof \nis given in ~\\cite{Sun05}. For $\\mathcal{A} _v $ it was\nproved by Baskakov~\\cite{Bas97} and reproved in a different way in\n~\\cite{GL04}. The result for $\\mathcal{C} _v$ with $v\\equiv 1$ is due to\nGohberg-Kasshoek-Wordeman~\\cite{GKW89} and was rediscovered by\nSj\\\"ostrand~\\cite{Sjo95}, the case of arbitrary weights is due to\nBaskakov~\\cite{Bas97}, the algebra $\\mathcal{A} ^1_v $ was treated by one of us with\nLeinert~\\cite{GL03}. More general conditions were announced by\nSun~\\cite{Sun05}. \n\\end{remark} \n\n\nThe following properties are well-known consequences of\ninverse-closedness. \n\n\\begin{cor}[Spectral invariance] \\label{closedgraph}\nLet $\\mathcal{A} $ be one of the algebras $\\mathcal{A} _s$, $\\mathcal{A} _v $, $\\mathcal{A} ^1_v $, or $\\mathcal{C}\n_v$ and assume that $v$ satisfies the conditions of\nTheorem~\\ref{inverseclosed}. 
\nThen \\\\\n(a) $\\sigma _{\\mathcal{A} } (A) = \\sigma (A) $ (the spectrum in the algebra\n$\\mathcal{A} $ coincides with the spectrum of $A$ as an operator on $\\ell ^2 (\\bZ ^d )$) \n \n(b) If $A$ is bounded on $\\ell ^p_m $ for all $A \\in \\mathcal{A} $, then the\noperator norm satisfies\n\\begin{equation}\n\\label{lpbound}\n\\|A\\|_{\\ell ^p_m \\to \\ell ^p_m} \\leq C \\| A\n\\|_{\\mathcal{A} } \\quad \\text{for all $A \\in \\mathcal{A}$,}\n\\end{equation}\nand \n$$\n\\sigma _{\\ell ^p_m } (A) \\subseteq \\sigma (A) \n$$\n(the spectrum is almost independent of the\nspace $A$ acts on).\n\\end{cor}\n\\begin{remark}\nStatement (a) is equivalent to inverse-closedness, the norm\nestimate in (b) follows from the closed graph theorem, the inclusion\nof the spectra is an immediate consequence of\nTheorem~\\ref{inverseclosed}. \n\\end{remark}\n\n\nLet us emphasize that in our analysis of the finite section method we\nonly need that the algebra $\\mathcal{A} $ acts boundedly on $\\ell ^p_m $. In order\nto understand how the weight $m$ depends on the submultiplicative weight\nused to parametrize the off-diagonal decay, let us briefly discuss some\nsufficient conditions for the bounded action of $\\mathcal{A} $ on $\\ell ^p_m $. The\nweights $m$ satisfy slightly different conditions.\nLet $v$ be an admissible weight. The class of $v$-moderate weights is \n\\begin{equation}\n\\mathcal{M} _v = \\Big\\{m\\ge 0: \\underset{k\\in \\bZ^d}{\\sup} \n\\frac{m(k+l)}{m(k)} \\le Cv(l), \\quad \\forall \\,l \\in \\bZ^d \\Big\\}.\n\\label{moderateweights}\n\\end{equation}\nFor example, if $a, s \\in \\bR$ are arbitrary, then $m(x) = e^{a d (x)^b} (1+d(x))^s$ is $e^{|a|\n d (x)^b} (1+d (x))^{|s|}$-moderate. \n\nThe explicit examples of Banach algebras discussed above \nall act on the entire range of $\\ell ^p_m $ for\n$1\\leq p \\leq \\infty $ and a family of moderate weights associated to\n$v$. 
The following lemma provides some explicit sufficient conditions\non $m$ for $\mathcal{A} _v$, $\mathcal{A} ^1_v$, or $\mathcal{C} _v$ to act boundedly on $\ell ^p_m $.\n\n\begin{lemma}\label{lembound}\nLet $v$ be an admissible weight.\n\n(a) If $A \in \mathcal{A} ^1_v $, then $A$ is bounded simultaneously on all $\ell\n ^p_m (\bZ^d )$ for $1 \leq p \leq \infty $ and $m \in \mathcal{M} _v$.\n\n(b) If $A \in \mathcal{A} _v$ and $v_0(k) :=v(k)\/ (1+|k|)^s $ is\nsubmultiplicative for some $s>d$, then $A$ is bounded simultaneously on all $\ell\n ^p_m (\bZ^d )$ for $1 \leq p \le \infty $ and $m \in \mathcal{M} _{v_0}$.\n\n\n(c) If $A \in \mathcal{A} _v$, then $A$ is bounded on $\ell ^\infty _v (\bZ^d )$.\n\n(d) If $A\in \mathcal{C} _v$, then $A$ is bounded on all $\ell ^p _m (\bZ^d )$\nfor $1 \leq p \leq \infty $ and $m \in \mathcal{M} _v$.\n\end{lemma}\n\n\begin{proof}\nFor completeness we sketch the easy proof.\n\n(a) First, let $p=1$, $c\in \ell^1_m(\bZ^d)$ and $A\in\mathcal{A} ^1_v$. Then, since $m(k)\le C v(k-l)m(l)$, we obtain\n\begin{align*}\n\|Ac\|_{\ell^1_m} & = \sum_{k\in\bZ^d} \Big|\sum_{l\in\bZ^d} a_{kl} c_l \Big|m(k) \le\n C \sum_{k\in\bZ^d} \sum_{l\in\bZ^d} |a_{kl}|\, |c_l| v(k-l)m(l) \\ \n& \le C \Big(\sup_{l\in\bZ^d}\sum_{k\in\bZ^d} |a_{kl}| v(k-l)\Big) \sum_{l\in\bZ^d} |c_l|m(l) \le C\|A\|_{\mathcal{A} ^1_v}\n\|c\|_{\ell^{1}_{m}}.\n\end{align*}\nNext, let $p=\infty$ and $c\in\ell^{\infty}_{m}$. Then, as before,\n\begin{align*}\n\|Ac\|_{\ell^\infty_m} & = \sup_{k\in\bZ^d} \Big|\sum_{l\in\bZ^d} a_{kl} c_l \Big|m(k) \le\n C \sup_{k\in\bZ^d} \sum_{l\in\bZ^d} |a_{kl}|\, |c_l| v(k-l)m(l) \\ \n& \le C\Big(\sup_{l\in\bZ^d} |c_l|m(l)\Big)\sup_{k\in\bZ^d}\sum_{l\in\bZ^d} |a_{kl}| v(k-l) \le\nC\|A\|_{\mathcal{A} ^1_v}\|c\|_{\ell^{\infty}_{m}}.\n\end{align*}\nThe boundedness on $\ell^p_m(\bZ^d)$ for $1